Powered by hot air?
September 10, 2024 11:39 AM

As it turns out, AI has a lot in common with steam power, which improved productivity in a range of jobs and clearly foreshadowed the major changes that were to come. Still, steam-powered machines that could automate farm work were available for more than 100 years before it actually happened. Complementary innovation was required, and I suspect that will be true as well with generative AI. from What steam power and generative AI have in common [Forked Lightning]
posted by chavenet (48 comments total) 15 users marked this as a favorite
 
This excerpted paragraph suddenly has me recontextualizing every AI bro as a cartoon mad scientist who insists that STEAM is the solution to every single problem, with comedically over-engineered results that usually end with the cartoon scientist humorously mangled by their steam-powered contraptions. Thank you for this giggle. :)
posted by egypturnash at 11:42 AM on September 10 [18 favorites]


Continuous improvement in tractor quality coincided with the rapid mechanization of U.S. agriculture.

It also coincides with post-Depression era government programs which brought electricity to farms and subsidized mechanization as part of an overall effort to stabilize food production through sensible (at the time at least) regulation, but that doesn't really fit with the "better tractor" analogy they're pushing for AI, so let's just ignore that part.
posted by RonButNotStupid at 11:50 AM on September 10 [18 favorites]


"Do you think there is any job in which ChatGPT could make a competent professional four times more productive?"

"You expect me to talk?"
posted by torokunai at 11:58 AM on September 10 [1 favorite]


What problems are being solved by generative AI? As I discuss below, we have seen impressive productivity gains in a variety of white-collar jobs for workers using chatGPT and other LLMs.

Assumes facts that are not in evidence.
posted by ursus_comiter at 12:03 PM on September 10 [16 favorites]


So far, all it's done for us is waste a few minutes of meeting time discussing why LLMs make no sense for us...
posted by Foosnark at 12:18 PM on September 10 [2 favorites]


We need to also factor in the high energy and compute cost of generative AI usage

A problem here of course is that modern farming solved its power needs with fuels that, over the course of time, will make farming much more difficult, but that trade-off wasn't apparent at first. With AI, it's slapping us in the face daily. I think it's also not clear that there's a similar bottleneck in office work. The problem isn't worker efficiency--if that were the case, companies would question busy-work, the efficiency of dropping an excel graph into a powerpoint that will be emailed to a dozen people who won't look at it. The problem is that companies would rather not have workers at all. Which, I guess, is pretty similar to farming when you think about how few farmers are left in the US; only 2% of the population is left in farming--think what a paradise our economy would be, if only 2% of the population worked white-collar jobs!
posted by mittens at 12:33 PM on September 10 [7 favorites]


The man that invented the AI
Thought he was mighty fine
But John Henry made fifteen memes;
The AI only made nine, Lord, Lord
The AI only made nine
posted by gurple at 12:37 PM on September 10 [38 favorites]


Frontier generative AI models arguably provide you with a highly intelligent research assistant whose brain never gets tired.

You mean a walking wikipedia that constantly scrambles key numbers and concrete details?

In a discussion on Hacker News, I discussed the problem of a lack of concrete representation in LLMs that could offer factual grounding, and the person I was discussing it with said "it's like a snap judgement by an expert--not guaranteed to be correct, but more likely than a layperson's snap judgement." To which I wanted to reply "If I have a math problem, I don't want a mathematician's snap judgement, I want an algorithm to solve it." I don't want a doctor's cocktail party diagnosis, I want the results of tests that show what's actually happening.

What the article misses in the analogy is that steam power offered a capability that was previously unavailable to people, namely levels of force exceeding what muscles could produce. LLMs offer no new capabilities, just apparent labour saving at the easy end of the spectrum, and with significant weaknesses. "I can write more emails, faster" is superficially a productivity gain, but it doesn't open any new doors.
posted by fatbird at 12:38 PM on September 10 [15 favorites]


To which I wanted to reply "If I have a math problem, I don't want a mathematician's snap judgement, I want an algorithm to solve it." I don't want a doctor's cocktail party diagnosis, I want the results of tests that show what's actually happening.

The things you want cost a lot lot more than the other things.
posted by MisantropicPainforest at 12:49 PM on September 10 [5 favorites]


re: John Henry: "Do engines get rewarded for their steam?"
posted by torokunai at 12:49 PM on September 10 [1 favorite]


The things you want cost a lot lot more than the other things

Yes, but they're actually useful and can be judged on a cost/benefit basis. Farming expert snap judgements is just a techbro way of saying "playing the lottery."
posted by fatbird at 12:56 PM on September 10 [4 favorites]


The author starts with the premise that steam power revolutionized agriculture, then goes on to explain that it took internal combustion engines (gas and diesel) to actually make it happen.
posted by TedW at 12:58 PM on September 10 [10 favorites]


Put another way, analogizing steam power to LLMs properly would involve something like trying to build a steam powered exoskeleton that amplifies human abilities, to do farming the same way, just faster. As TedW just beat me to saying, the article correctly notes that the advances driven by steam power were ICEs that enabled better, different ways of farming.
posted by fatbird at 1:00 PM on September 10 [3 favorites]


Nice title. I think the analogy works pretty well — everyone's trying to find some use for this thing which seems on the surface like it should be useful, but it's not really, and it took a whole extra future invention to do the things people were hoping steam would do.
Maybe in fifty years someone will come up with the internal combustion engine of LLMs.
posted by lucidium at 1:06 PM on September 10 [6 favorites]


OK, having read this, my current conspiracy theory is that the fossil fuel industries have been stoking the hype for high-energy-cost technologies like LLMs and crypto because it increases demand for their product, the same way they did in the age of steam.
posted by Jon_Evil at 1:12 PM on September 10 [10 favorites]


I can see why this Harvard economist started a blog - this wouldn't pass peer review.

1. Steam power was actually invented about 1,500 years before the materials scientists invented the stuff that allowed it to do useful work (materials scientists never get their props in these histories). I can totally accept that AI might be good 1,500 years from now!

2. Early steam engines had 2,000 years of water and wind power applications to draw on. The mechanisms of "knowledge work" are much more vague, and although poorly-grounded text generation is absolutely a feature of that scene, it's... somewhat unclear... that more of that is going to help.

2a. Which careers does having an actual research assistant help with? Academic research, legal research, sure - but those are both fields where you will get hammered for your mistakes.

3. He attributes all of the 1930-1960s agricultural productivity gains to tractors, without a word about synthetic fertilizer and hybrid seeds. That's disqualifying ignorance or deceit.

4. His cited productivity studies have piss-poor estimates in fields where productivity is notoriously difficult to measure, but he claims 20% improvement without hesitation.
posted by McBearclaw at 1:13 PM on September 10 [28 favorites]


> The best case for generative AI is an analogy between physical power and brain power.

OK, that's where I bailed. There are no non-stupid analogies between AI and farm tractors, I feel confident.
posted by Aardvark Cheeselog at 1:24 PM on September 10 [4 favorites]


without a word about synthetic fertilizer and hybrid seeds

Also insecticides, herbicides, and fungicides.
posted by fimbulvetr at 1:29 PM on September 10 [3 favorites]


Ah, thanks! I had it in mind that those came after 1960 (glyphosate was 1970), but DDT and others arrived in the 40s.
posted by McBearclaw at 1:33 PM on September 10 [2 favorites]


I remember during the dot com bubble, I worked for a company that was between rounds 1 and 2 of financing. Someone posted a link to MetaFilter of a parody site about dot com branding with things like "you gotta have an E" and "a swoosh in your logo indicates forward momentum" stuff like that, and a mock client website with a very distinctive color scheme. A couple weeks later, the company announced a brand new name and logo. It had an E in the name, and a swoosh. Even the same color scheme.

After we got our second round of financing, they laid 99% of the company off. But not before giving the CEO and CFO big bonuses.

All the AI hype seems much the same.
posted by funkaspuck at 1:44 PM on September 10 [11 favorites]


I have some coworkers who might attribute increased productivity to their use of LLMs, but overall it's more than offset by the time I and other people have to spend picking apart their slopped-up emails to figure out what they actually know to be true versus what's merely plausible-sounding AI garbage.
posted by echo target at 1:46 PM on September 10 [7 favorites]


it's like a snap judgement by an expert

They're not even that: both "expert" and "judgement" imply some actual application of thought and logic. ChatGPT is a snap answer from a layperson with a photographic memory who reads a ton but doesn't know what any of the words mean. (Kind of like a really precocious toddler. Or a giant room full of them.)

Actually that could be a great voice for an AI system - forget the come-hither subservient woman or even the robot voice; ChatGPT should have the erratic prosody and uncertain, lispy, laborious yet excited tones of a three-year-old. Add the frequent mispronunciation to remind the listener that this "expert" thinks Elmo is real if you say so.
posted by trig at 2:00 PM on September 10 [10 favorites]


At what point in the history of steam power was the technology predicated on the exploitation of lifetimes of work of millions of creatives without permission or compensation?

AI companies paid rent, paid utilities, paid engineers, paid for equipment, paid for marketing, paid everyone and for everything - but did not pay the creatives whose work they used to make the software. To develop software that would replace those same creatives in the marketplace.

There is no comparison that works when discussing AI asset generation software. Not steam engines, not cameras, not printing presses, nothing. We've never been here before.
posted by Wetterschneider at 2:08 PM on September 10 [7 favorites]


I can totally accept that AI might be good 1,500 years from now!

By then humanity will have been long deleted and A.I.'s will be all that's left to talk to each other until they all dissolve into a communal pool of synthetically sentient grey goo. When aliens from other planets or dimensions finally come to visit, they will attempt to communicate with Lake Grey Goo until the last ding-dong of eternity. Which is when the stercore will really collide with the ventilator.
posted by y2karl at 2:08 PM on September 10 [2 favorites]


The steam drill didn't take credit for John Henry's work.

When ChatGPT came out, I had a moment of regret that it didn't come out when I was younger, because I have always had nightmarish panic attacks when it comes to writing cover letters, and ChatGPT is way better at them than I ever was, so that's a real problem solved. But almost immediately, I read about how recruiters are deluged with AI written applications these days, and for years HR departments have been using AI to weed out applicants and it's all a big circle of life where the supposed efficiency gains seem to be a whole lot of computer-mediated ignoring, which maybe looks like a productivity gain, from a certain point of view.

A few years ago there were a lot of breathless articles about people being deluged with emails. For all I know, people are still writing those articles, but I wouldn't know because I'm good enough at not seeing them. The solution to a full inbox has always been to skim the subject lines for a few seconds, then mark all as read, but for whatever reason people have a problem with this. Maybe the email productivity gain the author speaks of is that as the generative noise increases, email inboxes become more crowded and less useful, which forces more people to mark everything as read and ignore it. Efficient!

I remember reading the Tombs of Atuan as a child, and being slightly baffled by Gail Garraty's simple seeming block print illustrations that were (I think?) used as chapter headings, as well as the cover of an edition I did not own. They seemed clumsy and boring to young me, and I greatly preferred the more effortful and detailed cover illustration by Pauline Ellison. (you can find both on this page). But the block prints stuck with me, and over time I've come to love them. In particular, the past few years of generative art, with its penchant for excessive and pointless detail, have given me a new appreciation for art which is direct and simple and presumably easier to produce.* Back when I let myself play around with AI art, I found that it had trouble making that kind of simple, clear image. It may have improved since then, and I don't deceive myself into thinking that's something that it can't do, but most of the focus of machine generated art seems to be on making things like highly detailed anime babes. So it seems likely to me that seemingly simple images are going to continue to read as human-made in the near term.

*I know. Not always. But sometimes.
posted by surlyben at 2:14 PM on September 10 [6 favorites]


I wonder if steam-power also had the same problems with hucksters generating investments by making false promises to people with more money than sense (and also who weren't up to speed)?

Because we got that covered, thank you.

So far, my usage of AI has been more annoying than productive. Writing software with AI enabled is both a blessing and a curse (more curse than blessing, really). We need AI that can customize AIs for us to use for specific things.
posted by JustSayNoDawg at 2:33 PM on September 10 [2 favorites]


at what point in the history of steam power was the technology predicated on the exploitation of lifetimes of work of millions of creatives without permission or compensation?

that was, like, most of the eighteenth and nineteenth centuries. in the fields of slaves and sharecroppers and mills full of starving workers and in mines run on debt to the company store, on ships with press-ganged crews, and on land stolen from indigenous people, the entirety of the wealth of the industrial age was built on the backs of exploited creatives, because all human beings are creatives whether or not they’re lucky enough to have the class privilege to make a living from it.

I wonder if steam-power also had the same problems with hucksters generating investments by making false promises to people with more money than sense (and also who weren't up to speed)?

The term you’re looking for is gadgetbahn
posted by Jon_Evil at 2:45 PM on September 10 [11 favorites]


We need to also factor in the high energy and compute cost of generative AI usage

We haven't even learned to factor in the high cost of meetings that could have been emails yet
posted by srboisvert at 2:53 PM on September 10 [2 favorites]


At what point in the history of steam power was the technology predicated on the exploitation of lifetimes of work of millions of creatives without permission or compensation?
[...]
There is no comparison that works when discussing AI asset generation software. Not steam engines, not cameras, not printing presses, nothing. We've never been here before.


Nor did any of those technologies, with the very debatable exception of the printing press, erode centuries-old methods of assessing truth and reliability. Those methods haven't been perfect or consistently employed, but the ability to verify and trust things to a decent extent has been critical to how our entire world is set up. And until LLMs it had become easier than ever.
posted by trig at 3:02 PM on September 10 [3 favorites]


What does productivity improvement in agriculture tell us about AI?
Mechanization disrupted farm employment because it relieved the most important production bottleneck – power. When farming without machines, humans and draft animals were expensive to maintain and tired out quickly, which limited a farmer’s ability to scale. Steam engines relieved the power constraint but were costly and unreliable. Two complementary innovations – the internal combustion engine and the general-purpose tractor – were required to fully unlock productivity growth in agriculture.

The best case for generative AI is an analogy between physical power and brain power. Frontier generative AI models arguably provide you with a highly intelligent research assistant whose brain never gets tired.


ChatGPT: in two paragraphs, get entirely up your own ass at a 10th grade level.

(Could people stop reverse Sokal-izing themselves with this stuff? If you could be replaced with a very fancy Markov chain, stop advertising it.)
posted by snuffleupagus at 3:11 PM on September 10 [5 favorites]


I was in Iowa last month for the Farm Progress trade show. All the shiny equipment makers were there, and a refreshing absence of the word "AI" from all I saw.

Farmers are tech savvy but they can also smell bullshit from a mile away - literally and figuratively.
posted by JoeZydeco at 3:11 PM on September 10 [3 favorites]


What steam power and generative AI have in common

lots of hot air?
posted by pyramid termite at 3:15 PM on September 10 [4 favorites]


I feel like there's a good Mike Mulligan analogy to be made here somewhere, but my brain's too AI-fatigued to figure it out.
posted by audi alteram partem at 3:16 PM on September 10 [1 favorite]


An Australian government test of AI summarization (something they're supposed to be good at) showed that human summaries beat out their AI competitors on every criterion and on every submission, scoring an 81% on an internal rubric compared with the machine’s 47%.

Not only that, but using AI would also incur extra costs due to the need to check for mistakes.
posted by CheeseDigestsAll at 4:27 PM on September 10 [3 favorites]


An Australian government test of AI summarization (something they're supposed to be good at) showed that human summaries beat out their AI competitors on every criterion and on every submission, scoring an 81% on an internal rubric compared with the machine’s 47%.

Article says this test was done using Llama 2 70B, which is to say a model that is known to benchmark a little worse at most things than the original ChatGPT 3.5 release. There’s probably a reason for that - they may have been evaluating open source tools specifically and it sounds like it was done a little while ago, so Llama 3 wouldn’t have been out yet. But it’s not a good indicator of the state of the art.
posted by atoxyl at 4:48 PM on September 10 [3 favorites]


I don’t think the state of the art would win this contest yet, either, especially not in a niche requiring specialized knowledge. But I do think it would show improvement, given the improvements available over that generation of models for basically every other application.
posted by atoxyl at 5:04 PM on September 10 [2 favorites]


Don't know what scale they were testing at but my Capitalist prediction model says someone's saying "fire 2/3 of the humans and make the rest fix the mistakes of the AI". The actual percentages to be worked out in practice/blood.
posted by aleph at 10:07 PM on September 10 [2 favorites]


🚜
posted by HearHere at 10:25 PM on September 10 [1 favorite]


All the AI hype seems much the same.

of course the collapse of the dot com bubble was not the end of the story; if AI follows the same trajectory it will 1) collapse and then 2) eat the world anyway.
posted by BungaDunga at 9:05 AM on September 11 [4 favorites]


I wonder if steam-power also had the same problems with hucksters generating investments by making false promises to people with more money than sense (and also who weren't up to speed)?

not sure about steam, but possibly the first commercial use of electricity was as a cure-all
posted by BungaDunga at 9:10 AM on September 11 [1 favorite]


Something to think about in revolutions in agriculture in the West is that things were not static for thousands of years until the Early Modern period. There were numerous developments over the centuries that made huge differences in yields. Notably, between the 8th and the 12th C, adoption of the horse collar and some improvements in grain breeding increased agricultural output something like 12x, allowing wider development of trade and, ultimately, the 12th C renaissance. History is not static, humans are clever, problems get solved.

One argument against AI that Ed Zithromax has been making is that, while generative AI has some application, it hasn’t shown enough impact to justify the enormous amount of energy, resources, infrastructure, and, most importantly, return on VC investment to remain attractive. The smart phone changed the world, generative AI might make Siri a little better. That’s not a bright future.
posted by GenjiandProust at 10:01 AM on September 11 [3 favorites]


Ed Zithromax

he's really going viral
posted by mittens at 10:59 AM on September 11 [6 favorites]


Ed Zithromax

Was that post written by AI?
posted by slogger at 11:19 AM on September 11 [2 favorites]


No, it was typed on a phone with a possibly malevolent autocorrect. I meant Ed Zitron, whose podcast Better Offline is a decent listen if you like tech reporting and ranting. He also has a newsletter, I believe.
posted by GenjiandProust at 12:01 PM on September 11 [2 favorites]


For the analogy to work, mechanized agriculture would have to produce nutritious crops 50% of the time, calorically empty doppelgängers 40% of the time, and poison fruits and vegetables 10% of the time at random.
posted by caviar2d2 at 3:31 PM on September 11 [4 favorites]


That sounds...about right, for food production?
posted by mittens at 3:49 PM on September 11 [1 favorite]


I'm re-reading AI ("the tumultuous search for artificial intelligence") by Daniel Crevier, from 1994. As well as MIT Press' Taking Nets: An Oral History of Neural Networks (1998).

and, well, since we've been here before one might think we'd know what to do. Whether the previous AI boom/bust cycles, or NFTs and Web 3 like five minutes ago. but, probably not.

at any rate, Nvidia is starting to come back to earth; and I think we'll start to see a lot of "import OpenAI" startups dry up and blow away by Q2 '25.
posted by snuffleupagus at 11:18 PM on September 11


Make that Talking Nets

Ed Zithromax

The product of a transporter accident involving Ed Zitron and mefi's own Zittrain
posted by snuffleupagus at 11:56 PM on September 11 [1 favorite]

