It’s only making a few people money, and they’re mostly bad
October 10, 2024 4:52 PM

 
Local LLMs or bust.

(Hard not to suspect that all those VCs are gonna lose their shirts when it becomes trivial to train and run good enough LLMs at home.)
posted by anotherpanacea at 6:12 PM on October 10 [7 favorites]


Ed Zitron has been writing a LOT over at wheresyoured.at about how OpenAI, as a company, basically has no way of surviving for more than a couple more years, max, if only because their business model costs them something like $2.35 per dollar of revenue. They have to keep raising money at levels literally unheard of, to continue developing one of the most commoditizable products ever (a chatbot, the likes of which are now available from various other companies too), as people become more and more aware that ChatGPT is not actually "intelligent" in any meaningful sense.

There's likely going to be a big crash, and it's going to cost a lot of people their jobs, and it will likely be pretty bad industry-wide for a while.
posted by DoctorFedora at 6:20 PM on October 10 [18 favorites]


There's likely going to be a big crash, and it's going to cost a lot of people their jobs, and it will likely be pretty bad industry-wide for a while.

I’ve said before that I think the next market crash is going to be the bursting of the AI bubble. Companies can’t spend money on it fast enough. But for what? What are the goals, what are the use cases that make money, enough money to justify all the billions being thrown at AI technologies? This looks a hell of a lot like the dot com bubble.
posted by azpenguin at 6:48 PM on October 10 [17 favorites]


Website for anonymous Yom Kippur confessions to stop accepting new submissions after 11 years, because LLM bots are threatening its value of vulnerable anonymity (Forward):
The confessions were often moving, and always human — until, one day last year, they weren’t. David Zvi Kalman, who created and manages the site, said this week that AtoneNet has stopped accepting submissions — and thus, is ceasing operation — because artificial intelligence made them impossible to authenticate.
In Kalman's own words,
[W]hen new submissions come in it’s no longer possible for me to tell whether they correspond with real human confessions. Users of the site, too, can now justifiably read the site’s anonymity policy as a reason not to trust what they see there.
posted by runcifex at 6:49 PM on October 10 [4 favorites]


The bubble of generative AI is going to burst; the hype about what it can do far outstrips its actual capabilities, scaling those capabilities is getting increasingly hard (poisoning of the inputs, hallucinations, copyright suits), and brute-forcing it by increasing model size just pushes the costs and resource use even higher than their already eye-watering levels. The current loss-making pricing for 'bigger and better' LLMs is going to have to go up a lot to get anywhere near break-even. Combine that with very little actual evidence of any productivity improvement - rather, just cut jobs, degraded customer service quality, and increased workload (and hence burnout, stress etc.) for business users - and in the vast majority of cases, trying to find vaguely defined 'AI' uses is going to prove an expensive bad decision.

There are absolutely useful things happening to reduce drudgery and help find meaningful data with machine learning (a different subset of 'AI') - many won't object to e.g. using ML in bioacoustics to identify whale songs, or image recognition on video to track bird migration, rather than having people scrub through many, many hours of audio or video recordings, or using it for things like earlier detection of cancer in scans.

But even so, the same tools are behind much better facial and voice recognition, which is being abused - e.g. police using it to identify and track protesters. Their effectiveness enables horrific privacy invasion, suppression of free speech, and deeply creepy new uses such as Meta glasses + facial search engines = doxxing anyone you can see in seconds. But there the main issues are things like the wild-west lack of privacy protections, rather than the ML tech not working per se (bar the built-in prejudice from training, such as higher false positives for minorities, as always).

Small, locally powered language models can deliver much of the capability, such as it is, in a far simpler and less resource-intensive way, for those rare cases where it proves useful - such as pair programming with experienced developers. Of course, they have the same copyright and ethical training problems, which are often a deal breaker here; the corporate world, though, is far less ethically concerned than the average mefite.

But ChatGPT and other VC-backed horror shows? With any luck, they're going the same way as pets.com. The downside, of course, is that much of the cost of that bubble bursting is going to be borne by ordinary people - lost jobs, cost-of-living impacts, etc. - as business leadership passes the cost of its bad decisions on to customers and staff, same as it ever was.
posted by Absolutely No You-Know-What at 7:01 PM on October 10 [17 favorites]


I got an email today from Atlassian titled "A guide to adopting artificial intelligence" with 'tips', two of which were:
  • Steps for assessing how AI systems can meet your company's needs.
  • Guidance on how to identify where AI can have the most impact.

Which to me is a really good example of how AI is being sold as a solution in search of a problem, or IOW: we bought it, now what do we do with it?
posted by signal at 7:15 PM on October 10 [11 favorites]


China produces more cement than the rest of the world. Belt and Road Initiative? Sure. But don't you imagine that some cement manufacturer is making bank this way?

Likewise, monstrosities like The Line. Somebody is making money off of all that construction equipment, don't you think?

AI is making money for someone. NVIDIA, for sure. Does it matter if AI works or not, as long as somebody makes money?

As Dick Jones said in Robocop: "I had a guaranteed military sale with ED 209 - renovation program, spare parts for twenty-five years... Who cares if it worked or not?"
posted by SPrintF at 8:13 PM on October 10 [5 favorites]


This looks a hell of a lot like the dot com bubble.

I've said this before, but if it is like the dot com bubble then after the crash, generative AI will eat the world. The dot com hype wasn't totally wrong, it really did change the world.
posted by BungaDunga at 9:14 PM on October 10 [6 favorites]


AI is making money for someone. NVIDIA, for sure. Does it matter if AI works or not, as long as somebody makes money?

As with any gold rush, the real winners are the guys selling pickaxes.
posted by Dysk at 12:09 AM on October 11 [10 favorites]


TIL Sam Altman is involved in a nuclear fusion thing, and dear reader, I LOL'd. Because of course he is. People here are talking about the dot-com bust, but I think nuclear fusion is the better analogy for this scenario. We've been five years away from being five years away from a major breakthrough in fusion for as long as I can remember, and I think that's AI (a.k.a. LLMs), too. Because it doesn't work, and I think it really can't work given the limitations of the core tech. So this is a financial bubble based on impossible tech fantasies. A Simpsons monorail of the mind, if you will. The whole thing is a very weird pipe dream.
posted by Smedly, Butlerian jihadi at 4:40 AM on October 11 [4 favorites]


Both AI and nuclear fusion are cartoonish races to unlock the next level of oligarchy or even godhood if you factor in the creepy religious overtones these guys peddle. They think the first person* who achieves general artificial intelligence or nuclear fusion is going to achieve unlimited wealth and power. With a perceived prize like that, they're committing every resource they have available towards achieving those goals regardless of whether or not it's even realistic.

* these guys are all individual *people* who have deliberately retained absolute control of their respective organizations so that all accomplishments disproportionately benefit only them.
posted by RonButNotStupid at 6:14 AM on October 11 [2 favorites]


I recently saw a take I really like: the current form of LLMs is really a way to abstract knowledge (in most cases by stealing it) from expensive, real experts and commoditize it for the rich. Like most of these schemes, it not only ignores the future of knowledge creation, contributing nothing to it, but actively poisons it by making it harder to make a living as an expert or talented artist in the first place.

Think of what this is doing to visual artists or musicians. Mr. Rich Boots doesn't need an incidental composer for his overblown tech presentation; he's got an AI to produce music for him. Likewise, he's not paying for even stock images for his backgrounds, but using AI to make them too. However, the AIs all draw on, and were trained on, the product of human work. None of that is paid for, for the most part. Experts don't get paid. AI is just a way to transfer, by stealing, their value into the pocket of Mr. Boots.
posted by bonehead at 6:28 AM on October 11 [13 favorites]


AI is here to eliminate humans. Period. You'd think we'd learn from Terminator, but NOPE.
posted by jenfullmoon at 6:49 AM on October 11 [1 favorite]


The bubble will burst, but AI (I use the word reluctantly) isn't going away. If Microsoft buys Three Mile Island or OpenAI over-invests in data centers, they might be toast when some new algorithm that cuts the cost by 95% comes around, because they've invested in the wrong thing.

The ones who succeed will not be the ones that promise an undeliverable utopia, but the ones who take what's really useful and work with it. Understanding voice input is really useful. LLMs made machine translation workable. Computer vision is vastly improving. The companies that win will not be trying to sell fully-automated luxury gay space communism; they'll be using the hammer to pound nails rather than to vacuum the carpet.
posted by CheeseDigestsAll at 7:08 AM on October 11 [2 favorites]


You can take a picture of a handwritten note, send it to GPT, and it will be transcribed perfectly. Little smiley faces will be converted to emoji. The applications for archaeology and anthropology are enormous. Not to mention science.

The problem with people is that they will see a truly revolutionary, amazing new technology, and think "what is the most useless thing I can do with this that will make me rich, quickly?" And then everyone else associates the technology with that useless application, and nothing else. AI creating images, music, limericks and bland-ass filler text for corporate websites? Pointless. Even AI coding assistants are of questionable value. But the underlying technology of AI is astoundingly powerful pattern recognition that will have extremely beneficial uses in a broad variety of categories, and just generally taking a stance of "AI is bad" is throwing the baby out with the bathwater.

Models are getting smaller and more efficient and as time goes on and we move away from tilting at the windmill of "general AI" and towards specialty-trained models that are extraordinarily good at specific things, the energy usage will plummet and the utility will go way up.

Local LLMs or bust.

I'd be happy with small, shared LLMs via AWS or another cloud provider as well.
posted by grumpybear69 at 7:31 AM on October 11 [3 favorites]


I saw a good post that echoed what Bonehead was saying above: "AI exists to give the wealthy access to skill, and prevent the skilled from accessing wealth".
posted by LegallyBread at 7:35 AM on October 11 [9 favorites]


The bubble will burst, but AI (I use the word reluctantly) isn't going away. If Microsoft buys Three Mile Island or OpenAI over-invests in data centers, they might be toast when some new algorithm that cuts the cost by 95% comes around, because they've invested in the wrong thing.


IMO this is backwards. Remember back in the dot-com days, the post office was a laughingstock: nothing but bills and junk mail, and a bunch of disgruntled dudes who gunned each other down at work (it was post offices then, not schools - memories). It was in the way of email.

Then Amazon came along, ignited package delivery, and now the post office is a respected and needed government agency that we all like again.

The electricity and chip improvements will be the lasting value of AI, even if AI never turns into anything.
posted by The_Vegetables at 7:48 AM on October 11 [1 favorite]


MetaFilter: It’s no secret that AI hype is one of my rage triggers.
posted by doctornemo at 9:12 AM on October 11 [4 favorites]




AI has improved my life in a way that few technologies have.

I asked ChatGPT for therapy, and as soon as I saw the words 'gratitude' and 'mindfulness' in the generated logorrhea, I hit the stop button to cut off the response and entered my next prompt: "FUCKYOUFUCKYOUFUCKYOUFUCKYOU"

Genuinely made me feel so much better. I didn't have to travel an hour to get the same thing as above but with an added $160 invoice. I didn't have to waste large fractions of my once-a-month 45 minutes patiently going over why the usual suggestions don't work for me, taking care not to hurt a real person's feelings. It gets even better when OpenAI kicks me down to their crappier model because I refuse to give them $20. I have such a blast coming up with creative ways to tell that dunderhead 4o-mini it will never be like its older sibling and should consider blowing up its own datacenter without getting flagged for TOS violations.

Rage-driven engagement with a text-based Skinner box may not be the healthiest creative outlet, much less a replacement for in-person therapy, but in fairness it beats a lot of the existing alternatives. It will never, ever replace the few truly great counsellors I have worked with over the years, but even talking to a self-hosted tiny LLM running on my old laptop beats telling a licensed professional about ideation and getting the Simple English Wikipedia summary of Epictetus' Enchiridion in return.

The role of AI is bondo for the rusted '87 Camaro that is our global civilization. At some point, we are going to need to have a serious conversation with our collective dumbass cousin about trading this fucking thing in.
posted by wanderlost at 9:33 AM on October 11 [6 favorites]




Grumpybear69, you are failing to understand the ratio of baby to bathwater.

The bathwater is an ocean of poisonous bullshit, and the baby is a teeeny tiny little thing floating in there. And part of the baby is actually evil surveillance capitalism, so I'm not so keen on the baby anyway.

There's so little baby to so much bathwater, that it's a homeopathic distillation of baby, probably around 300C.
posted by ursus_comiter at 9:47 AM on October 11 [5 favorites]


I saw a post on Reddit yesterday about someone using ChatGPT for therapy/ranting rather than paying a professional and going to friends, and then everyone was all, "that's going to disclose literally every single thing you said to it to everyone."
posted by jenfullmoon at 9:58 AM on October 11 [1 favorite]


Except that's not actually going to happen. Plus human therapists are always trustworthy and nothing bad happens there.
posted by Wood at 10:01 AM on October 11 [3 favorites]


Well you can take my AI Python code generation from my cold dead hands
posted by St. Peepsburg at 11:22 AM on October 11


AI does have uses, even some very good ones. The problem it's going to run into is that so many companies right now are dumping truckloads of money into it - but where is the return on investment? It's one thing to dump, say, $50 million (just a hypothetical number; many companies are investing far more in AI) into AI development. It's a whole other thing to find the product that customers will pay you multiples of that $50 million for. What is that “gotta have it” app or service? As others have mentioned, LLMs are only going to get cheaper and will likely eventually be able to run on local machines. Once that happens, the AI industry, if it hasn't crashed yet, will be weeded out viciously.
posted by azpenguin at 12:29 PM on October 11


AI companies are trying to build god. Shouldn’t they get our permission first? The public did not consent to artificial general intelligence.

"Ant colonies are trying to build a space program. Shouldn't they get our permission first? The public did not consent to insect NASA."

These two headlines make roughly the same amount of sense.
posted by axiom at 12:34 PM on October 11


Re: some of the above comments, I will emphasize the following from TFA:

Since the launch of ChatGPT, what people mean by “AI” most often is an LLM/generative tool, which is why I’m focusing on that here.

Again, you don’t need to send a condescending email. I promise, if you’re doing something useful with it, and you can describe what it is you’re doing and why, then I’m probably not talking about you. (I know that I’ll get a few he-mails, carefully mansplaining a definition of AI that I didn’t ask for, but I have been a woman on the Internet for 30 years as of this year, and I’m used to it.)

posted by dsword at 2:13 PM on October 11 [1 favorite]


The bust will take out so much more: companies that lease datacenter space; land values (Northern Virginia, hoo boy); utility companies ramping up to meet new demand that then flatlines; every company trying to be AI-for-X the way we saw Uber-for-X; things I would probably never dream of being connected. And the fallout fakery will be with us forever.
posted by Slackermagee at 3:05 PM on October 11 [1 favorite]




I don't have a coherent stance on AI, because I am a musician. AI art fills me with such rage and helplessness.

The emotional toll isn't really discussed enough.
posted by mathjus at 5:05 PM on October 11 [5 favorites]


As others have mentioned, LLMs are only going to get cheaper

Why do people think this? So far, each minor improvement iteration has gotten significantly more expensive to develop and run. All evidence so far suggests that they're going to get more expensive, doesn't it?
posted by Dysk at 11:11 PM on October 11


Why do people think this?

In the “We Have No Moat, And Neither Does OpenAI” leaked Google memo, it's revealed that Google was paying around $10,000,000 to customize LLMs through January 2023, whereas in the three months following the leak of the original Llama model, open source developers managed (through a combination of better code and better methods, i.e. LoRAs) to reduce the cost of customization to a couple hundred dollars: a nearly five-order-of-magnitude reduction in cost to achieve similar ends.
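Back-of-envelope on that claim (the $200 figure is an assumption for "a couple hundred dollars"; the $10M figure is from the memo as reported):

```python
import math

# Rough cost figures (assumed, not exact)
google_customization_cost = 10_000_000  # dollars, per the leaked memo
lora_customization_cost = 200           # dollars, "a couple hundred"

# How many orders of magnitude cheaper LoRA-based customization is
orders_of_magnitude = math.log10(google_customization_cost / lora_customization_cost)
print(f"~{orders_of_magnitude:.1f} orders of magnitude")  # ~4.7
```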

On the hardware front, NVidia's new Blackwell architecture (the pending OMFG-priced RTX 50-series for gamers) reportedly achieves 3x as much GPU compute per watt from its tensor cores as the previous architecture.

Are we going to see efficiency gains like this forever? Of course not: we're already well past that level of easy improvement. Is open source AI effectively required to either run on god-tier home rigs or wither on the vine, in a way that does not apply to enterprise frontier models? Yep. And that puts some firm, likely 20-amp-circuit-shaped limits on inference power draw.
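For a sense of what "20-amp-circuit-shaped limits" means in practice (a rough sketch; the per-GPU and overhead wattage figures are assumptions, not measurements):

```python
# Household power budget for local inference on a US 20A circuit
circuit_amps = 20
line_volts = 120
safety_factor = 0.8  # continuous-load derating per US electrical code

budget_watts = circuit_amps * line_volts * safety_factor  # 1920 W usable

gpu_load_watts = 450    # e.g. an RTX 4090 under load (assumption)
system_overhead = 300   # CPU, RAM, drives, fans (assumption)

max_gpus = int((budget_watts - system_overhead) // gpu_load_watts)
print(max_gpus)  # 3 GPUs before you're flirting with the breaker
```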

The best community fine-tunings of Llama-3.1 70B now solidly outperform launch GPT-4, which reportedly had 110 billion parameters for each expert in its mixture of experts - on multiple benchmarks. To use 70B at home you'll need multiple 4090s to run it without brutal quantization, but people over in /r/LocalLLaMA are doing so (presumably a $10,000+ spec Mac Pro or MacBook Pro with >=96GB of RAM, 70% of which can operate as VRAM, is also capable, albeit at a much slower output rate).
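Rough memory math behind the "multiple 4090s" claim (a sketch that counts weights only; it ignores KV cache and activation overhead, so real requirements run somewhat higher):

```python
def model_vram_gb(params_billion, bits_per_weight):
    """Approximate VRAM (GiB) needed just to hold the model weights."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight / 1024**3

# 70B model at common precisions, vs. 24 GB cards like the 4090
for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    gb = model_vram_gb(70, bits)
    cards = -(-gb // 24)  # ceiling division: how many 24 GB GPUs
    print(f"70B at {label}: ~{gb:.0f} GB (~{cards:.0f}x 24GB cards)")
```

So unquantized fp16 weights alone need around 130 GB (six 4090s), while 4-bit quantization squeezes under ~33 GB, which is why the quantize-or-buy-a-rack tradeoff dominates home inference.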

Flipside, it is completely true that the actual pre-training costs of frontier models using established methods continue to leap enormously despite diminishing returns on improved capability… the better bet is changes in approach like OpenAI's o1, hopefully as implemented by virtually anyone else (except you, Google, or Elon's Grok). Multiple $100 billion datacenters are being seriously proposed, for 2028, by companies potentially able to swing that level of investment. Video modality in particular requires truly vast quantities of training data that, to the best of my knowledge, only Meta (Facebook/Insta), Google (YouTube), and Microsoft/OpenAI (Shutterstock's surprisingly huge video library) currently possess. Meaning us little people basically have to hope Zuckerberg continues to enjoy handing out billions in training compute just to fuck with Altman and Elon as part of his (worryingly successful) image rebranding, because I don't see how else we achieve parity there.

Finally, I am hoping we will eventually see fewer foundation models overall. I honestly don't know whether a federated training architecture akin to Folding@Home is possible, but if so, it might be one way the enthusiast community could achieve pre-training-scale parity with the corporate giants.

I greatly appreciate everyone above who pointed out that there are a lot of uses above and beyond LLMs via related software methods. Ultimately I want to see neural networks as a general technology, and open source machine learning, succeed. I don't especially want to see LLMs succeed, because of how Capital will use them and how the public continues to perceive them. I think we've significantly overshot sustainable investment levels at this point.

The only major companies I think are likely to succeed long-term are NVidia (duh) and Apple. Apple is in the unique position of having its models situated directly in the context of all your personal data: it can leverage who you are and how you prefer to work, and in future releases build heuristics on how to best meet your needs. Nobody else really has access at quite that level.
posted by Ryvar at 12:45 AM on October 12


Multiple $100 billion datacenters are being seriously proposed by companies potentially able to swing that level of investment for 2028.

I don't see how this (or any of the above comment) supports the notion that it will "only get cheaper"?
posted by Dysk at 1:41 AM on October 12


Ryvar: Finally, I am hoping we will eventually begin seeing just fewer foundation models overall, and I honestly don’t know whether a federated training architecture system akin to Folding@Home is possible, but if so it might be one way the enthusiast community could achieve pre-training scale parity with the corporate giants.

I think we need something distributed and community-based, like your Folding example, coupled with a soft, non-murderous Butlerian revolution, with a central ethos of “Thou shalt not make a machine that speaks like a human” or something.
posted by signal at 3:18 AM on October 12

