You might have seen this coming
September 15, 2024 2:43 AM   Subscribe

It seems unlikely that a system relying on hallucinated base rates and numerical simulations goes all the way to outperforming (half-decent) human forecasters in any meaningful way. from Contra papers claiming superhuman AI forecasting [LessWrong]
posted by chavenet (8 comments total) 5 users marked this as a favorite
 
I don’t know what a contra paper is, so I’m gonna have to read this just to find out what happens when you hit ↑↑↓↓←→←→BA Start

(Seriously tho, as a recovering tech startup worker, I will read this, thanks for posting.)
posted by FallibleHuman at 6:30 AM on September 15 [3 favorites]


not sure how I feel about LessWrong getting cold feet on this, as one of the internet's biggest cheerleaders for AI solving all human politics
posted by Merus at 6:50 AM on September 15 [8 favorites]


Merus, I think you might be hallucinating LessWrong's position on AI. They are generally the people warning about catastrophic AI risk, and urging caution in its development.

But also, while this isn't actually the LW position, it would be perfectly consistent to believe (a) AI will eventually dramatically improve our lives but (b) LLMs aren't ready for prime time and may never be; some other technology that we'd also call "AI" would be the thing to do it.
posted by novalis_dt at 7:30 AM on September 15 [2 favorites]


Silly me for imagining "forecasting" had to do with atmospheric conditions in particular.
posted by the antecedent of that pronoun at 7:35 AM on September 15 [3 favorites]


wait, language models aren't good at things that have nothing to do with modeling language?

folks are so close to catching on to this grift.
posted by AlbertCalavicci at 8:45 AM on September 15 [4 favorites]


NB: This is specifically about using LLMs. It should be contrasted with the very rigorous work being done using other kinds of ML models to develop, e.g., weather forecasting models that are more accurate and vastly faster than traditional simulations.
posted by jedicus at 9:38 AM on September 15 [5 favorites]


They are generally the people warning about catastrophic AI risk, and urging caution in its development.

Their existential risk positions involve belief in the extreme efficacy of AI, so it's not that simple. But in this case, I think the disclaimer paragraph gives the whole story, and this has nothing to do with AI risk. It may, though, fit into the "you need me to do this properly" style of priesthood-seeking LW behavior.
posted by fleacircus at 12:04 PM on September 15 [4 favorites]


The Subprime AI Crisis:
The reason I'm writing this today is that it feels like the tides are rapidly turning, and multiple pale horses of the AI apocalypse have emerged: "a big, stupid magic trick" in the form of OpenAI's (rushed) launch of its o1 model (codenamed "Strawberry"), rumored price increases for future OpenAI models (and elsewhere), layoffs at Scale AI, and leaders fleeing OpenAI. These are all signs that things are beginning to collapse...

I am deeply concerned that this entire industry is built on sand. Large Language Models at the scale of ChatGPT, Claude, Gemini and Llama are unsustainable, and do not appear to have a path to profitability due to the compute-intensive nature of generative AI. Training them necessitates spending hundreds of millions — if not billions — of dollars, and requires such a large amount of training data that these companies have effectively stolen from millions of artists and writers and hoped they'd get away with it.

And even if you put these problems aside, generative AI and its associated architectures do not appear to do anything revolutionary, and absolutely nothing about the generative AI hype cycle has truly lived up to the term "artificial intelligence." At best, generative AI seems capable of generating some things correctly sometimes, summarizing documents, or doing research at an indeterminate level of "faster." Microsoft's Copilot for Microsoft 365 claims to have "thousands of skills" that give you "infinite possibilities for enterprise," yet the examples it gives involve generating or summarizing emails, "starting a presentation using a prompt" and querying Excel spreadsheets — useful, perhaps, but hardly revolutionary.

We’re not “in the early days.” Since November 2022, big tech has spent over $150 billion in combined capex and investments into their own infrastructure and budding AI startups, as well as in their own models. OpenAI has raised $13 billion, and can effectively hire whoever they want, as can Anthropic.

An industry-wide Marshall Plan to get generative AI off the ground has resulted in four or five near-identical Large Language Models, the world's least-profitable startup, and thousands of overpriced and underwhelming integrations.
posted by TheophileEscargot at 1:36 AM on September 17 [2 favorites]



