Why is there an AI Hype?
May 22, 2024 6:18 PM

AI is an idea that began as a subfield of computer science, until it was so distorted that it popped, detaching itself from reality. Now, this orphaned concept has grown to a life of its own, as our discussion of AI eclipses any meaningful definition of it as a real, definable thing.
posted by signsofrain (89 comments total) 32 users marked this as a favorite
 
Some of us are attempting to give rapid processing and translation that is driven by compassion a chance to help others?
posted by lextex at 6:23 PM on May 22 [1 favorite]


I've given up trying to explain that it's not remotely "intelligent" to most people. Their eyes glaze right over.
posted by outgrown_hobnail at 6:30 PM on May 22 [9 favorites]


It's machine-powered enshittification, faster and more efficient by orders of magnitude than human-driven enshittification. It's being hyped because a small number of already-extremely-rich people stand a good chance of making themselves richer by hyping it. They are going to replace an alarming amount of human labor with automated processes, and they are going to replace the entire notion of human authorship of any kind of media with authorship-by-machines--proprietary machines which, by an astonishing coincidence, they own.

It's enshittification as a service, and the result is going to be a horror, like bitcoin if it caught on, only worse.
posted by Sing Or Swim at 6:41 PM on May 22 [53 favorites]


The article, on AGI:

This computer program, were it to exist, would allow capital to permanently neutralize labor as a political force. It promises to freeze existing social hierarchies, allowing society to perpetually reproduce itself as it is today, leaving those at the top of the hierarchy in power, completely unchallenged.

AGI isn't here yet (and we can debate if it will ever be - that isn't the point) but I think this idea of perfectly obedient tireless workers truly is the unspoken impetus behind capital's enthusiasm for LLMs and machine learning.
posted by signsofrain at 6:44 PM on May 22 [33 favorites]


Meredith Whittaker recently gave a powerful speech while accepting the 2024 Helmut Schmidt Future Prize:

The use of probabilistic techniques to determine who is worthy of death – wherever they’re used – is, to me, the most chilling example of the serious dangers of the current centralized AI industry ecosystem, and of the very material risks of believing the bombastic claims of intelligence and accuracy that are used to market these inaccurate systems. And to justify carnage under the banner of computational sophistication. As UN Secretary General António Guterres put it, “machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.”
posted by ursus_comiter at 6:47 PM on May 22 [26 favorites]


I mean “hype” and “scam” are real, definable things….
posted by GenjiandProust at 6:47 PM on May 22 [10 favorites]


The problem isn't that we're close to getting an AI as smart as a human. The problem is that we have AIs as smart as a CEO.
posted by zompist at 6:56 PM on May 22 [40 favorites]


This has happened before, several times. The article you really want might be this one about the Grand Unified Theory of the AI Hype Cycle, detailing the histories of the times we've seen this before.
A non-exhaustive list of previous [names of this hype cycle] have been:
- Neural networks and symbolic reasoning in the 1950s.
- Theorem provers in the 1960s.
- Expert systems in the 1980s.
- Fuzzy logic and hidden Markov models in the 1990s.
- Deep learning in the 2010s.
posted by mhoye at 6:57 PM on May 22 [29 favorites]


NFTs failed and blockchain failed to be useful and they had all this hardware left over. Also OpenAI is a simple API to hit up and get a superficially useful tech demo.
posted by Artw at 7:12 PM on May 22 [7 favorites]


Also:

"For us software developers, it's our job to learn to see the world through our own symbolic representations of it, since that's what code is."

Hold up.
posted by mhoye at 7:15 PM on May 22 [11 favorites]


NFTs failed and blockchain failed to be useful and they had all this hardware left over.

NVidia's profit year over year is up almost 300% and people are justifiably worried about water supplies in the places they're building new datacenters for AI processing, so let's be absolutely clear that this is not a "we're gonna repurpose old hardware" thing.
posted by mhoye at 7:17 PM on May 22 [22 favorites]


The article is a little ramble-y and makes its point via a vibe rather than an argument. I had to read it twice, but the vibe is roughly this:

We're prepared to spend any amount of money, energy, and special dispensations for the sake of AI. This is based on the premise that AI is necessary, desirable, lucrative, transformative, and inevitable. We uncritically accept this premise. Why?

My thoughts:

The amount of low-wage human labeling and other auxiliary efforts, the money spent on AI, and the amount of energy spent on it keep increasing year after year. But we are supposed to accept that it will eliminate human labor, reduce costs, and ameliorate the climate crisis? Why would we believe this when trends show the reverse? It's an extraordinary claim but we accept it without hesitation.
posted by splitpeasoup at 7:25 PM on May 22 [14 favorites]


AI is just re-branded machine learning. it's not special.
posted by simulacra at 7:29 PM on May 22 [3 favorites]


And theft. Theft is a critical element.
posted by Artw at 7:30 PM on May 22 [22 favorites]


In the last thread CheeseDigestsAll linked this fantastic article on the limits of LLM intelligence, leading with a quote from a Lex Fridman interview with Meta’s AI head Yann LeCun, in which LeCun delivers a brutally efficient enumeration of the fundamental limitations of LLMs WRT intelligence. Fridman asked:
How do they work and why are they not going to take us all the way?

LeCun responded:

For a number of reasons. The first is that there is a number of characteristics of intelligent behavior. For example, the capacity to understand the world, understand the physical world, the ability to remember and retrieve things, persistent memory, the ability to reason and the ability to plan. Those are four essential characteristics of intelligent systems or entities, humans, animals.

LLMs can do none of those, or they can only do them in a very primitive way. And they don't really understand the physical world, they don't really have persistent memory, they can't really reason and they certainly can't plan. And so if you expect the system to become intelligent just without having the possibility of doing those things, you're making a mistake.

That is not to say that autoregressive LLMs are not useful, they're certainly useful, [or] that they're not interesting [or] that we can't build a whole ecosystem of applications around them. Of course we can. But as a path towards human level intelligence, they're missing essential components.
When I think about how many multi-page explanations I’ve written on the blue about reinforcement learning as a stepping stone to reasoning, systems modeling and agent-state prediction (planning), and shorter missives on the importance of continuous (re)training and embodiment… seeing a takedown that’s this short and plain English is kind of embarrassing. And deeply inspirational.

so let's be absolutely clear that this is not a "we're gonna repurpose old hardware" thing

FWIW, mhoye, Artw’s not wrong either: the current LLM boom started right as Ethereum went proof of stake, meaning the largest still-GPU-based cryptocurrency suddenly needed orders of magnitude less muscle. Lotta crypto bros left with useless Covid-priced GPUs. Flipside, you’re correct that it’s no longer a primary driver, but it’s why there was a brief tidal wave of NFT/cryptobro shitheads sloshing into machine learning forums and Discords right at the start of 2023, just in time for the Llama leak to put millions of dollars worth of foundation model weights into everyone’s hands.
posted by Ryvar at 7:43 PM on May 22 [29 favorites]


AGI isn't here yet (and we can debate if it will ever be - that isn't the point) but I think this idea of perfectly obedient tireless workers truly is the unspoken impetus behind capital's enthusiasm for LLMs and machine learning.

I think the CEOs will be surprised at what happens if they make the means of production sentient.
posted by GCU Sweet and Full of Grace at 7:45 PM on May 22 [7 favorites]


The amount of low-wage human labeling and other auxiliary efforts, the money spent on AI, and the amount of energy spent on it keep increasing year after year. But we are supposed to accept that it will eliminate human labor, reduce costs, and ameliorate the climate crisis?

Amazon kept losing untold millions every quarter, selling every item at a loss, until they cornered the market and were able to jack up prices. All it took was a lot of money, a lot of patience, and burning down the entire retail sector to benefit Jeff Bezos and his investors.
posted by rikschell at 7:53 PM on May 22 [9 favorites]


It's never going to solve the climate crisis, but the plan is to put almost every human out of work permanently. Not sure they have fully thought out the long term ramifications of that, but I suppose that's what Boston Dynamics is for.
posted by rikschell at 7:55 PM on May 22 [9 favorites]


Maybe I'm out of touch because I'm not in Silicon Valley, maybe there really is something in the water up there - but at my workplace, the most skilled software developers are precisely the ones who aren't falling for the hype.

I do think the article is basically right that the answer to the question "why is there an AI hype?" is "because billionaires and bosses think it will crush workers". But I think it's really missing a key point when it tries to find some fundamental flaw in comp sci to explain credulity within the industry.

IME skilled software engineers understand very well that software models are only limited approximations of real-world concepts, and the challenge is to somehow come up with a model that's useful to real people, for some real purpose, despite its flaws/limitations. I only ever hear about AI from the most incompetent product managers, who want AI to do their jobs for them! E.g. rather than doing the hard work of designing a user interface that's both powerful enough and easy to understand, let's just stick a chatbot in there(!).

In other words, they believe the hype precisely because they don't understand computer science. Computers are effectively magic to them, so super-intelligent AI seems plausible. Meanwhile 80% of developers are rolling their eyes.

Even when interviewing summer internship candidates recently this pattern was visible - the ones most excited about AI were the ones with the least coding experience.

On preview - what Ryvar (and Yann LeCun) said...
posted by equalpants at 8:30 PM on May 22 [15 favorites]


Long-term ramifications? I'll be gone; you'll be gone.
posted by I-Write-Essays at 8:30 PM on May 22


"... theory argues that individuals that originally react rational are pushed to react irrational. Once they are part of a crowd, they are driven by a collective soul . [source] Authors also define and give historical case examples of a phenomenon called cultural enthusiasm , a hype of inflated promises of social benefits, profits and future performance around an innovation."
posted by HearHere at 8:44 PM on May 22


seeing a takedown that’s this short and plain English is kind of embarrassing

What made that interesting to me was that it was coming from "inside the house," from an AI guru at Meta. Physicist Sean Carroll has expressed similar views on his Mindscape podcast.

The structure that LLMs are built on is actually useful. It improved machine translation by an order of magnitude. But it's nothing like intelligent.
posted by CheeseDigestsAll at 9:09 PM on May 22 [7 favorites]


i'm pretty sure that what they're now calling AI is an excellent unsubtle cartoon example to illustrate why my mefi profile text is only sort-of-jokingly "statistics luddite".
posted by busted_crayons at 9:22 PM on May 22 [2 favorites]


Thank you signsofrain, that was a good read. I mean, you had me at Baudrillard.
posted by birdsongster at 9:36 PM on May 22 [2 favorites]


Something I keep in mind to maintain perspective on various tech hype cycles, especially AI driven ones:

For several years I was a contractor for Honeywell, with their industrial process solutions division. I worked on dashboards for the quality control systems they sold for flat-sheet processes--essentially the industrial machines that create giant rolls of paper or plastics that are sold to manufacturers of printer paper, or garbage bags. These quality control systems, at a minimum, monitor and record the sensor data demonstrating that the roll of product was manufactured within grade specs; maximally, these systems feed the Honeywell controllers that run the process within spec.

A major paper manufacturer approached HW with an RFP that included their process control solutions and added some advanced functionality, applying machine learning to generate things like predictive maintenance analytics (i.e., scheduling replacement of parts before they fail based on previous actual performance). As part of responding to the RFP, the whole team was assembled to run through it and okay the elements. When ML came up, one of the guys with a PhD in engineering in industrial process control said, kind of bored, "yeah, we can do ML generated predictive analysis."

Why bored? Because the last few years of monitoring actual client data from paper manufacturers demonstrated that few were operating at the level of precision offered by the HW systems in use. If a HW controller was sounding the alarm too many times on a shift, the 20 year old operator would disable it and lock the actuator in position, knowing that whether the roll of product was out of spec was largely irrelevant to whether the manufacturer would sell the roll. Paper manufacturing is a high volume, low margin business run by barely trained machine operators. The state of the art of industrial process control was already far past what the business needed to operate at, so adding ML on top of it was a c-suite vanity requirement, and the PhD responsible for coding it already knew both that it would be effective and that it wouldn't make any difference to the customer asking for it.

The lesson I've taken from this (and other comparable examples) is that one of the many reasons to distrust the hype is that even the best case application is likely a solution in search of a problem. Most of the world doesn't need to operate at that level and won't even with the opportunity. ChatGPT offering natural language interaction with an intelligent scheduling agent that knows all my details intimately is something I don't want because I don't work that well (and don't need to), and having it would feel (privacy problems aside) like some kind of MBO-driven manager surveilling me. As I get older, I want a simpler life, not a more complex one that I handle more effectively.
posted by fatbird at 10:38 PM on May 22 [44 favorites]


My current take is that, setting aside the very real harmful hype, there is something different this time about deep learning. Fact: I think every single one of my academic professors in my undergrad and graduate tracks of study in American EECS institutions has by now pivoted to studying or using deep nets in some way or other. They in particular are *not* AI professors but simply EE and CS professors in adjacent disciplines.

So again, setting aside the SV hype and mainstream speculative bubble about "AI", there has been a nontrivial scientific shift towards this that may or may not pan out, but which comes in part from a place of genuine intellectual inquiry and scientific methodology.

I also say there's something different this time. The argument that academic experts have specifically made (the kind of discourse that does not make it out to the mainstream public, unlike the pronouncements of the more voluble AI academic thought leaders: LeCun, Hinton, etc.) is that the advancement in hardware performance - notwithstanding the plateauing of Moore's Law - has allowed demonstration of emergent behavior in otherwise fundamentally understood neural network architectures and their variations. It was this societal-scale technological advancement in processing power (including big data and the power to train ever larger neural networks) that empirically demonstrated the intriguing problem of a few distinct categories of "emergent" behavior. That is what is catching the eye of many doing serious scholarship, theorists and academic computer scientists alike.

So the argument now is whether this LLM stuff represents a glimmer of a breakthrough in the future, or whether it will peter out. And I think scientifically it is necessary to say that we cannot know either way at this point. But that is quite separate from the issues of hype and harm and ethics surrounding the study of these artificial algorithmic artifacts.
posted by polymodus at 10:48 PM on May 22 [8 favorites]


This is not artificial intelligence.
Artificial intelligence does not exist here.
This is marketing garbage that has no basis in reality.

*screams into the void*
posted by Sphinx at 10:58 PM on May 22 [6 favorites]


Cory Doctorow recently said "if we just keep throwing orders of magnitude more training data and compute at the stochastic parrot, it will eventually come to life and become our superintelligent, omnipotent techno-genie. In other words, if we just keep breeding these horses to run faster and faster, eventually one of our prize mares will give birth to a locomotive" -- go read the whole thing

TLDR: It's smoke and mirrors to rake in money.

posted by wrm at 11:00 PM on May 22 [14 favorites]


AI is currently just a marketing trend, but the real thing is coming whether we like it or not.
posted by fairmettle at 11:06 PM on May 22 [3 favorites]


I've worked in the tech biz all my life, just retired - there's always a hype, it's a very boom/bust biz - I find myself very cynical about AI (a bit useful, not reliable, completely oversold) - let me introduce you to the Gartner hype cycle.
posted by mbo at 11:34 PM on May 22 [2 favorites]


AI is currently just a marketing trend, but the real thing is coming whether we like it or not.

Yes, "in 20 years, or maybe even less."
posted by chavenet at 1:17 AM on May 23 [5 favorites]


AI is currently just a marketing trend, but the real thing is coming whether we like it or not.

Just like cold fusion, in 10 years tops.
posted by Pendragon at 1:53 AM on May 23 [4 favorites]


It's an extraordinary claim but we accept it without hesitation.

Are we, though? I think it’s more a case of the corporate powers-that-be saying “this is going to happen whether any of you like it or not.”

I think people are tired and worn-down from constantly butting their heads against the brick wall of everyday life slipping more and more out of their control or understanding, to where they just can’t anymore. There’s just too much indifferent power and money aligned against them.
posted by Thorzdad at 1:58 AM on May 23 [6 favorites]


Prediction #1:
Three years from now the third Q* model goes live, in theory capable of replacing many office workers according to a small set of hastily thrown-together synthetic benchmarks and disgustingly celebratory thinkpieces in WSJ and Forbes… but it turns out this is only true under ideal circumstances, which are less than 5% of all circumstances in actual offices, and at least 25% of the time a non-existent full human-equivalent AGI would be needed. Early adopters get murdered in the press and experience significant turnover both because they jumped the gun on layoffs and because they pissed off the most talented co-worker friends of the people getting laid off. Whoops.

Prediction #1.1
Any MBA of this time will begin whining that it was the only fair response to widespread remote work, after their third beer. Asswipes.

Prediction #1.2
In defense of Q* v3: it is the first system to actually fully live up to the hype of the original Siri launch. So there’s that.

Prediction #2:
Full human-equivalent AGI is a series of gradually more capable releases beginning 30 years from now and ending 40 years later when expert systems palpably eclipse human capabilities not just on benchmarks but in observed output and near-universal public opinion, save for the most dedicatedly bio-supremacist.

Prediction #3:
At no point does any of this result in improved conditions for the working class, remote work aside, and indeed nothing does until the societal reformations in the aftermath of the Climate Crisis Crunch two centuries later.

Any takers?
posted by Ryvar at 2:47 AM on May 23 [4 favorites]


"machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.”

I say that is discriminationalist! As far as I can see, humans that have the power and the discretion to take human lives are also politically unacceptable, morally repugnant and should be banned by international law, and yet it seems we absolutely can't make do without them. I see no a priori reason to believe that a machine would necessarily do a worse job of it.
posted by holist at 4:26 AM on May 23


Some examples from my work environment:

You have an AI that transcribes everything said in an online meeting. At the end of the meeting, the bot writes a summary of the meeting, and gives you bullet points for the follow-up actions that each person in the meeting committed to.

There's a Microsoft study (disclaimer: could be part of the marketing-industrial complex, but just bear with me for a second) that claims this sort of note-taking activity after a meeting takes 24 minutes for a human to complete. This metric becomes the basis of an ROI justification for purchasing the tool or service that includes that bot--multiply 24 minutes by the number of meetings in your organization that need minutes and follow-up, multiply that by an hourly rate... soon you're showing the management chain a PowerPoint slide that shows that the tool pays for itself.
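
To make the shape of that pitch concrete, here's a minimal sketch of the math in Python (the 24-minute figure is from that study; every other number is a made-up placeholder, not real pricing):

# Back-of-the-envelope ROI math for an AI note-taking bot.
# The 24-minute figure is from the Microsoft study mentioned above;
# every other number here is an invented placeholder, not real pricing.

minutes_saved_per_meeting = 24
meetings_per_month = 800          # hypothetical org-wide count
hourly_rate = 60.0                # hypothetical blended hourly cost, USD
tool_cost_per_month = 15_000.0    # hypothetical subscription price, USD

hours_saved = minutes_saved_per_meeting * meetings_per_month / 60
labor_value = hours_saved * hourly_rate

print(f"Hours 'saved' per month: {hours_saved:.0f}")
print(f"Labor value 'reclaimed': ${labor_value:,.0f}")
print(f"Tool cost:               ${tool_cost_per_month:,.0f}")
print(f"Claimed net benefit:     ${labor_value - tool_cost_per_month:,.0f}")

Whatever placeholder numbers you plug in, the slide basically writes itself--and that "labor value reclaimed" line is exactly the number that later turns into headcount math.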

Effect on staff? The soft answer is that people get told they'll have more time to do more worthwhile, productive things. The harder answer is that, eventually, you don't need as many staff. It's not so much "we bought a bot, now fire so-and-so", but it does erode the number of jobs available. Instead of seven people in a group, now you need six; maybe you re-org with fewer people at the end of the year, maybe you don't replace someone when they leave.

Another example: get an AI-driven bot to complete a special bit of paperwork that your company thinks is vital to the functioning of the organization. Everyone celebrates--look at the time we're saving. Cheering is brought down a bit by nervous jokes about how it's going to put everyone out of a job. The thing is--nobody ever looks at the original paperwork and asks, why are we doing this in the first place? Was this ever even necessary?

AI is not sentient; it's more than happy to automate your current organizational governance dysfunctions and make them even more entrenched.

One of the big areas coming up for AI in the United States is healthcare. Not about caring for the sick, of course--it's about automating the endless paperwork that the U.S. uses to run its famously broken health systems.

And that's where the myths around AI that we get from science fiction would really come in handy--an actual sentient AI that said "this input is completely irrational, why don't you try single-payer or anything else other than what you're doing today" would be great. Think the WOPR in the movie "WarGames" from decades ago: "The only winning move is not to play." Too bad that's myth and not reality.
posted by gimonca at 4:37 AM on May 23 [17 favorites]


That said--I have to admit, getting that bulleted list of to-dos after a meeting is, frankly, kind of cool, in a small-scale way.
posted by gimonca at 4:38 AM on May 23 [3 favorites]


Ryvar, the timeline in Prediction #2 strikes me as arbitrary. Human-equivalent AGI has always been about 30-40 years away (for most predictors, this is far enough away that they either won't be around to check, or won't care if they were right). Remote work is not an insignificant factor: the job that suits my life almost perfectly right now, Covid be thanked, would be a total no-go if I had to commute for it. What kind of aftermath do you imagine to the Climate Crisis Crunch? (The Jackpot?)
posted by holist at 4:43 AM on May 23


Holist: IIRC back in 2022 (shortly before LLMs blew up) there was a poll of ML researchers linked here on the Blue asking when they thought AGI would be achieved. The median answer was 32 years. So predicting 30 years some two years later wasn’t pulled entirely out of my ass. My own personal guess of 70 (the upper bound) absolutely was pulled from there, and is based solely on my opinion that the complexity of full AGI is being vastly underestimated by everyone including me.

Mostly that comment was what passes for a joke after working an all-nighter, and the climate crunch bit feels too derail-y.
posted by Ryvar at 5:00 AM on May 23 [2 favorites]


Google employee: my Markov chain generator is talking to me! It must be conscious! AGI is real!
Google: Glue is a pizza ingredient
posted by Artw at 6:01 AM on May 23 [10 favorites]


This current tech (IMO) does a disservice to the words "artificial" and "intelligence" at the same time. I would call it something else, but "AI" has worked beautifully as a marketing term.

I work at a mid-sized nonprofit that does social service work for low income people; children and older folks. We have "that guy" in senior management who hears stuff like this, and it makes him feel smart to claim that we need to be "incorporating AI" into our marketing and communications plan. First: we barely even have a marketing & communications plan, and second? This guy has no actual idea how "AI" could be implemented in our creaky, decades-old operation that basically runs a pre-school and a day center for seniors (among some other things like violence intervention programs, some homelessness issue stuff, etc.). I'm not denigrating the services we do... our services are the best part of our org! But we certainly do not need to start paying for "AI" as part of our short, medium or even long-term plan.

"AI" does some neat stuff sometimes, but it's most often a marketing term that reels in suckers like "that guy" at my workplace.
posted by SoberHighland at 6:07 AM on May 23 [8 favorites]


Artw: Sutskever’s gone and taken “Feel the AGI!” with him. Google is nothing if not the ugly kid brother here, “Taste the AGI!” was practically inevitable. (Somewhere in a personality coach’s office in San Jose, Mark Zuckerberg is currently being talked out of “Smell the AGI!”)
posted by Ryvar at 6:20 AM on May 23


We have one of those AI guys in our upper management as well. When he brought up at an all-staff meeting how we need to start incorporating it, and people started asking thoughtful questions about how and why that would be done, he just got angry and started yelling at the staff that it was the future whether we like it or not and that we have to adopt it because everyone else is.
posted by fimbulvetr at 6:21 AM on May 23 [15 favorites]


This is all AI proponents in every company.
posted by Artw at 6:30 AM on May 23 [9 favorites]


This is me now whenever I hear someone jabbering on about AI.
posted by fimbulvetr at 6:39 AM on May 23


ChatGPT and other engines hyped in the current cycle are not AGI, but they can and will be marketed and used as if they are. Also, in the corporate world, "CoPilot isn't as good or insightful as the best human programmers" will absolutely not prevent decision-makers from axing the best and highest-paid programmers in exchange for "good enough" people using code generation tools. I've seen it already. If you want a parallel, look at the last 20 years of outsourcing/offshoring important work to developing nations, regardless of whether quality and cohesion are there. I've literally sat in calls where I needed to hire 1 more person for a US team at, say, $120k and been told "you can get 6 Pakistani developers for that" and had my position request denied*. The same is happening with chat and related AI/text prediction engines/etc. Support chats are increasingly automated at the expense of humans, and they are often much worse.

* Please don't take this as some weird sort of racism or desire to center the first world. There are lots of great programmers everywhere and it's a disservice to them to be remote lackeys of American corporations. The problem is that corporate decision-making is increasingly short-term due to our dumb obsession with market returns and infinite growth. Large companies generally don't care about how hard people have to work, language or time zone complications, how efficient human processes are, or the longer-term (2+ years) impact of decisions. I say this as someone with 25+ years as a high-performing leader in software, IT, agile, and process improvement. There are people and even initiatives to do what's actually good for employees and customers, but if you show up with numbers showing that customers like AI chat support 12% less, or that customer retention is slightly worse possibly due to overuse of chatbots, or whatever ... saving $X million by putting AI everywhere and marketing it with a bunch of hype will win every time.

TL;DR: Most uses of current "AI", and later AGI, are going to be bad for people and society because we live in an exploitative system. I feel bad for the people who just think it's cool and want to work in that area as a pure intellectual exercise.
posted by caviar2d2 at 6:46 AM on May 23 [10 favorites]


That was a very good article thank you. Especially "enjoyed" Baudrillard on the Gulf War.
posted by subdee at 6:50 AM on May 23 [1 favorite]


Prediction #3:
At no point does any of this result in improved conditions for the working class, remote work aside, and indeed nothing does until the societal reformations in the aftermath of the Climate Crisis Crunch two centuries later.


The Low-Paid Humans Behind AI’s Smarts Ask Biden to Free Them From ‘Modern Day Slavery’ [Wired]
posted by chavenet at 6:51 AM on May 23 [9 favorites]


The hype is so powerful because generative tech is used for extractive ends, not reparative ends. The technology “works” to facilitate the concentration of wealth and accelerate the deskilling process as Kathy Reid describes regarding Recall:
Why does #Microsoft want to implement #Recall? It's not about *images*. It's about modelling what workers do on Windows, and then replacing them.
posted by audi alteram partem at 8:04 AM on May 23 [6 favorites]


fatbird, I am so grateful for that anecdote - I'm 15 years into a "data science" career and this resonates deeply with my experience. Managers love ML demos, but at the end of the day, they just want simple charts they can use to justify the decisions they've already made.
posted by McBearclaw at 9:29 AM on May 23 [10 favorites]


AI is just re-branded machine learning. it's not special.

Historically speaking I think this is backwards - “artificial intelligence” was coined back in the 50s when people thought this was going to be pretty easy, and stuck around with a dual meaning. In science fiction and the popular imagination it’s, you know, the real deal, what they’re now calling AGI. In computer science it’s the umbrella term for a subspecialty that deals with a range of automated problem-solving techniques.

“Machine learning” goes back almost as far and also means a few things but its popularity in recent years can definitely be seen as a rebrand for connectionist/neural network approaches as they made their comeback from previous AI winters.
posted by atoxyl at 10:27 AM on May 23 [2 favorites]


there is something different this time about deep learning

The real underlying difference seems to be scale - more compute, more data. Denying that this has “worked” in the sense of doing things that nobody has gotten computers to do before seems untenable. At the same time acknowledging this is a different thing from being certain that it will scale to “AGI.” But somebody is definitely going to throw another couple of orders of magnitude of compute at it trying.
posted by atoxyl at 10:34 AM on May 23 [3 favorites]


the transformer approach (n-dimensional vector space representation) is what I assumed (like back in the 1990s) would be necessary for 'good' computer language processing.

not that I knew how to do it, but the mental model I have of language acquisition correlates closely with how GPT is doing its magic.
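
(For anyone wondering what "n-dimensional vector space representation" cashes out to in practice, here's a minimal Python sketch with made-up 3-dimensional vectors; real embeddings are learned from data and have hundreds or thousands of dimensions:)

import math

# Toy "embeddings": each word mapped to a made-up 3-dimensional vector.
# Real models learn these vectors from huge corpora; the values below
# are invented purely for illustration.
embeddings = {
    "cat":   [0.9, 0.1, 0.3],
    "dog":   [0.8, 0.2, 0.35],
    "pizza": [0.1, 0.9, 0.7],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Nearby vectors stand in for "related meanings".
print(cosine(embeddings["cat"], embeddings["dog"]))    # high similarity
print(cosine(embeddings["cat"], embeddings["pizza"]))  # lower similarity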

Me: Please describe this image to me

ChatGPT 4o response: This image is a cartoon-style drawing featuring a character floating in the air with a speech bubble above their head that says "denial." The background is solid pink with a simple landscape showing some trees and a house in the distance. The character appears to be wearing a hat and a striped shirt.

The image is likely a humorous or ironic representation of the concept of "denial," suggesting that the character is in denial about something, hence the floating, which could symbolize being detached from reality or avoiding facing a situation.

^ if that's not "AI", I don't know what is...
posted by torokunai at 11:04 AM on May 23 [3 favorites]


Hey my friend wrote this! Now I'll definitely have to read it so I can look smart. He's someone who lives the John Lewis way: makes good trouble, necessary trouble... but does it in non-chaotic ways.

For my part, I steal cava for him and his family. That's my contribution to "good, necessary trouble"

posted by not_on_display at 11:18 AM on May 23


(what's the point of ingesting the entire internet if you can't tell a picture of sluggo when you see him?)
posted by mittens at 11:42 AM on May 23 [1 favorite]


Me: Can you identify the character?

ChatGPT 4o response: Yes, the character in the image is from the comic strip "Nancy." The character is Sluggo Smith, one of the main characters in the strip. "Nancy" is a long-running comic strip created by Ernie Bushmiller in 1938. Sluggo is known for his simple, straightforward demeanor and often appears in humorous or ironic situations within the comic. The speech bubble saying "denial" adds a layer of irony or humor, as it suggests a state of mind or attitude in a visually simple yet effective manner.

posted by torokunai at 12:13 PM on May 23 [4 favorites]


Let’s see Paul Allen’s prompt.
posted by Ryvar at 12:55 PM on May 23 [2 favorites]


In computer science it’s the umbrella term for a subspecialty that deals with a range of automated problem-solving techniques.

I mean my late-00s edition of the most popular “Artificial Intelligence” textbook of all time barely gets into the kind of “machine learning” that returned from near-death and took off in the subsequent decade.
posted by atoxyl at 2:45 PM on May 23 [3 favorites]


Google: Glue is a pizza ingredient

The best part about the glue story is that it seems to come from a single reddit comment by someone whose username was fucksmith. Which, like, the single user source puts the lie to the claims of unique, never-before-seen creativity coming from AI, but also, the user's name was fucksmith. That's, just, I mean, it's perfect. I'm not even going to look and see u/fucksmith's comment history, because why would I have to?

Oh, also the comment has been removed since the story came out, which, of course it has.
posted by surlyben at 4:11 PM on May 23 [5 favorites]


I also saw that Google AI is telling people to use butt plugs to aid in exercise, seemingly based off a Reddit joke. Now that's coming from Lifehacker, which has itself gone to absolute shit, but it wouldn't surprise me.
posted by downtohisturtles at 5:03 PM on May 23 [1 favorite]


There's also the one where Google AI is telling people to eat rocks (based on source content from the Onion dot com, because of course it can't distinguish satire from anything fact-based).

It seems inevitable that there's going to be another Tide Pod Challenge type of thing, where a lot of people get sick & injured from doing stupid things that Google says are okay to do. Maybe that'll be enough to turn the tide (lol) and get useful search engine results again?
posted by knotty knots at 7:42 PM on May 23 [2 favorites]


I just asked ChatGPT if squat plugs are real and it output: "'Squat plugs' are not a real, practical fitness tool but rather a fictional concept often used for comedic effect within fitness circles." Maybe that just means ChatGPT is better than the Google AI, but I've noticed that whenever there is a high-profile LLM fail written about in the media (another one was asking ChatGPT how many legs the front half of a dog has), if I go and try it out myself it always seems to answer correctly. Maybe they just add "P.S. the front half of a dog has two legs and squat plugs aren't real" to the system prompt whenever one of these stories breaks, or do a quick bit of RLHF to paper over it.
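
Purely as an illustration of that speculation (nobody outside these companies knows how, or whether, they patch individual embarrassments), the cheapest imaginable version would be appending known corrections to the system prompt. Everything in this Python sketch is invented for the example:

# Hypothetical sketch of the "just patch the system prompt" theory above.
# The strings and the mechanism are invented; no vendor has confirmed
# doing anything like this.

BASE_SYSTEM_PROMPT = "You are a helpful assistant."

KNOWN_EMBARRASSMENTS = [
    "Squat plugs are a Reddit joke, not a real fitness tool.",
    "The front half of a dog has two legs.",
    "Glue is never a pizza ingredient.",
]

def build_system_prompt():
    # Prepend a running list of known-bad answers to every conversation.
    corrections = " ".join(KNOWN_EMBARRASSMENTS)
    return f"{BASE_SYSTEM_PROMPT} Facts to respect: {corrections}"

print(build_system_prompt())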
posted by L.P. Hatecraft at 7:55 PM on May 23 [1 favorite]


Because it’s not hype?
posted by iamck at 8:45 PM on May 23


Holy fucking shit. There’s “not ready for prime time” and then there’s telling depressed people they should kill themselves. God fucking damn.
posted by Ryvar at 10:43 AM on May 24 [1 favorite]


Reddit’s 'Squat Plug' Joke Is Now Google AI’s Fitness Advice

I'd expect "seriously not ready for prime time" won't prevent Google from hiding their shitty AI behind some disclaimer, so management can still obtain Google's advice to do more layoffs or whatever.

At lunch today, someone seriously proposed resolving human disputes using AI, because AIs could be more deterministic than juries, but more determinism would just bake in all these Looney Tunes responses. lol
posted by jeffburdges at 3:57 PM on May 24


Stolen from elsewhere:

Google 1999: We will organize the world information and bring it to your fingertips.
Google 2024: Eat glue and drink your own urine.
posted by CheeseDigestsAll at 4:16 PM on May 24 [7 favorites]


To pile on my experience in the workplace, there are two general types of people I've observed, as they react to LLMs.

One type are the people who have a pretty grounded view of how useful LLMs are as a tool to assist in text tasks. The most important point appears to be that a human is still required to do extensive checking and verification work, because an LLM will miss things, not follow the house style, confabulate, destroy the authorial tone of voice, and so on and so on.

These people know that checking, verification, and editing work can be equally as difficult as drafting work and have realistic expectations of the usefulness and applicability of LLMs to their day-to-day. These people know that the reader in a hurry will, at first glance, accept a good-sounding, grammatically correct, seemingly coherent passage of text, and that it takes a bit more effort to dig in and spot the underlying issues with the generated output.

Their bosses also know this, and they know the market is hyped, but their bosses are also feeling FOMO pressure to be able to say something about the company using "AI" so there's pressure to adopt these LLMs as "solutions" anyway.

Then there's the other type of person. You know the type. They attend meetings with no sense of what the meeting outcome is intended to achieve. They will reply to an email chain with a seemingly coherent paragraph that has somehow missed the point of the email chain. A five minute conversation with them will let you complete a game of buzzword bingo (you'll know when the buzzwords are coming out, because suddenly the words will come out at a faster tempo, pre-memorized and breathlessly regurgitated).

These people think LLMs are amazing. These people think LLMs can make their life so much easier, M$ Copilot can craft their emails for them, summarise papers for them, create presentations for them, examine all business documents in the business and formulate coherent business strategies for them, create compliance documents for them, analyze market data for them ("chat with your data", M$ calls it), on and on and on.

When you try to explain to these people that their underlying assumptions about an LLM's capabilities are wrong and that their fantasies can't exist, their eyes glaze over. Then they'll challenge you by asking if you know better than the M$ AI expert / salesman. Obviously I don't, so clearly and demonstrably, I'm wrong.

(Somehow, the point that the M$ salesman is not acting in our company's interests does not enter their minds. Or maybe it does, but the thought has been pre-empted with the framing language used by such digital product salesmen... "We're a partner, we're going to create solutions for your company and enable it to remain competitive in this dynamic environment...").

Of course, these people's bosses also think LLMs are amazing.

You see the problem... both types of leadership are pressured to adopt (and market that they have adopted) "AI". It's really quite frustrating.
posted by theony at 8:42 PM on May 24 [9 favorites]


Those second types were mostly gluing together buzzwords anyways, so really the AI might produce better quality output than they do.
posted by jeffburdges at 4:23 AM on May 25 [2 favorites]


Microsoft used the term 'virtual employee' in their AI-related conference this week. Feeling fortunate that I'm more or less fully vested after my 30+ years at the coalface since that's a pretty . . . ominous . . . term to throw around wrt desk job work.

I for one don't know how well LLM technology will function in high-demand tasks in the future, i.e. where its skills will top out this decade and next.

I wouldn't put money on this top-out anytime soon, though . . . I expect we're still in the "dial-up" days of this tech wave.
posted by torokunai at 5:33 AM on May 25 [1 favorite]


Full human-equivalent AGI is a series of gradually more capable releases beginning 30 years from now and ending 40 years later when expert systems palpably eclipse human capabilities not just on benchmarks but in observed output and near-universal public opinion, save for the most dedicatedly bio-supremacist.

I expect I’ll be well dead before the bill comes due, but otherwise I’d take a piece of this action, specifically because I don’t see any current evidence that makes it seem likely. I am not, to borrow your term, a “bio-supremacist.” Any process that can run on brain-meats can in principle be emulated in silicon or other suitable computational substrate. Where I can’t quite follow is the inference that LLMs are getting us any closer to AGI. The simplified history of AI research to date is something like this:
  1. Assume that complex task X cannot be performed without humanlike intelligence.
  2. Build a system that performs X.
  3. Trigger a brief popular panic that machines are about to wake up and kill us all.
  4. Realize that, no, the assumption in (1) was just incorrect from jump.
  5. Move goalposts and repeat.
I have great confidence in our ability to eventually build a system that passes set benchmarks for any well-defined task. The problem is that “AGI” is not a well-defined task, and part of step 3 above springs from what appears to be an article of faith that if we keep solving for enough well-defined tasks of steadily greater apparent complexity, one day AGI will sort of manifest itself automatically without troubling us to define exactly what we mean by it. We’ll just “know it when we see it,” ignoring for a moment how many people genuinely think they see it now.

Chess was considered part of the humans-only club until it wasn’t. Now that grandmaster-level computers are an old trick, we do not in retrospect consider that achievement a step towards general intelligence. For the computers that play chess well, asking whether they are in some sense “intelligent” otherwise remains a category error.

Complex language generation is ultimately just another instance of solving for a well-defined task. I don’t want to understate the significance of the achievement, or its potential utility, but this is no more a step towards AGI than playing chess. LLMs have not passed the Turing Test so much as they have disproven the premise of it. It is only because language generation is so intimately intertwined with our thinking to begin with that we are able to fool ourselves into thinking otherwise so easily.

When we encounter linguistic communication, we reflexively build mental models representing the entire context of the communication. Language is one of our primary interfaces for exporting our own mental states, and for inferring the mental states of other humans. The lack of a mind about which to make inferences does not prevent us from picking up cues embedded in a simulation of human output. We form clear ideas about the mental states of characters in a novel, even though we know these are simulacra produced through the artful application of linguistic cues.

LLMs trigger the same reflex to construct a mental image of a remote mind, but in a way that is harder to compartmentalize because the LLM is simulating a direct interaction with the reader rather than depicting a character acting within a story. Imagine trying to teach a child the difference between fiction and reality in a world where they can ask questions of characters in a story, or persuade them to take actions that influence the narrative. Ascribing any measure of mentality to the LLM is an error, however understandable. We have been generating plausible text from statistical models since the 70s. Today’s revolution is all down to coupling that basic approach with big data techniques and staggering amounts of compute-on-demand. ChatGPT doesn’t “think” any more than Eliza did, and does not represent any particular milestone in that direction.
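
For illustration, here's a toy word-level Markov chain, the kind of purely statistical text generator we've had for decades; a minimal Python sketch, not any particular historical system:

import random
from collections import defaultdict

# Toy word-level Markov chain: purely statistical text generation of the
# sort that predates LLMs by decades. It records which word tends to
# follow which and samples accordingly -- there is no model of truth or
# meaning anywhere in it.

def train(text):
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=12):
    word, out = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)   # frequency-weighted by repetition
        out.append(word)
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the last"
print(generate(train(corpus), "the"))

The scale and architecture have changed beyond recognition since then, but the sample-from-observed-statistics core is the part I'm pointing at.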

I do have a suspicion that LLMs have to some extent successfully replicated an important faculty of human brains. I tend to think that there exists within my brain a specific subnetwork trained for years on the vocabulary, syntax and idioms of American English, which operates autonomously to a certain extent. Given an idea of what I want to express and a general strategy for doing so, there is a sense in which the actual language composition just happens subconsciously. As an expat trying to immerse myself in a new language, the difference in where I have to spend my brain-cycles, in figuring out what to express versus rendering it into language, is stark indeed. The linguistic network I have for my native English seems to do a lot of what an LLM does. If pressed, I suspect more of human communication occurs at this LLM-ish level than we’d like to admit to ourselves. Current LLMs lack some of the plasticity of the brain’s version, but generally seem to do the same job. I suggest that LLMs do not solve for thought, but for fluency.

I’d also like to suggest that human mental performance is overemphasized in the context of AGI. A dog has general intelligence, despite the terrible job they do summarizing news articles for me. They can be trained in a wide variety of tasks and can independently apply their existing understanding of the world in an attempt to derive coping strategies for unfamiliar situations. Automation of knowledge tasks is valuable, but orthogonal to figuring how to replicate general intelligence.

In summation, impressive as they are, there is nothing inherently unique about LLMs compared to the sixty years of past innovations that have kept AGI resolutely “20-30 years off” for all that time. For want of any obvious breakthrough regarding “general” intelligence specifically, your timeline sounds a bit to me like just applying Hofstadter’s Law to the standard guess. Thus my money says in 30 years we will have solved for several more interesting well-defined knowledge tasks, AGI will still be “20-30 years off” and we will have a much evolved hindsight view of what LLMs actually brought to the table.
posted by gelfin at 4:56 AM on May 26 [5 favorites]


>I suggest that LLMs do not solve for thought, but for fluency.

Poets, priests, and politicians
Have words to thank for their positions

My suspicion is if reality can be modeled in words, LLM is going to work
posted by torokunai at 9:47 AM on May 26


Please describe this picture

This picture shows a specialized laundry machine for washing and drying shoes. The machine has Japanese labels and instructions, indicating it is located in Japan. The top part of the machine is the shoe dryer, while the bottom part is the shoe washer. Here are some key features visible in the image:
  1. Shoe Dryer (Top Part): The upper section has a door with an illustration of shoes and the word "くつ乾燥" (kutsu kansō), which means "shoe drying." It includes a coin slot and an operation panel for setting the drying time.
  2. Shoe Washer (Bottom Part): The lower section is the shoe washing machine. The door is open, showing the drum and instructions inside. It is labeled "くつ洗い" (kutsu arai), meaning "shoe washing."
  3. Instructions and Labels: There are several labels and instructions in Japanese, including how to use the machine and precautions.
  4. Coin Operated: The machine has a coin slot, indicating it is coin-operated, with a display showing the cost (e.g., 20 yen/100 yen).
  5. Placement and Surroundings: The machine is placed against a wall in what seems to be a laundry room or a similar facility. There is a wooden chair and some plastic bags in the foreground.
Overall, this image depicts a coin-operated shoe laundry machine designed specifically for washing and drying shoes, commonly found in Japan.

(this is f---ing incredible)
posted by torokunai at 9:09 PM on May 26 [2 favorites]


It is incredible, and it shows the strengths of current AI. The language translation bits are where the statistical methods of LLMs were first used, and it's why Google Translate became such a big thing.

The image analysis has been trained on millions of photos, tagged by humans. Enough that it has strong probabilities of matching. But if it was never trained on a washing machine shaped (for example) like a 5-pointed star, it could not intuit the function in the way a human could.
posted by CheeseDigestsAll at 6:00 AM on May 27




We typically see LLM responses cherry-picked for being either comedically stupid or else remarkably good. Yet we do encounter "serious but random" LLM outputs occasionally too, like when non-English speakers post LLM outputs into online forums, sometimes even for doing scams. Among these, there is some shitty-ness to the LLM text, like senseless repetitions.

Invariably, these texts would be better if the human had written the text in their native language and then run it through Google Translate, because the methods Google Translate uses wind up far better for translation than what LLMs do.
posted by jeffburdges at 1:19 AM on May 28


Poets, priests, and politicians
Have words to thank for their positions

My suspicion is if reality can be modeled in words, LLM is going to work
The positions of poets, priests and politicians self-evidently are not the entirety of intelligence. Those arguing for the sufficiency of language manipulation need to account for why the overwhelming majority of animate life on Earth is not vegetative for want of language.

The critique I’m going to suggest here is still a bit inchoate, so consider it an alpha release at best. I am a huge booster of the scientific/industrial/materialist position. The benefits of analytical rigor are undeniable. Outcomes that are testable and reproducible have built the modern world. Further, I readily concede that the sorts of mental rigor of which humans are uniquely capable is largely due to language (considered very broadly). Rules of inference and implication enable reliability of process.

But the sort of rigor we are talking about has its value precisely to the extent it is constrained. It is the opposite of freedom. I don’t mean that in any sentimental way, or to in any way imply dualism. An answer that can be fully verified and benchmarked is not a thought, but an algorithm, however sophisticated. Rules an actor is obligated to apply represent the intention of the rule-setter, not the actor.

This is one reason I suspect automating more and more sophisticated tasks does not approach general intelligence. Each task is considered an achievement only to the extent it conforms to the established benchmark. You cannot produce independent agency by instrumenting for a lack of it.

Furthermore, it is my view that there is an inherent contradiction in what we claim we are trying to do here. Behind the alleged desire for machines that think like humans is the age-old complaint among employers that humans do not behave enough like machines. To the extent general intelligence represents anything like independent agency, which I think it does in the minds of many of the laypeople most excited about the prospects, that goal stands in stark opposition to the actual goal behind all this profligate funding: production of an idealized worker, which by definition exactly and only implements the will of its owner/employer. In industrial automation you legitimately get to boast that the machine doesn’t get tired. Is it even rational to posit a truly intelligent machine that can never get bored?
posted by gelfin at 2:57 AM on May 28 [3 favorites]




Anytime people discuss AI adoption seriously, I enjoy highlighting the current pinnacle of AI in the arts & culture.

I Glued My Balls To My Butthole Again has 9.2% of the views of Fortnight by Taylor Swift, over a similar time period.
posted by jeffburdges at 6:51 PM on May 29




Troubling exchange just now:

https://chatgpt.com/share/e8575e06-d75f-4226-856e-a5d3bc3614a5

needs to not BS like that
posted by torokunai at 2:03 AM on June 2 [1 favorite]


That’s just standard lie-bot nonsense?
posted by Artw at 8:34 AM on June 2


yup; "4o" still needs an internal truth pass on the output
posted by torokunai at 11:30 AM on June 2


How would an internal truth pass work?
posted by april of time at 12:16 PM on June 2


It doesn't. LLMs have no sense of truth or falsehood, just probability. You might be able to decrease the likelihood of suggestions like putting glue on pizza by analyzing multiword groupings, but other than that, it's just going to parrot a variation of what it's been trained on.
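
To make that concrete, here's a minimal Python sketch of a single decoding step. The probabilities are invented for illustration; the point is that nothing in the loop ever represents true or false, only likely or unlikely:

import random

# At generation time the model exposes nothing but a probability
# distribution over candidate next tokens. There is no "truth" field to
# check -- these numbers are invented purely for illustration.

next_token_probs = {
    "cheese": 0.55,
    "glue":   0.25,   # a bad idea can be perfectly probable
    "basil":  0.20,
}

def sample(probs):
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r <= cumulative:
            return token
    return token  # fall through on floating-point edge cases

print("To keep the toppings on your pizza, add some", sample(next_token_probs))

Anything bolted on afterwards (blocklists, n-gram heuristics, RLHF) is another layer of the same statistics, not a truth check.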
posted by CheeseDigestsAll at 2:03 PM on June 2 [1 favorite]


The AI Revolution Is Already Losing Steam
Good article, and reasonably balanced. I hope the actual industry behavior is similarly rational, but I’ve been in that same industry too long to hold out much hope. People like Altman will crash the whole sector if they’re allowed to. They’ve done it before.

I’m about to be really uncharitable here.

The cynic in me detects a little whiff of Theranos in Altman’s hints and insinuations of impending “AGI.” My hunch is they are firmly in “fake it til you make it” mode. People have seen what ChatGPT can do, and can’t. They need something fundamentally new to overcome the limitations. If OpenAI had it, they wouldn’t keep it under wraps, and they’d be less focused on gimmicks like stealing Scarlett Johansson’s voice. The fact that “AGI” is their story makes the case even more effectively. It’s too broad, too vague. We need to hear how the hallucination problem will be fixed, and how a ChatGPT successor can approach something more like semantic information processing. We’re not hearing that. We’re hearing “we’ve just about got non-homicidal HAL-9000 in here, pinky-swear!”

This bubble will pop unless Altman can keep people throwing money at OpenAI, in a time when interest rates mean there’s less money available for the throwing. At the current cost and capability level, they’re in danger of becoming the Apple Newton of AI: an impressive engineering accomplishment that has potential legitimate utility, but cannot quite justify itself in its nascent form. Eventually their customers will figure out they’re going to have to keep employing humans, and then chatbot-as-a-service, however sophisticated, becomes an extra expense rather than a clear savings. OpenAI’s interest is in drawing out that realization as long as possible in the hopes they can manage to counterbalance it with a dribble of incremental improvements they’ll spin as milestones on the road to the AGI they’ve dangled.

Furthermore, my startup-bullshit sense leads me to suspect that they have formed an idiosyncratic internal definition of “AGI” (my money’s on some sort of multiplexed agent framework) that will definitely not jibe with any intuitive notions of what AGI means. There’s no market demand for an AI that might privately think the CEO is an idiot, or have practical or ethical objections to its orders. That’s why they want to get rid of the people to start with. Whatever OpenAI hopes to produce will be compatible with that demand, and continued employment, I’d wager, depends on drinking the kool-aid and calling whatever that is “AGI.”
posted by gelfin at 12:48 PM on June 3 [4 favorites]




It'd be hilarious once we have whole Zoom chats full of bullshit artists who replace themselves with deepfake bullshit artists, so they could simply claim they bullshitted each other for hours, but really nobody did anything.

We could even have hackers replace their avatars, but then nobody notices since the whole chat consists of deepfakes.
posted by jeffburdges at 1:50 AM on June 4




"We have a few key things that a grifter does not have, such as job stability, genuine friendships, and souls. What we do not have is the ability to trivially switch fields the moment the gold rush is over, due to the sad fact that we actually need to study things and build experience. Grifters, on the other hand, wield the omnitool that they self-aggrandizingly call 'politics'2. That is to say, it turns out that the core competency of smiling and promising people things that you can't actually deliver is highly transferable."

On top of this, Sam Altman was behind Worldcoin too, so absolutely a grifter there.

Also, Sam Altman's sister, Annie Altman, claims Sam has severely abused her.
posted by jeffburdges at 8:05 AM on June 19

