ChatGPT is bullshit
June 13, 2024 12:09 PM

Using bullshit as a term of art (as defined by Harry G. Frankfurt), ChatGPT and its fellow LLMs can best be described as bullshit machines.

Abstract:

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

And this bullshit might just be 'good enough' to give bosses even more leverage against workers.
posted by ursus_comiter (68 comments total) 48 users marked this as a favorite
 
Oooh, I loved Merchant's book and I didn't realize he had a blog! Thanks for posting.
posted by chaiminda at 12:23 PM on June 13 [1 favorite]


the link out to the plateau was fun:
"what are you doing, Hal?"
"making dots, Dave, why do you ask?"
posted by HearHere at 12:35 PM on June 13


Here's a great piece from Software Crisis about the specific kind of bullshit it is: The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con.
posted by tovarisch at 12:53 PM on June 13 [23 favorites]


I am still partial to Neil Gaiman's assessment:
"ChatGPT doesn't give you information. It gives you information-shaped sentences."
posted by Silvery Fish at 1:09 PM on June 13 [64 favorites]


"Mansplaining as service" is the best description I've heard.
posted by pantarei70 at 1:10 PM on June 13 [45 favorites]




(hmm, Medium seems to have decided the Thompson story is members-only since I last read it - try here.)
posted by i_am_joe's_spleen at 1:14 PM on June 13 [2 favorites]


It may be bullshit, but it still makes more sense than Donald Trump's gibberish.
posted by briank at 1:22 PM on June 13 [7 favorites]


This seems extremely accurate: "we consider the view that when they make factual errors, they are lying or hallucinating: that is, deliberately uttering falsehoods, or blamelessly uttering them on the basis of misleading input information. We argue that neither of these ways of thinking are accurate, insofar as both lying and hallucinating require some concern with the truth of their statements, whereas LLMs are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they’re doing."
posted by latkes at 1:29 PM on June 13 [29 favorites]


Sometimes I get responses back from people on matters and I am convinced that they were obtained by using AI. I don't want to accuse them of that because "that sounds like AI" is like the worst insult, but it seems like I'm getting more "answers" that don't say anything recently. Or answers that seem like they speak to keywords.
posted by dances_with_sneetches at 1:31 PM on June 13 [7 favorites]


At least Trump’s gibberish can be made into catchy tunes
posted by qxntpqbbbqxl at 1:36 PM on June 13 [3 favorites]


As an English Major and a man, I can confirm that bullshit works, to a distressing degree. You can get disturbingly far on bullshit alone, and the farther you get, the more of a problem it becomes when you actually need real knowledge.
posted by rikschell at 1:45 PM on June 13 [26 favorites]


So.... people who provide information-free answers feel they are giving you what you want, with less effort.

They think what you mostly want is the other half of a social exchange, question and answer. To the extent that _information_ is part of that exchange, it is of less importance than the fact they provided a response. And for many reasons -- what if they don't like my answer? what if the truth is unpalatable? what if my explanation is not understood? what if they sneer at my poor English? how can I explain this complicated thing? I don't want to take the time to give an answer I would consider good -- people will settle for completing the exchange and hoping for the best.

And often in organisations, middle managers do value the response more than the information. Everyone has had the experience of telling a manager that the information is not to be had. We don't know how long the project will take, we don't know how we will fulfil the sales team's promise, we don't know what output next month will be. And the manager will demand that we make something up, because they have to tell their boss.

This is also connected to academic cheating for some people who see the process of gaining a qualification as an extended hazing rather than learning and education.
posted by i_am_joe's_spleen at 1:45 PM on June 13 [22 favorites]


For many in higher ed, the desired learning process is to master bullshitting the system so effectively that one receives a sheepskin without having done any actual work. The chore is to seem like you are knowledgeable without having any knowledge, and to do it so well that the doors of politics and industry graciously open to you. Don't call me cynical; this is exactly what is going on, this is the system of old-money higher education As Designed. A training system for bullshitters, for people who are so good at sounding like they know what they're talking about that they don't have to know anything. LLMs are perfect for them.
posted by seanmpuckett at 1:54 PM on June 13 [22 favorites]


yeah the scary thing is I can't identify a cut-line between what I know now in my late 50s vs what an LLM can encode into billions and billions of "parameters" and reference on demand.

mebbe we have to raise these things like kids . . .
posted by torokunai at 1:59 PM on June 13 [2 favorites]


babies don't know what they know
children know what they know
fools don't know what they don't know
scholars know what they don't know
posted by torokunai at 2:01 PM on June 13 [10 favorites]


often in organisations, middle managers do value the response more than the information
*begins directing manager to llm whenever they want _some bs_*
posted by HearHere at 2:21 PM on June 13 [3 favorites]


Confabulation is also a great word.

The soup of patterns can't tell me an answer I would deem true while also knowing how to give any other person an answer they would deem true. We don't have a scoring system, either, for empirically verifiable patterns, so any pattern is as likely as any other. And we sold off better-versus-worse outcomes in any moral sense in favour of 'line goes up' as the best outcome.
posted by k3ninho at 2:32 PM on June 13 [1 favorite]


It may be bullshit, but it still makes more sense than Donald Trump's gibberish.

I think there's a group called chat2024.com that was working on making chatbots out of presidential candidates, but maybe the feds stepped in and put a stop to it as the site is dead.

There's nothing in the Constitution against voting for an AI model that is conceived in the US — corporations are people, my friends! — but perhaps we just have to wait 35 years for the privilege.
posted by They sucked his brains out! at 2:41 PM on June 13 [2 favorites]


There's nothing in the Constitution against voting for an AI model that is conceived in the US — corporations are people, my friends!

Just a minor point here, but only natural persons are eligible to be President, and corporations are only legal persons.
posted by angrynerd at 2:46 PM on June 13 [3 favorites]


For now
posted by They sucked his brains out! at 2:49 PM on June 13 [5 favorites]


I have been arguing this point to anyone who will listen for a while.

And I have very little faith in academia, but it really does warm my heart very slightly that the title of this paper is, literally and entirely, "ChatGPT is bullshit"; that the reviewers and editors all agreed that this is the correct technical term, as defined in the seminal work of Harry Frankfurt; and that therefore anyone who needs to do so can now cite, extremely specifically, the fact that ChatGPT is indeed bullshit.
posted by automatronic at 2:53 PM on June 13 [32 favorites]


This article aligns with something I've been mulling over and trying to crystallize for a while now—thank you for posting it.

Unfortunately, a lot of what we're learning in this time of ChatGPT (please let it be a passing time and not an age) is that our society runs on bullshit far more than most of us would like to believe. I've been in too many meetings, conferences, and other contexts where, presented with the barefaced reality of what these tools are able to generate, the response is "take my money!" Being a member of society today unavoidably involves wading through a vast sea of bullshit. LLMs didn't create that problem, but they are making it cheaper and easier to create larger and larger volumes of bullshit—if we were ankle deep in it 20 years ago, we're hip deep now, and we can all look forward to being neck deep before too long.

But the thing I find most sad about this situation is the way these bullshit generators are crowding out some of the genuinely worthwhile technological progress that's been made in the last few years. I mean, we have networks that can reliably identify and track objects in images! That's a genuinely hard problem with real, non-bullshit applications! We've just about cracked not only quadrupedal locomotion but bipedal locomotion! These are super hard things that until a few years ago, only biological brains could do even somewhat decently. Computers that are smart like dogs are world changing (world improving!) technologies, but we're too distracted trying to fire every human who works in a call center and generate an infinite torrent of shitty marketing copy.
posted by angrynerd at 3:00 PM on June 13 [35 favorites]


In 1994 my high school English teacher explicitly taught us how to bullshit, using that word, aimed at e.g. college applications. (It worked.) Now I think she must have been familiar with Frankfurt's essay.
posted by joeyh at 3:24 PM on June 13 [5 favorites]


Frankfurt's book focuses heavily on defining and discussing the difference between lying and bullshit. The main difference between the two is intent and deception. Both people who are lying and people who are telling the truth are focused on the truth. The liar wants to steer people away from discovering the truth and the person telling the truth wants to present the truth. The bullshitter differs from both liars and people presenting the truth with their disregard of the truth.

I find this a bit of a distinction without a difference, since the intent of the bullshitter (at least the human ones) is the same as the liar's; cf. Steve Bannon's "flood the zone with shit."

It's true that LLMs don't care about the truth, but they don't care about anything; they're computer programs. I find the "stochastic parrot" terminology more accurate.
posted by CheeseDigestsAll at 3:37 PM on June 13 [3 favorites]


The only reason LLMs seem to know things is that they have been trained on things real people have said, some of which is true, and some of which we merely agree, more or less, is true.

They are also trained on jokes, sarcasm, lies, The Onion, things children have said, and political rhetoric.

Which means when it rolls the dice and pulls out things that, statistically speaking, look like sentences, sometimes they appear correct. But that's just because there's a fair bit of true stuff on the internet. That same dice roll can just as easily tell you that pizza is made with glue or that birds aren't real. In both cases, the LLM has worked as designed. In neither case does it 'know' anything about the output it created.
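
To make the dice roll concrete, here's a toy sketch (my own illustration; real LLMs are giant neural networks with billions of learned parameters, not word-pair counters, but the dice roll at the end is the same move):

    import random
    from collections import defaultdict

    # Toy "training data": the model only ever sees which word followed which.
    corpus = ("pizza is made with cheese . pizza is made with glue . "
              "birds are real . birds are not real .").split()

    next_words = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        next_words[a].append(b)

    def babble(start, length=8):
        # Roll the dice: pick each next word by how often it followed the last.
        out = [start]
        for _ in range(length):
            options = next_words.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # frequency, not truth
        return " ".join(out)

    print(babble("pizza"))  # as likely to end in "glue" as in "cheese"

Nowhere in that loop is there a check against the world, and scaling the counts up to billions of parameters doesn't add one.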

So yes. What science has created is a confident white man with the worst possible case of the Dunning–Kruger effect.

Our ability to anthropomorphize inanimate objects does the rest.

Industry is trying to use this same trick by making cute robots. If a robot has eyes and a few programmed expressions, people will tend to treat it as something between a human and a pet.

I'm suggesting to my friends that it is time to rewatch Spielberg's AI: Artificial Intelligence. The bit with the robot carnival is on the nose.
posted by chromecow at 3:41 PM on June 13 [11 favorites]


I think the meaningful difference is that LLMs don't have a regard for the truth because they are incapable of one. Thus, they are not lying or hallucinating, but rather bullshitting.
This framing doesn't assume any intentionality on their part.
posted by signal at 3:42 PM on June 13 [7 favorites]


The next generation of LLMs is going to be even better at the bullshit, and I'm worried that it will be so much better that it will convince too many people that it's not spouting bullshit, and they'll get in trouble as a result.
posted by It's Never Lurgi at 3:54 PM on June 13 [1 favorite]


The one thing I have found ChatGPT to be absolutely flawless at is mealy-mouthed corporate apologia. Ask it to explain why, regrettably, most or all of the workers must be laid off, and it captures exactly the tone and content you always see in announcements of those things.
posted by Jon Mitchell at 4:40 PM on June 13 [21 favorites]


One aspect I haven't seen discussed much is how the dialogue format shapes our perception. I think one reason we want to believe so much is that we interact with LLMs through natural language (CS term of art there) and they respond in idiomatic English. I mean, yeah, we see nods to Eliza and so on, but not discussions of what it means to have a dialogue with a partner that is actually a monologue from you in response to prompts from the machine. Yes: I want to say that it is the LLM that is prompting us to express actual manifestations of intelligence, not the other way around.

If you had to write a program in some kind of artificial language or notation, and got an answer that was telegraphic and semi-coded, I think it would be more obvious that it's a machine.

And yet that is actually the crux of it: because what it is doing is probabilistically chaining words together, that's actually all there is. There is no representation of the world in there, no store of facts being processed. It's just burbling. It is an empty vessel, endlessly chattering. Probably no accident that ChatGPT is so named.
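
One way to picture the burbling: each word the machine emits is just a sample from a probability distribution over tokens. A hypothetical sketch (the scores below are invented for illustration; a real model computes them from billions of learned weights):

    import math
    import random

    # Invented scores a network might assign to candidate next words after
    # "The capital of France is". There is no fact store behind them, only
    # numbers distilled from text statistics.
    logits = {"Paris": 9.1, "Lyon": 5.3, "glue": 1.2}

    def sample_next(logits, temperature=1.0):
        # Softmax turns raw scores into probabilities; then roll the dice.
        weights = {w: math.exp(s / temperature) for w, s in logits.items()}
        r = random.uniform(0, sum(weights.values()))
        for word, weight in weights.items():
            r -= weight
            if r <= 0:
                break
        return word

    print(sample_next(logits))  # usually "Paris", right for the wrong reasons

When it says "Paris" we call it knowledge; when it says "glue" we call it a hallucination. Same dice, same machine.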

But anyway, +++OUT OF CHEESE ERROR+++
posted by i_am_joe's_spleen at 4:57 PM on June 13 [8 favorites]


I think there's a group called chat2024.com that was working on making chatbots out of presidential candidates, but maybe the feds stepped in and put a stop to it as the site is dead.

Someone in Cheyenne, Wyoming, is trying to run for mayor with the promise that he will govern using only a chatbot.
posted by ElKevbo at 4:57 PM on June 13


With the current state of AI, I don't expect it to put out anything but bullshit. But I take a pile of bullshit and polish it into a GD gemstone. Historically, I'm the one that writes the bullshit rough draft so it's nice to have an AI do that part for me.

AI won't be as good as humans until it's nearly as smart as most humans, but it's a slick tool as long as you understand that it doesn't DO anything for you, it helps you do things.
posted by VTX at 5:00 PM on June 13


understand that it doesn't DO anything for you, it helps you do things.

You understand that and I understand that and most of the people reading this thread understand that.

It's everyone else that I'm worried about.
posted by It's Never Lurgi at 5:15 PM on June 13 [9 favorites]


Pro: Like human bullshitters, LLMs are not concerned with truth or belief, but only plausibility. The success metric for bullshit is how effectively the audience is manipulated by the performance. This is also the success metric for LLMs. Thus the pragmatism implicit in the Turing Test seems to have contained an unanticipated trap: the testers are part of the audience, and there is no way to correct for this. Distinguishing between credibility and the performance of credibility is hard enough among humans. We know the LLM is a simulation. What possible testing protocol could distinguish between a simulation of credibility and a simulation of a performance of credibility, if there can be any meaningful distinction at all?

Con: To Frankfurt’s point, bullshit is not mere disregard of truth. Lack of a distinct truth polarity is not necessarily indifference to truth. When possible, the most effective way to sound like you know what you’re talking about is to actually know what you’re talking about. Bullshit artists are simply those who won’t let lack of the latter get in the way of the former, and more to the point, value managing the audience’s response above all other concerns. Their performance of credibility is informed (however incompletely), directed, goal-seeking behavior of which the LLM is utterly incapable. It is the ethical load of the word “bullshit” that makes it inappropriate for LLMs. A “hallucination” is unintentional. “Bullshit” is not. They both evoke the anthropomorphic idea of an inner self, but the latter compounds the error with an implication of intentionality.

In conclusion, the point about “bullshit” is well-taken, but I’d like to reframe it along the same lines everything else surrounding LLMs should be reframed: stop centering the machine. The LLM is not capable of the traits it is designed to mislead us into ascribing to it. The LLM is an astounding innovation for enabling us humans to bullshit ourselves.
posted by gelfin at 5:38 PM on June 13 [13 favorites]


Using bullshit as a term of art (as defined by Harry G. Frankfurt), ChatGPT and its various LLM cohort can best be described as bullshit machines.

It's kind of an interesting philosophical question that Frankfurt might have been interested in, were he still alive. His definition of bullshit includes conscious intent. The bullshitter intends to convince you of their position, while not caring if their position is based on statements of truth or of falsehoods.

Whereas AIs have no concept of truth or falsity and cannot have one, because they are not conscious — which raises the question of whether they can bullshit if they don't know and cannot know what is true or false. They don't even really much care if you like their answers.

The programmers and shareholders, however...
posted by They sucked his brains out! at 5:50 PM on June 13 [1 favorite]


Hinton is well-known for pointing out the "next-token reductionism" in the argument that LLMs cannot be intelligent or understand the truth because they are designed to find the next token. My own understanding is that this is a subtle technical point that goes all the way back to Turing equivalence and other interrelated ideas that 20th-century analytic philosophers had about the relationship between mechanical logic and truth. At any rate, I wish the authors of this piece had addressed Hinton's point, which has been well publicized, because they use the next-token prediction model as their central argument, a premise Hinton ostensibly disputes.
posted by polymodus at 5:52 PM on June 13 [5 favorites]


stop centering the machine

Yep.

"Actually, Dr. Bullshit is the name of the monster's creator."
posted by away for regrooving at 6:01 PM on June 13 [31 favorites]


VTX and It's Never Lurgi, you get to part of my core beef with this tech.

It can help you do stuff. You still have to have the domain knowledge to parse what it gives you.

I too am starting to see more LLMish responses in my life, mostly on internet forums, and honestly it reminds me of the shit I would post on forums myself when I was 14 - polite gibberish that uses a lot of weasel words and vaguely explains the general consensus of the forum on that topic. There was no actual detailed knowledge to add to the discussion, just a regurgitation of what I'd read on the forum. I think I wanted to be helpful and look smart.

As someone with a little more perspective now, it drives me nuts. I still have to resist the urge to post minimally informed takes, and when others do, I just feel like it's clutter someone trying to learn something will have to wade through and possibly be misdirected by in the future.

Bah humbug.

I still need to make shirts that say "ChatGPT can't weld".
posted by jellywerker at 6:56 PM on June 13 [5 favorites]


the age demanded
posted by graywyvern at 7:07 PM on June 13


"Actually, Dr. Bullshit is the name of the monster's creator."

Heh.
posted by Tell Me No Lies at 7:39 PM on June 13 [1 favorite]


Being in the field, the epiphany came to me when I realized: it's all hallucinations, just that some of them also happen to correspond to factual reality.
posted by paladin at 10:16 PM on June 13 [8 favorites]


"Yeah ChatGPT can't weld *now* buh buh the techbros and venture capitalists hyping it say that in just a few years ..."

*Batman slapping Robin meme*
posted by GallonOfAlan at 12:45 AM on June 14 [1 favorite]


Being in the field, the epiphany came to me when I realized: it's all hallucinations, just that some of them also happen to correspond to factual reality.

I love this insight, though it does also sound like you're outputting unedited plagiarism having scraped and trained on a dataset from erowid.org
posted by protorp at 12:45 AM on June 14 [2 favorites]


The thing is that the bullshit is useful. It's often better bullshit than I can generate.

For a performance review lately I was told to write something explaining how my work has aligned with the Company Values over the last 12 months.

Normally I find that kind of thing agonizing. But this time I just went to ChatGPT, told it to generate that explanation with a list of the company values, then just pasted in stuff I'd actually done in place of its fictitious examples.

My boss has never been happier.

Managers already exist in a world of vague bullshit. There was a piece a while back explaining why they love bullet points so much. You can put bullet points like:
  • Cut costs
  • Improve customer satisfaction
  • Increase revenue
If you had to put those into a sentence, you would have to put some kind of causal relationship in there explaining what leads to what, maybe even confronting conflicts between them. But bullet points are preferred because they are vague. They let you detach more from reality.

From a manager's point of view, the way LLMs produce utter bullshit with effortless fluency is the most wonderful thing about them.
posted by TheophileEscargot at 1:40 AM on June 14 [16 favorites]


TheophileEscargot: this exemplifies what I've been trying to say upthread. Thanks.
posted by i_am_joe's_spleen at 1:46 AM on June 14


I’d like to reframe it along the same lines everything else surrounding LLMs should be reframed: stop centering the machine. The LLM is not capable of the traits it is designed to mislead us into ascribing to it. The LLM is an astounding innovation for enabling us humans to bullshit ourselves.

I want a machine that will favourite this point harder than I can.
posted by flabdablet at 1:49 AM on June 14 [2 favorites]


I agree people often want bullshit. Also, bullshit dominates much corporate & government work.

As a more concrete statement, AIs/LLMs always output considerable garbage, so humans must review their outputs for mistakes. It follows that AIs/LLMs are useful when their output domain is easily reviewable. We humans are not so great at reviewing either, so the output domain being low-stakes matters too.

Automated translation is better done by specialized AIs less prone to hallucination, and automated translation already works well when reviewed by humans. Automated live translation does occasionally get someone arrested or deported, but safeguards could be added because these are not general-purpose LLMs.

Afaik an LLM cannot really write a novel worth reading, because it runs off into hallucinated stupidity, and novels are so long that reviewing this sucks; but more locally some AIs help mimic specific styles, which sounds useful for human authors.

Advertisers only really care about the impression, not about correctness. Advertisers could avoid words with specific legal meanings fairly easily too. AIs should therefore be incredibly useful in advertising. It's all already bullshit so they cannot do much damage. lol

A two-minute pop song could be reviewed easily too, so expect considerable AI-generated pop music. Obscurest Vinyl has one track which held like 9-10% of the views of Fortnight by Taylor Swift for like a month. And this gem is crazy. An AMA describes his methodology:
Hey thanks! So, yes, the music is AI, but I write the lyrics and piece together the best "takes" to make it sound like a complete song. Sometimes I will add a layer of keys or strings to smooth out the transitions.

Since 2017, I've been designing all of these covers myself (no AI in the art), but the music was always the missing piece to the joke. As a musician/songwriter myself, I've tried numerous times to make it happen, but it's just not feasible. These AI programs allow me to finish this whole idea of unearthing these insanely stupid and forgotten records haha.

I had no idea the songs would get the attention they've been getting. It's fun.
Alright, so "it's just not feasible" to convince other talented humans to help do these dirty little jokes. At some point, advertisers should figure out they'd have similar problems, but AIs could help them make extremely catchy pop songs that promote their product, and then radio listeners could not necessarily separate advertisements from real music. All of pop music could descend into bullshit. lol
posted by jeffburdges at 1:54 AM on June 14


It's all already bullshit so they cannot do much damage.

Assumes facts not in evidence.
posted by flabdablet at 2:48 AM on June 14 [3 favorites]


It's all already bullshit so they cannot do much damage.

Assumes facts not in evidence.


Assumes facts will no longer be available as evidence.
posted by srboisvert at 3:51 AM on June 14 [5 favorites]


"Actually, Dr. Bullshit is the name of the monster's creator."
I will never forgive myself for not coming up with this. Bravo.
posted by gelfin at 3:59 AM on June 14 [3 favorites]


Yeah, of course. I'd written that from a narrower perspective but failed to revise: advertisements are brief, shallow work, so replace your advertising department and nobody would notice. lol

My music example brings up one real concern: advertising could be improved by AIs, which very much creates social problems. In particular, AIs could simulate human talent, which advertisers cannot currently corrupt as much as they'd like.

AIs already manipulate individuals using information which advertisers could not process cost-effectively using humans.
posted by jeffburdges at 4:01 AM on June 14


OpenAI has plans to rectify this by training the model to do step by step reasoning (Lightman et al., 2023) but this is quite resource-intensive, and there is reason to be doubtful that it will completely solve the problem—nor is it clear that the result will be a large language model, rather than some broader form of AI.
This seems like they are hedging their bets - so if it turns out that this new model is more accurate, they can say "that doesn't count, we were specifically talking about LLMs". What exactly is being alleged to be bullshit here? LLMs? ChatGPT? Feed-forward neural networks trained on massive amounts of data? AI in general?

If it's just LLMs - or even just LLMs on their own, or LLMs in their current state - then who really cares? It's like saying "self-driving cars will never be a thing" but then clarifying that you are just talking about self-driving cars without LIDAR, or self-driving cars without GPS guidance, or whatever. It becomes a technical nitpick. ChatGPT is already not purely an LLM - GPT-4 has had vision capabilities for a while now, and GPT-4o is a multimodal model.

When people claim that current AI models are "bullshit", I take that to mean that the current mania is pursuing an approach that is a dead-end. That implies more than just that LLMs as they stand have certain limitations, but that current architectures, tools and so on cannot be easily repurposed for an AI approach that actually works.
posted by L.P. Hatecraft at 4:10 AM on June 14 [1 favorite]


When people claim that current AI models are "bullshit", I take that to mean that the current mania is pursuing an approach that is a dead-end. That implies more than just that LLMs as they stand have certain limitations, but that current architectures, tools and so on cannot be easily repurposed for an AI approach that actually works.

You can take it to mean what you like, but there is no such claim in this paper. What they're saying is that these tools, as they exist today, produce bullshit.
posted by automatronic at 4:24 AM on June 14 [5 favorites]


It's hard to see how they can produce anything besides bullshit, except by accident. A machine that squeezes out spaghetti strands doesn't evolve into a five-star chef. It just makes spaghetti.
posted by kittens for breakfast at 4:33 AM on June 14 [3 favorites]


They're not simply talking about these tools as they exist today with that quoted statement, though; they are making (guarded) predictions about their future capabilities. I think it's also fair to comment on what these criticisms are insinuating - which is that there are fundamental problems with the current approach to AI. My point is that if you want to limit your criticism to LLMs, then the issues are not fundamental, and if you don't, then the argument hasn't been made.
posted by L.P. Hatecraft at 4:34 AM on June 14


Well, that sounds like something an LLM would say, L.P. Hatecraft.
posted by kittens for breakfast at 4:38 AM on June 14


I think the opposite, kittens for breakfast. It's the anti-AI posters in these threads who constantly regurgitate clichés, much like an LLM does: "it's a confident white guy simulator", "late stage capitalism", "spicy auto-complete", "just predicting the next token" and so on. Also, just like an LLM it often seems like their training data is stuck in late 2022 or mid 2023, which is why they often say stuff like "LLMs aren't grounded in reality because they are trained on text only" - ignoring newer multimodal models.
posted by L.P. Hatecraft at 4:57 AM on June 14 [2 favorites]


"LLMs aren't grounded in reality because they are trained on text only" - ignoring newer multimodal models.

The thing is, multimodal (as of today) only fixes small things. You can add on a math recognizer so they don't suck at math, and glue on other fixes for specific forms of errors, but it's still not based on any kind of epistemology.

It will take the development of an entirely new paradigm and testing and adoption of it before we see advances in "intelligence". Is that possible? Yes. In the next decade? I have my doubts. Will the ChatGPTs improve during that time? Yes, but only incrementally.
posted by CheeseDigestsAll at 5:35 AM on June 14 [7 favorites]


A nice example of the kind of bullsh*t that satisfies management is in Connie Willis's Bellwether (1996), when Management asks for the "five objectives" the attendees were supposed to write:
Gina snatched the list from her and wrote rapidly: 1. Optimize potential 2. Facilitate empowerment 3. Implement visioning 4. Strategize priorities 5. Augment core structures.
Which was, apparently, what the character always wrote in those situations.
posted by Peach at 7:09 AM on June 14 [7 favorites]


In conclusion, the point about “bullshit” is well-taken, but I’d like to reframe it along the same lines everything else surrounding LLMs should be reframed: stop centering the machine.

So, Microsoft is bullshit. Dr. Bullshit, I presume. Humans are not necessarily Microsoft; as a human, please minimize the association of Microsoft with me, thank you.

This is the same problem when talking about climate change: we don't say "Exxon" enough. There are specific political entities making these political decisions.
posted by eustatic at 7:31 AM on June 14 [2 favorites]


I am going to gently suggest that fans of bullshit read the essay that’s the final link of the post about how bullshit being good enough impacts labor.
posted by ursus_comiter at 7:40 AM on June 14 [3 favorites]




Just the other day, I was browsing PetFinder (as one does) and found a dog whose "story" started out like this:

"Sure, here's a playful biography for a female dog named Daisy."

*facepalm*
posted by acridrabbit at 6:03 PM on June 14 [6 favorites]


it will take the development of an entirely new paradigm and testing and adoption of it before we see advances in "intelligence". Is that possible? Yes.

If I understand correctly, it sounds like all we have to do is (a) establish what has to be done, (b) figure out how to do it, and (c) actually do it. Well, shit, we've got this thing on lock, there's no stopping us now.
posted by kittens for breakfast at 7:22 PM on June 14 [10 favorites]


On the subject of bullshit, Sam Altman was behind Worldcoin too, so there's a high bullshit rate there.

Also, Sam Altman's sister, Annie Altman, claims Sam has severely abused her.
posted by jeffburdges at 12:20 PM on June 16 [1 favorite]


I'm watching the rerun of the 60 Minutes Geoffrey Hinton interview and it's put me in fear of my life. Not of ChatGPT, but of credulous dipshits.
posted by ob1quixote at 4:33 PM on June 16 [5 favorites]


Credulous dipshits. Disposed to. Being impressed by. The portent. Of that presenter's. Delivery?

Seriously, there must be about two words each on that guy's cue cards.
posted by flabdablet at 7:28 AM on June 18 [2 favorites]


“AI Lie: Machines Don’t Learn Like Humans (And Don’t Have the Right To),” Avram Piltch, Tom's Hardware, 13 September 2023
posted by ob1quixote at 6:53 PM on June 18 [2 favorites]

