Re-sourcing the Mind
August 4, 2024 2:49 AM   Subscribe

What might we lose and gain through widespread usage of Large Language Models? The invention of writing gave us a way to offload our thoughts and memories onto objects, and it has since formed an indispensable part of our civilization. Technology philosopher L.M. Sacasas examines the historical parallel and asks if we might be losing something fundamentally human as people start using LLMs not just for boilerplate but for deeply personal expressions.
posted by ndr (54 comments total) 39 users marked this as a favorite
 
This is an excellent essay, a smart angle on the problem and lightly prescriptive, all while remaining optimistic. In short, it's very human.
posted by chavenet at 3:29 AM on August 4 [2 favorites]


Big fan of Sacasas' writing, he’s got some interesting viewpoints
posted by The River Ivel at 3:37 AM on August 4


the other day I heard about some Google commercial depicting a child wanting to write a letter to her favorite athlete, and being encouraged to have an LLM generate the letter for her, which… surely I'm missing something, right? Because that sounds like a profoundly ghoulish premise for a TV commercial, right?
posted by DoctorFedora at 3:50 AM on August 4 [25 favorites]


Previously we discussed the homogenisation of speeches for rare but special events, say at a wedding.

There always will be the tension between meeting the social contract and being distinct enough to be memorable; in prior times we might have said "knowing which rules to break."
posted by k3ninho at 3:51 AM on August 4 [1 favorite]


Opinion I hate the Gemini ‘Dear Sydney’ ad more every passing moment by National Treasure Alexandra Petri
posted by lalochezia at 4:15 AM on August 4 [12 favorites]


My coworkers use chatgpt for tasks like, "can you rewrite this email to sound less frustrated?" and "can you turn this document into a powerpoint presentation?" The former bothers me and the latter doesn't, and I think this piece and other work by Sacasas gets at why. See also this piece by Rob Horning on "emotional canceling." When I send an email, maybe it's relevant that your last minute request made me frustrated. That emotion is vital information telling you to plan better. It's also important for you to remember that I'm a human. The reminder that I am a human might be a more relevant bit of information than the quality of my writing overall.
posted by tofu_crouton at 4:36 AM on August 4 [13 favorites]


My mum's been using AI to write stories to read to my kids and I'm really uncomfortable with it. My partner points out that she wouldn't be writing the stories herself so at least she's reaching out, but it feels like it cheapens the whole thing and I kind of would be happier if she didn't do anything instead.
posted by Silentgoldfish at 5:36 AM on August 4 [7 favorites]


It's possible she finds new reading material more entertaining than reading the same story, which kids accept/like. It's possible research should be done into how story variability impacts kids. I've always felt AIs write poorly, but maybe some more targeted generative story brings advantages.

Anyways, you could order her these really good children's books designed to entertain both the kids and the adult reading.
posted by jeffburdges at 5:47 AM on August 4 [1 favorite]


To me it's all about modifying what's seen as normal.

There was a time when records and formal correspondence were all handwritten, and making stuff look professional involved either having nice clear handwriting or hiring people who did.

Then typewriters happened, and within a decade or two the idea of handwritten business correspondence entirely disappeared. In order for any kind of record keeping or correspondence to be considered professional, it had to be typewritten.

A skilled typist can put text on paper faster than a skilled handwriter can, so this was seen as a productivity improvement - but most people are not skilled typists, so for most people the new requirement just added processing steps and/or equipment expense that simply weren't there before. Think of the archetypal police-procedural drama, which always includes an embittered nicotine-stained cop doing the two-fingered hunt and peck to get their report typed up before deadline.

Then computers happened, and a lot of business and government correspondence started coming off printers rather than typewriters. Again, printing is faster than typewriting so this was generally understood to be a productivity improvement - but a computer with a printer attached is also much more expensive than a typewriter, and using that combination effectively also requires whole new sets of skills, so again for ordinary people the gains that the new technology made possible were pretty much entirely wiped out by the added complexity inherent in the existence of that technology. The daisy-wheel printers required to get printed correspondence to look as presentable as the typed correspondence it was replacing were also far more expensive than the much faster dot-matrix types.

Then the Macintosh appeared, closely followed by the LaserWriter, and virtually overnight it was no longer sufficient for a business document to be typewritten; now it actually needed to look almost book-quality typeset. Again, this required whole new skillsets and to this day most people simply don't have those.

From a personal productivity point of view, innovations in the technologies used to get words on paper and have those words taken seriously have been pretty much a wash. The availability of those technologies to the larger businesses for whom they absolutely do yield productivity improvements has just altered the standards expected for serious written correspondence, to the point where generating formal paper correspondence takes about as long as it used to when it was all copperplate and inkwells and requires access to equipment that's way more expensive besides.

It seems to me that LLMs are currently doing much the same thing to the expected tone of online correspondence that the LaserWriter did to the expected look of printed correspondence. Give it five years and it will be hard to get taken seriously without a strong grounding in Delvish.
posted by flabdablet at 5:50 AM on August 4 [18 favorites]


We use LLMs in my current work, and as one of the only people on the team with an actual understanding of them, I'm constantly warning against over-reliance and coming up with ways to make sure our usage is safe and effective.
For instance, we recently used an LLM to rewrite over 200 phrases that had come out of a series of workshops with experts on a subject. The phrases varied widely in how well written they were, their length, etc., and we wanted to normalize them. When we asked the LLM to do this, it was reasonably good at it, except: it made up phrases that weren't in the actual data.
You can see why this might be a problem.
I wrote a python script to feed the phrases in 1 by 1 to the LLM (using its API) and place the original and the modded phrases in two different spreadsheet columns, so it wouldn't change the number and it was easy to eyeball how good the rewriting was (it was okayish).
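A minimal sketch of that kind of one-phrase-per-request pipeline (the `llm_call` hook, prompt wording, and function names here are hypothetical stand-ins for whatever API the script actually used):

```python
import csv

def rewrite_phrase(phrase, llm_call):
    """Send exactly one phrase per request so the model cannot
    invent, merge, or drop items across a batch."""
    prompt = ("Rewrite the following phrase in a consistent, "
              f"neutral style. Return only the rewrite:\n{phrase}")
    return llm_call(prompt)

def normalize_phrases(phrases, llm_call, out_path):
    """Write original and rewritten phrases side by side in two
    columns, so a human can eyeball the rewrites and confirm the
    output count matches the input count."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["original", "rewritten"])
        for phrase in phrases:
            writer.writerow([phrase, rewrite_phrase(phrase, llm_call)])
```

Because each request carries a single phrase, the number of output rows is guaranteed to equal the number of inputs, which is exactly the property the whole-batch approach lost.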
I think in the near future, maybe in 10 or 15 minutes, being able to understand the limits and problems with LLMs and other forms of generative "AI" will be a crucial professional asset, at least until the hype-bubble fully bursts.
posted by signal at 6:04 AM on August 4 [19 favorites]


At present, experience & expertise usually come coupled with ideology, with the most famous being the Hippocratic oath, although maybe the social sciences have stronger opinions on Primum non nocere.

Aside from economists, whose ideology focuses upon maximizing harm, your typical social scientist has fairly strong biases towards benefiting their fellow humans near-term. We've a rich history of those benefits causing long-term harms, which likely continues in deep ways, but that's another topic.

In principle, social sciences research could be repurposed towards harming groups, not unlike how drug discovery AIs could be repurposed to design bioweapons. Arguably the CIA etc. already do so, but maybe they've only a limited appetite for poring over anthropology journals or studying pop culture, and considerable overhead.

As an example, AI hip-hop artists could be designed to damage communities who use non-formalized languages, using the somewhat rarified knowledge of the people who study them.

It's much less dangerous than what economists do anyways of course, but AI generated songs already cover topics unlikely for artists who practice given styles.
posted by jeffburdges at 6:15 AM on August 4 [3 favorites]


It seems to me that LLMs are currently doing much the same thing to the expected tone of online correspondence that the LaserWriter did to the expected look of printed correspondence.

An interesting point; however, I don't think that any of us are going to spend more time reading those inevitable letters we get from our banks or cell phone providers telling us how seriously they take the security of our personal information but apologizing for the latest breach in security, just because they were written by an LLM trained on customer relations correspondence.

Failure of modern businesses to generate real trust with their customers will not be back filled by improving the tone of what they tell us, either in print, email, or on the phone. If anything, it will leave us all the more cynical.
posted by Insert Clever Name Here at 6:19 AM on August 4 [24 favorites]


the other day I heard about some Google commercial depicting a child wanting to write a letter to her favorite athlete, and being encouraged to have an LLM generate the letter for her, which… surely I'm missing something, right? Because that sounds like a profoundly ghoulish premise for a TV commercial, right?

Google pulls its terrible pro-AI “Dear Sydney” ad after backlash.
posted by Insert Clever Name Here at 6:24 AM on August 4 [8 favorites]


You can see why this might be a problem

just as I can see why laying out a Microsoft Word document using loads of repeated paragraph breaks to push a paragraph onto the next page might be a problem.

The trouble will be, as always, that the overwhelming bulk of any given technology's users will never anticipate the new classes of inherent problem that the tech brings with it, due to being required to use the new stuff without ever having been given either time or motivation to come to grips with the countless new failure modes so consistently glossed over by its sales droids.

New technology always promises to save labour, but ultimately all most of it does is make most people's work more annoying and add more vomit in aisle 6 for those of us in the tech janitorial classes to clean up.

Failure of modern businesses to generate real trust with their customers will not be back filled by improving the tone of what they tell us, either in print, email, or on the phone.

Quite so. But I have seen enough Glossy Brochure Effect to convince me beyond doubt that it won't stop them trying.

Might I also draw your attention to your own use of the word "improving" here, and point out that sanding down a text's rough edges and staining it the approved shade of corporate beige does not necessarily amount to improving it, that this is a distinction entirely invisible to most of the managerial class, and that keeping it clear in our own minds is going to require ever more self-scrutiny as the present tsunami of entirely self-absent LLMs continues to inflict flood damage on every workplace.
posted by flabdablet at 6:51 AM on August 4 [11 favorites]


I don't think that any of us are going to spend more time reading those inevitable letters we get from our banks or cell phone providers

Just set up a subscription to our expensive new AI-powered mail filter so we can notify you when something actually worthy of your attention arrives. That'll save you endless time! Trust us!
posted by flabdablet at 6:55 AM on August 4 [1 favorite]


It's a great essay, though somewhat misses the point of... [gestures at the last several thousand years of human technological development] ...all of this.

Writing is an intellectual prosthesis.
An encyclopedia is an intellectual prosthesis.
Your bank's mainframe is an intellectual prosthesis.
The Internet is an intellectual prosthesis.
Wikipedia is an intellectual prosthesis.
And LLMs are an intellectual prosthesis.

All of these are mechanisms for offloading the rote aspects of cognition so that we can translate ourselves into overseers of slightly higher-order structures than was previously possible. This is nothing more or less than the evolution of human cognition - the slow and steady expansion of the self.

Since the day parents watched their offspring play with fire or invent agriculture and despaired, people have been lamenting technological development as the end of natural selection and the halting of evolution. This behavior is both entirely natural and entirely missing the point: it's not about you or your evolutionary fitness, dummy, it's about the fitness of the genes themselves. Similarly: it's not about you or your self-important metacognition, dummy, it's about the fitness of the higher-order structures themselves. Evolution of the abstract superstructure is every bit as valid as that of the synthetic hominid-silicon substrate it rests upon.

Business writing and bureaucracy are hateful because they are emblematic of the intellectual prosthesis for hierarchical human structures - corporations and empires - that we use to oppress ourselves and each other. Leaving that oppression to the machines is not only natural but in some sense desirable: the further we can divest our inner thoughtlife of the baby-grinder mentality accompanying capitalist structures, the better. Capitalism wasn't going to stop developing new methods to automate cruelty, and while LLMs are a link in that chain they're nothing like what we've seen before or what lies ahead of us. The best and worst of human history are both yet to come.
posted by Ryvar at 6:56 AM on August 4 [4 favorites]


I had an English teacher in high school who taught us that before you put pen to paper you have to understand who is your audience, who are you talking to, how will they hear it, how will they understand it? The recipient of the text is just as important or maybe more so than the creator of the text. It’s about communication - there is a sender and a receiver. Yes, you want to express yourself, but without thought of the other person, that expression may be full of problems. They may hear the opposite of what you're trying to say. Machine assisted self expression with the self being replaced by the machine is an oxymoron. Generating the next statistically likely word in a sentence expresses no knowledge at all of the person expected to read that generated text. Help people to think, help them to be able to put their thoughts into words, help them to understand how other people will read and understand those words. Please don’t turn human expression into just greeting cards, with pre written text that “expresses” your feelings and thoughts for others.
posted by njohnson23 at 7:07 AM on August 4 [9 favorites]


the further we can divest our inner thoughtlife of the baby-grinder mentality accompanying capitalist structures, the better

Agreed, of course. The issue I have is that the more complex and all-encompassing become the tech stacks that implement that baby-grinder mentality, the harder it becomes for most people to imagine any way of life not involving those "conveniences" as acceptable. See also: all the pushback against renewable energy on the basis that its advocates expect us to live in caves and eat leaves and twigs.

Technology amplifies personal power for those with access to it, but does squat to amplify personal responsibility.
posted by flabdablet at 7:19 AM on August 4 [10 favorites]


My first thought was "anyone who would offload something as important as a wedding toast to a bot must be so brainless that it wouldn't come up with anything worse than they would on their own," but then I figured that was profoundly uncharitable. As a former English major and someone who writes easily, I take not just the skill for granted, but the expectation that I can just sit down and jot something off that's pretty good the first time, and if it's worth taking the time to edit, really quite good when I'm done. But creative expression can be intimidating when it's not fostered or valued, and for FAR too many people, that's been their history. This is not the answer, but I can understand it would be a tempting option for someone who finds writing down their thoughts and feelings in an organized way excruciating.
posted by rikschell at 7:21 AM on August 4 [8 favorites]


Ars:
All of this largely tracks with our own take on the ad, which Ars Technica's Kyle Orland called a "grim" vision of the future. "I want AI-powered tools to automate the most boring, mundane tasks in my life, giving me more time to spend on creative, life-affirming moments with my family," he wrote. "Google's ad seems to imply that these life-affirming moments are also something to be avoided—or at least made pleasingly more efficient—through the use of AI."
When all you have is solutions, everything looks like a problem.
posted by flabdablet at 7:29 AM on August 4 [19 favorites]


>encouraged to have an LLM generate the letter for her

>What will these buffoons come up with next?

Sydney having Gemini auto-answer all her fanmail of course.

I threw this test image at ChatGPT 4o yesterday since this is the kind of application I'm interested in developing with LLMs, something that gives me the ability to function in a foreign country.

4o did a decent job understanding the context and information on the sign, but couldn't parse the timetables yet. I expect 5 will be able to handle that, we'll see . .
posted by torokunai at 7:36 AM on August 4


It has been depressing to see all the grant funding for citizen science evaporate into AI.

LLMs were helping organize human effort in the field, but now the managers have rapidly forgotten the value of that.

The AI does not have legs and eyes, and it can't vote. I can't decide if the rich granting bodies don't understand this, or understand it too well.

That uncertainty is an argument for getting rid of philanthropy.

What is the point of educating a machine that can't participate in culture, society or civic life?
posted by eustatic at 7:37 AM on August 4 [3 favorites]


>What is the point of educating a machine that can't participate in culture, society or civic life

This may or may not have been a plot point of Neuromancer, but I totally see a machine incorporating itself and being able to fund its own operations later this century.

be careful what you wish for, I guess . . .
posted by torokunai at 7:41 AM on August 4 [1 favorite]


I totally see a machine incorporating itself and being able to fund its own operations later this century

You're a little behind the pace; they've been doing that for four hundred years.
posted by flabdablet at 7:47 AM on August 4 [6 favorites]


What is the point of educating a machine that can't participate in culture, society or civic life

Restate this as “what is the point of educating a whole bunch of linear algebra that can’t participate in culture” and see how much sense that makes.
posted by MisantropicPainforest at 7:48 AM on August 4 [4 favorites]


There’s an interesting spin on the old robots-became-sentient-and-took-over story there - a future where AIs don’t become sentient, but do become legal persons via a corporation, and accrue massive amounts of power not via oppression or coercion, but by quite legally working the levers of capitalism.
posted by Jon Mitchell at 7:50 AM on August 4 [2 favorites]


And yes, whether that is even any different in any qualitative way from our current setup is a good point flabdablet!
posted by Jon Mitchell at 7:52 AM on August 4


I miss the old bard guy reciting the Iliad straight through at every gathering. Yes I'm much older than you think.
posted by sammyo at 7:57 AM on August 4 [4 favorites]


There's this weird vacuum at the heart of platform/internet capitalism - like, the goal is to automate everything: you don't need to learn to write, draw, read, choose things, cook, even do work, because the dream is to sit there in a stupor while bots make money for you, a service delivers your food, your simulacrum messages your parents' simulacrum with simulated messages of affection, etc, and you never have to do anything you don't "want" to do - not go to the store, not make a phone call, not interact with another human who might not anticipate your needs. A zero-friction life, just a human tuber sitting there in a vat until it dies. And the question becomes "if humans are just pretexts for moving money around, do we really need the humans"?

We really underestimate the importance of friction - with a zero friction life, you just slide right off of subjectivity. You hate making phone calls, but sometimes a phone call can be really revealing or interesting or useful; you hate chatting with clerks, but weirdly you feel better after you've gotten out of the house. You hate learning to cook, but it still seems like it is fun to be able to cook good food. All that stuff, the learning, the thinking, the trying, the individual and unpredictable events, are what make us into people.

Getting AI to write something for you isn't just some contemporary equivalent of being able to keep a diary, any more than ordering take-out is just some contemporary equivalent of learning to cook, or watching sports on TV is just some contemporary equivalent of going out and playing in the park.
posted by Frowner at 8:01 AM on August 4 [33 favorites]


...the harder it becomes for most people to imagine any way of life not involving those "conveniences" as acceptable.

Yes, this is a big concern with LLM deployment for me because I see it aligning with adrienne maree brown's perspective:
We are living now inside the imagination of people who thought economic disparity and environmental destruction were acceptable costs for their power. It is our right and responsibility to write ourselves into the future.
Could LLMs be used to facilitate such alternative imaginings? Maybe, I guess. Some of my (better-funded and securely employed) colleagues in college writing education are still enthused about it even as the technology's owners advertise its labor-saving merits in ways that are used to justify staff reductions.

But current, dominant LLM deployment is aligned with furthering wealth and power imbalances, not liberatory work.
posted by audi alteram partem at 8:05 AM on August 4 [3 favorites]


We really underestimate the importance of friction

We really do. Maybe we need an AI to help us figure it out.
posted by flabdablet at 8:07 AM on August 4


There is a sense in which my inability to write in corporatese has ruined my life. (I don't actually think my life is ruined, but cover letters and business correspondence that requires a blandly professional tone have always given me severe panic attacks). The one thing about LLMs that I see as all upside* is that they seem designed specifically to churn out the blandest, most inoffensively professional communication possible, and all my future business correspondence that needs to be blandly professional will likely start with an AI first draft. The draft will then be edited to remove lies and maybe add the human touch.

I have a buddy who has a bit of dyslexia, and the associated lasting consequences for his written communication, and he has been using chat gpt to help him write work emails for over a year now, and this has led to tangible career improvements for him.

I don't think I would use it for personal communication or the kind of business communication where it's important that it sound like it comes from a specific real human, though.

*ChatGPT's ability to write great boring business correspondence may have led to an increase in boring business correspondence, which I am sure I've seen complaints about recently.
posted by surlyben at 8:09 AM on August 4 [1 favorite]


I don't think I would use it for personal communication or the kind of business communication where it's important that it sound like it comes from a specific real human, though.

Me either. Sometimes only genuine copperplate will do.
posted by flabdablet at 8:13 AM on August 4 [3 favorites]


you don't need to learn to write, draw, read, choose things, cook, even do work

soundtrack for the thread
posted by flabdablet at 8:16 AM on August 4


L.M. Sacasas is a regular read for me. If you're unfamiliar, he's heavily influenced by Ivan Illich and Hannah Arendt.

The Gemini ad that gets on my nerves is the one that uses Jay-Z's "Public Service Announcement" as its music. Ironic, in an annoying way, that a rapper who famously writes all of his rhymes by memory (he doesn't write anything down) is being used to advertise a technology that relies on the whole human history of the written word to even function, and which functions largely as a prosthetic memory and surrogate writer.
posted by A Most Curious Rabbit at 8:35 AM on August 4 [4 favorites]


> I miss the old bard guy reciting the Iliad straight through at every gathering.

I'm pretty sure that it was only multi-day special occasions that got the full recital. It takes a while, and people get bored.

Lots of people (I assume they are people) who still haven't internalized that these systems are not doing any knowing or understanding. I might have to eat my words but I'm advising the person upthread who is waiting for the next one to be able to understand train schedules to not hold their breath.

The best-case scenario is that the bubble bursts before too much damage gets done.
posted by Aardvark Cheeselog at 8:41 AM on August 4 [3 favorites]


what is the point of educating a whole bunch of linear algebra that can’t participate in culture” and see how much sense that makes.
posted by MisantropicPainforest at 7:48


So, we are in agreement that taking education funding away from educating humans and giving it to educating linear algebra does not make sense?

Because I agree that it does not make sense, which is why I am sad that this funding is going away.
posted by eustatic at 8:54 AM on August 4 [3 favorites]


I want AI-powered tools to automate the most boring, mundane tasks in my life

On the one hand, I understand this want. Most of the tasks I find boring are not performable by an LLM, I'm afraid. Lifting weights? Very boring, to me anyhow. But it is a task I cannot employ anyone or anything else to do for me. And I think a rather large number of boring tasks are like lifting weights, in that the reasons they are not performable by an LLM are also the very reasons to recommend doing them, in that they result in some kind of personal growth, or they bring the sort of satisfaction that only comes from completing such a task, or they become the occasion for and site of a rewarding human relationship.

This also makes me think of a character from Douglas Adams's other series - the Dirk Gently books - namely, the Electric Monk:

"The Electric Monk was a labour-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe."

The funniest part of this is the bit about VCRs "watching tedious television" for you, because it seems like it misses the point, but then again, does it? It makes me think of how most of the people I know take tens of thousands of photos every year and almost never look at any of them ever again. And now, there is software to prompt us to look at them, ("You have a new memory!") which software has performed facial recognition and geolocation - it has viewed our tedious photos for us.
And of course looking at a photo is the opposite of remembering.

I stopped using a dishwasher a couple decades ago when I began living in spaces where I need those cubic feet for something else, and I have, over time, come to enjoy washing the dishes. It is a peaceful ritual at the end of a day; it is, to quote Brian Eno, "a way of listening to music," and it is also a way of knowing and understanding my spouse, because I get a perspective on their day based on what dishes they've left in the sink, or have washed, put away, etc. (Obviously we also have conversations, but they are rarely about the yogurt that was eaten, or the snack that was made from peanut butter and bananas, and because I find my spouse endlessly interesting, I find those facts interesting). Washing dishes counts as a "mundane task" but I do not want a robot to take over, because the mundanity of the task both produces and is the task's reward.
posted by A Most Curious Rabbit at 9:07 AM on August 4 [14 favorites]


Before enlightenment: chop wood, carry water.
After enlightenment: chop wood, carry water.
posted by flabdablet at 9:19 AM on August 4 [8 favorites]


It has been depressing to see all the grant funding for citizen science evaporate into AI.

So, this is pretty adjacent to things that I work on, using machine learning for conservation monitoring. In my experience, domain experts are absolutely essential, and we do our best work when we're helping experts extend their knowledge to data that is orders of magnitude larger than what they could normally observe. It used to be that human point counts were everything: stand an expert in a spot for three minutes and have them count what they see and hear. But you miss a hell of a lot when only doing these short counts. With acoustics, we get hundreds or thousands or even millions of hours of audio to work with, and can use ML techniques both to aid counting and/or filter down to likely candidates for manual verification by the domain experts.
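The filter-down-to-candidates step described above can be sketched as simple score-thresholding triage (the `classify` function and the cutoff values are hypothetical; a real pipeline would use a trained acoustic classifier and tuned thresholds):

```python
def triage_clips(clips, classify, auto_accept=0.9, review_floor=0.3):
    """Split clips into auto-accepted detections and a queue for
    expert verification, based on a classifier confidence score.
    Clips scoring below review_floor are discarded without
    spending any expert time on them."""
    accepted, to_review = [], []
    for clip in clips:
        score = classify(clip)
        if score >= auto_accept:
            accepted.append(clip)    # confident detection
        elif score >= review_floor:
            to_review.append(clip)   # likely candidate for the expert
    return accepted, to_review
```

The point is the division of labor: the model makes the cheap pass over thousands or millions of hours of audio, and the domain expert's scarce attention goes only to the ambiguous middle band.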

How does this relate to citizen science? Well, it turns out that citizen science data is pretty inefficient. The quality tends to be low, and by the time you've done the work of making the necessary training modules, putting the data where the citizens can get to it, and building the systems that can take the user data scalably, you proooobably could have just built a damned classifier for what you want to do, using less of your domain expert's time for annotation.

A similar thing happened for gamification in science a decade ago.... by the time you design a game and launch it to get the user data, you proooobably could have just solved the problem. (Also, it is overwhelmingly probable that your game sucks and no one will play it.)

So for the most part, citizen science ends up being useful for education and awareness, but less useful for actually producing scientifically useful data. Domain experts are still great and important, but random people on the internet are even less reliable than LLMs in most cases... (though, to be clear, we don't use LLMs for the kind of work I'm doing.)
posted by kaibutsu at 11:23 AM on August 4 [5 favorites]


I'm going to take the controversial stance that writing was an unambiguously good thing.

*Post Comment*
posted by AlSweigart at 1:24 PM on August 4 [3 favorites]


Whatever, Mr. Philosopher. I once thought like you but most people don't care enough or are unable to realistically contribute to any human discourse - personal or otherwise. They're fucking morons and their plight should be as ignored as their dignity dissolute.
posted by DeepSeaHaggis at 1:25 PM on August 4


Before enlightenment: chop wood, carry water.
After enlightenment: I'm sorry, as a Large Language Model, I cannot do that. Do you want me to help you write an email for a plumber quote?
posted by signal at 1:47 PM on August 4 [3 favorites]


I am worried about people voluntarily ceasing to express their genuine selves to others, but I am more worried about people thinking they're listening to the words of others in a "more efficient" way when they are actually feeding themselves nuance-free, or even content-free, text, because When ChatGPT summarises, it actually does nothing of the kind:
ChatGPT doesn’t summarise. When you ask ChatGPT to summarise this text, it instead shortens the text. And there is a fundamental difference between the two. To summarise, you need to understand what the paper is saying. To shorten text, not so much. To truly summarise, you need to be able to detect that from 40 sentences, 35 are leading up to the 36th, 4 follow it with some additional remarks, but it is that 36th that is essential for the summary and that without that 36th, the content is lost.
The linked blog post describes an example of ChatGPT "summarising" a governance policy recommendation paper; the key recommendations appeared nowhere in the "summary" and indeed the summary partially contradicted them, because the summary drew on the bland average of "conventional wisdom" that ChatGPT already had embedded in its parameters from its training data. This is a great way for the work of experts to be disappeared even when those receiving their work think that they are reading [a summary of] its contents. This is a recipe for disaster.

[Via an answer by straw to a recent AskMe question.]
posted by heatherlogan at 2:03 PM on August 4 [17 favorites]


the other day I heard about some Google commercial depicting a child wanting to write a letter to her favorite athlete, and being encouraged to have an LLM generate the letter for her, which… surely I'm missing something, right? Because that sounds like a profoundly ghoulish premise for a TV commercial, right?

Google pulls its terrible pro-AI “Dear Sydney” ad after backlash.


I saw that very ad at a movie theater last night, so if they've pulled it, they haven't pulled it everywhere. That was along with two other AI commercials for Meta and Samsung, and a Nike commercial which used the refrain "Am I A Bad Person?" while listing off all of the narcissistic and antisocial traits of competitive athletes.
posted by grumpybear69 at 2:05 PM on August 4 [1 favorite]


But current, dominant LLM deployment is aligned with furthering wealth

LOL, in the financial news - Wall Street is starting to see a bubble, with basically no returns on multi-billion-dollar investment. Then "can the models beat Wall Street?" - if someone finds a great stock predictor, it'll work for a bit, some company will get a lot of money, but everyone else will figure out the algorithm and the markets will return to some kind of equilibrium.

(and for the protoludites, homework for tonight, memorize and recite the first book of the Iliad :)
posted by sammyo at 4:23 PM on August 4


I sometimes wonder if the belief in/hope for telepathy is a remnant cultural artefact from when spoken language evolved. Inner thoughts were revealed, emotions shared, information displayed using this new skill - it was like reading a mind.

As the article discusses, the articulation of speech, and by extension, writing is also an articulation of self. Maybe it's just me, but sometimes when I give a detailed reply to a question, the answer is the first time that the thought has been completely formed. Dialog as an exchange of ideas, is one of my greatest pleasures. And if that is not available, writing as a source of ideas, is another pleasure.

So when it comes to LLMs - there are no ideas - there are just patterns. The textual equivalent of wallpaper.
posted by Barbara Spitzer at 4:44 PM on August 4 [7 favorites]


Well, it turns out that citizen science data is pretty inefficient

That's all well and good, but people live in community, and a primary social adhesive is people engaging in their neighbourhood and nearby places. Citizen data gathering is a great way to do this - it also teaches place, in the same way that the perambulating 'beating the bounds' does.

By displacing imperfect people in favour of (purportedly) perfect data, we are performing a hateful act, as it is love (and often just simple getting along) that binds people together in a place. We should reinforce and learn more ways of working with people as people - and bootstrapping from imperfect data; anecdote circles, thousand minds, perambulation...

Who wants efficiently to replace love?
posted by unearthed at 5:00 PM on August 4 [2 favorites]




(and for the protoludites, homework for tonight, memorize and recite the first book of the Iliad :)
posted by sammyo at 4:23 PM on August 4


I was explaining to a friend how capital had so thoroughly demolished the anti-automation-value-capture messaging of Luddites as I read this comment, but now I'm hearing the drumbeat of Arma virumque canō thumping out in my memory in dactylic hexameter.

The internet is weird. Virgil hasn't written anything for a long time now, but I don't think I'm the first consciousness to have this interior experience. The cell phone thing is new though.

I was never any good at Latin, but I loved my Latin teacher, who put his entire self into sharing his love of Latin with me. I remember him telling the class that if we were to remember one thing about Latin as adults, let it be the opening of the Aeneid. As an adult, I do. My teacher was right, and I was lucky. It's worth remembering.
posted by 1024 at 6:53 PM on August 4 [3 favorites]


I sometimes wonder if the belief in/hope for telepathy is a remnant cultural artefact from when spoken language evolved. Inner thoughts were revealed, emotions shared, information displayed using this new skill - it was like reading a mind.

Of course it's impossible to be sure, but the more thought I've given to that question over the years, the more likely it seems to me that that kind of mind-reading probably exists and existed in a lot of creatures that possess nothing like human language, our own distant ancestors included.

It's very easy, for example, for me to make up highly plausible Just So stories about what the cats I work for are thinking based on how the pair of them are behaving and interacting at any given moment, and given that their minds would pretty much have to resemble each other more than either resembles mine, I would expect both of them to be able to model each other more accurately than I model either.

So, not so much a remnant cultural artefact as an ongoing condition inherent to conscious awareness that might even be one of the primary drivers for cultures, be they verbal or otherwise.

As an aside, it seems to me that our own capacity for empathy already implements telepathy quite well enough to be going on with, striking a nice balance with privacy that I do not want to see disturbed by something like Neuralink becoming ubiquitous. I very much enjoy sharing as much mindstate with ms flabdablet as both of us possibly can, but I wish to keep very strict controls on how much access I grant to Musk, Bezos et al.

As the article discusses, the articulation of speech, and by extension, writing is also an articulation of self. Maybe it's just me, but sometimes when I give a detailed reply to a question, the answer is the first time that the thought has been completely formed.

Sacasas makes this point as well, and I agree strongly with it. I've long believed that the best way to understand anything is to explain it to somebody else, and dialogue is at least as big a part of that as articulation. The only use I would have for a LLM as part of that process would be for doing it with people whose languages I don't speak. With people I do share a language with, interposing LLMs is going to cause at least as much misunderstanding as it avoids.
posted by flabdablet at 9:37 PM on August 4


heatherlogan: I am worried about people voluntarily ceasing to express their genuine selves to others, but I am more worried about people thinking they're listening to the words of others in a "more efficient" way when they are actually feeding themselves nuance-free, or even content-free, text

The approach labeled Retrieval Augmented Generation (RAG) lets you add reference documents to the prompt's tokens so that they carry higher significance than the training data and are used to generate the response. I don't know if it resolves the truncation-not-summary issue, though.
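The RAG pattern boils down to: score your reference documents against the query, then put the best matches into the prompt ahead of the question. A minimal sketch of that idea, with a toy bag-of-words overlap standing in for real embedding search (the document texts and prompt wording here are made up for illustration):

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern:
# rank reference documents against a query, then prepend the best
# matches to the prompt so the model grounds its answer in them.
# Real systems use embedding similarity for retrieval; the token
# overlap below is a stand-in just to make the example self-contained.

def overlap_score(query: str, doc: str) -> int:
    """Count shared lowercase word tokens between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_rag_prompt(query: str, docs: list[str], k: int = 2) -> str:
    """Pick the k best-matching docs and place them before the question."""
    ranked = sorted(docs, key=lambda d: overlap_score(query, d), reverse=True)
    context = "\n\n".join(ranked[:k])
    return (
        "Answer using ONLY the reference documents below.\n\n"
        f"--- references ---\n{context}\n--- end references ---\n\n"
        f"Question: {query}"
    )

# Hypothetical reference documents for illustration.
docs = [
    "The policy paper recommends mandatory audits for model deployments.",
    "Styrofoam is a brand name for extruded polystyrene foam.",
    "The audit recommendation applies to deployments above a size threshold.",
]
prompt = build_rag_prompt("What does the paper recommend about audits?", docs)
print(prompt)
```

The resulting string would then be sent to whatever chat model you're using; whether the model actually privileges the retrieved text over its training-data priors is exactly the open question raised above.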
posted by k3ninho at 10:26 PM on August 4


Remember: If a thing isn’t worth writing it isn’t worth reading.
posted by Artw at 11:48 PM on August 4 [1 favorite]


> Generating the next statistically likely word in a sentence expresses no knowledge at all of the person expected to read that generated text.

@bruces' delvish: "It turns out Language Models don't possess minds..."
Also, this kind of gerontocratic sentimentalizing that I just did in the previous paragraph — that personal yarn with the anecdotes, and the name-dropping of friends, and being all sentimental about the long-gone ideals of one’s vanished youth, and so forth — that’s a form of “writing” that AIs do not “generate.” It’s very rare for them ever to describe their lived experience, because they have none. As authors, they don’t have bylines, because they don’t have identities. Human prose and Delvish prose are as different as bamboo and styrofoam. They can take much the same shape, and perform similar functions, but they’re radically different in origin.
also btw, re: delvish 'dialects' > our own capacity for empathy already implements telepathy quite well

mirror neurons[1,2,3,4] :P
posted by kliuless at 12:34 AM on August 5 [1 favorite]

