AI as Self-Erasure
June 24, 2024 8:21 PM

Humanity’s will to disappear is being installed in the omni-operating system. I was at a small dinner a few weeks ago in Grand Rapids, Michigan. Seated next to me was a man who related that his daughter had just gotten married. As the day approached, he had wanted to say some words at the reception, as is fitting for the father of the bride. It can be hard to come up with the right words for such an occasion, and he wanted to make a good showing. He said he gave a few prompts to ChatGPT, facts about her life, and sure enough it came back with a pretty good wedding toast.
posted by misterbee (75 comments total) 21 users marked this as a favorite
 
The fact that I managed to really enjoy this despite Jaron Lanier’s unexpected and extremely unwelcome appearance is an indication of just how great the rest of it was: the point of bothering to show up is that you cared; the point of tools is to reduce labor - the real message behind running with ChatGPT's toast would have been: feigning interest in you is labor. Caring about you is a chore.

I very much want to see LLMs as a technology continue to grow and develop into the next phase of machine learning, but when it comes to their intrusion into personal matters… I think, given the option, I would prefer silence.
posted by Ryvar at 8:57 PM on June 24 [7 favorites]


If you read the article though, the next line is "But in the end, he didn’t use it, and composed his own"

(The article is superb!)
posted by carlodio at 9:01 PM on June 24 [12 favorites]


I took the blog post as moralizing philosophical wankery. Checking his other substack posts I see he was interviewed by Andrew Sullivan, LOL.

I think it's a mistake to think future trained generative content synths are going to merely average-out the rough edges of human thought, though the pointed-out lack of a 'first-person' perspective is well-taken.

I don't think there's any magic going on between our ears, so expect computers to be able to do the Commander Data thing sooner or later.

id + ego + superego, how hard could it be . . .
posted by torokunai at 9:54 PM on June 24 [3 favorites]


So, I think it's worth being aware that the author of this article is, amongst other things, a COVID denialist who writes a whole lot about how young people nowadays are obsessed with being victims and with diagnosing themselves with trendy mental illnesses, about "institutional misandry", and about toying with alternatives to democracy, like "the upsides of a hereditary aristocracy", and who engages with a whole lot of people on what likes to consider itself the intellectual far right, in opposition to things like "woke" and feminism.

So just know what neighborhood you're walking into here, I guess is what I'm saying.
posted by Joakim Ziegler at 10:13 PM on June 24 [77 favorites]


Holy shit that’s a whole lotta yikes. Thanks for the heads up. Shame, because apart from giving Lanier oxygen I really liked the tone of this one.
posted by Ryvar at 10:41 PM on June 24 [3 favorites]


I have something in between a gripe and a question regarding the arc of the meta-conversation in these comments.

misterbee posted the link, there's some discussion about the linked post, and then there are a couple of comments raising some yellow/red flags about the writer. torokunai is turned off right away by the attitude/tone/angle and goes poking around for intellectual friends, and finds Andrew Sullivan. Joakim Ziegler runs down some lowlights, which as a collection paint a picture of a run-of-the-mill reactionary contrarian hack.

(I think it's relevant that my political alignment seems to be with torokunai and Joakim Ziegler, and generally I want to know as soon as possible if writers are in the neighborhood of Quillette/IDW "thought" because I hate having my time wasted by bad faith writing)

Phrased as a gripe: I wish that when people post a warning about writers being reactionary, bad-faith cranks, they would connect the dots between the content under discussion and the reactionary mindset. It's exactly because I have myself been hoodwinked by bad faith contrarians (literally Andrew Sullivan, lol, when I was in college and he was writing almost exclusively about marriage equality and (mostly) pro-Obama politics) that I want to hear more from people who hear the sour notes in the music right away. Phrased as a question to whoever wants to engage with it: is the resistance to generative AI reactionary? Is it just very new, so are there many attitudes against it, including reactionary, liberal, humanistic, Marxist, and others?
posted by mtthwkrl at 11:55 PM on June 24 [5 favorites]


So what's wrong with Jaron Lanier? What memo did I miss?
posted by Termite at 1:08 AM on June 25 [4 favorites]


Yeah, wow I guess I shouldn't have skimmed past the "declining birth rates" stuff that signalled we were in "the book the dude is reading in The Great Gatsby" territory.
posted by johngoren at 2:22 AM on June 25 [1 favorite]


> So what's wrong with Jaron Lanier? What memo did I miss?

i will return with more detail when i have time, but until then suffice it to say that he is an idiot.
posted by bombastic lowercase pronouncements at 3:07 AM on June 25 [8 favorites]


I just now tried searching Metafilter archives to understand why people don't like Jaron Lanier and all I found was stuff from 2018 saying he rocked.
posted by johngoren at 3:08 AM on June 25


It is interesting that the MetaFilter practice of placing the name of the author of posts below their content has strong support from the community, due to the belief that engaging with the content of the post without prejudice is valuable, but the community reliably has the exact opposite reaction to authors posted to the main site.
posted by Aethelwulf at 4:23 AM on June 25 [15 favorites]


I don't think there's any magic going on between our ears
speak for yourself
posted by HearHere at 4:47 AM on June 25 [5 favorites]


Lots of anti-democratic and pro-hierarchy notes in this article. I liked the thoughts about AI at the beginning but the appearance of the word "collectivism" and the repeated use of "democracy" as a pejorative finally soured me on the article. (This has been my experience reading anything from the Hedgehog Review. Something usually brings me up short -- there is a definite reactionary agenda hidden there.)
posted by Vegiemon at 4:49 AM on June 25 [9 favorites]


It is interesting that the MetaFilter practice of placing the name of the author of posts below their content has strong support from the community, due to the belief that engaging with the content of the post without prejudice is valuable, but the community reliably has the exact opposite reaction to authors posted to the main site.

(I figure comments are generally short so you get to the byline quickly, plus if you're a frequent reader and the commenter is a frequent commenter with a particular style or viewpoint it's often easy to pick up on who the comment's by while you're reading it. And there's no financial contribution, however slight, to someone commenting here when you read or link to their comment. And commenters don't get any vague halo of authority from commenting here, as opposed to a link to a separately published source that an OP is recommending that everyone read, and most commenters aren't attempting to build their personal influencer brand or thought leadership or whatever so there's less of an urge to say "wait don't fall for this person's brand/influence/whatever". And there are probably fewer commenters here overall with views that other commenters find deeply problematic to the point where they're wary of being influenced by them. That's my take, anyway.)
posted by trig at 5:01 AM on June 25 [3 favorites]


is the resistance to generative AI reactionary? Is it just very new, so are there many attitudes against it, including reactionary, liberal, humanistic, Marxist, and others?

The enemy of my enemy is not my friend.
posted by CheeseDigestsAll at 5:05 AM on June 25 [9 favorites]


Here's an example of how to use AI as Other-erasure.

(I asked ChatGPT about 45's shark/battery rant. I think this has potential.)
posted by ocschwar at 5:08 AM on June 25 [5 favorites]


I was curious about what that Talbot Brewer said about AI, and found this:
https://www.bu.edu/ipr/2024/02/20/word-by-word-writing-humanity-and-ai/

Haven't had the chance to listen yet.
posted by doctornemo at 5:11 AM on June 25 [3 favorites]


This describes a major vector of my depression. The internet has created a solid record of people who are better and more interesting and talented than you and your friends. Making new friends at work or through mutual acquaintances or whathaveyou means competing with myriad unseen prosocial relationships. Creating anything becomes meaningless when you have no community to appreciate it AND an infinite pool of talented strangers to compare yourself to. Labor of all types is replaced by growing workplace efficiency and mechanization. The struggle to not feel redundant takes up a larger and larger part of each day. When you fail, there's always abnegation through drugs or the new access to more media than you will ever have time to engage with meaningfully.

Then you go to a party or a wedding or a conference to try to connect to 'peers' / people 'in your circle' and find that they're thriving in this environment, which makes you feel even worse and more alone.

as an aside, Bob's Burgers is fairly unique in exploring this problem and its solutions as a main theme of the show. I guess I'll go watch a couple episodes alone, sitting on my bed, while playing tetris in the background.
posted by es_de_bah at 5:36 AM on June 25 [21 favorites]


I'm not sold on the anecdote that opens the article (and leads this post). It sounds like the person writing the speech - for whatever reason - was not confident in his abilities to do so, used ChatGPT to generate a sample and then proceeded to write his own version of it. It doesn't seem terribly different from using any number of speech-writing books and templates that have been around for ages without this level of handwringing.
posted by gee_the_riot at 5:39 AM on June 25 [8 favorites]


Phrased as a question to whoever wants to engage with it: is the resistance to generative AI reactionary?

"Reactionary", like "conservative", is a relative concept. In Cold War days, it was possible to talk about conservative politicians in the USSR... who were attempting to conserve a status quo established by the Russian Revolution. In the UK, one of the ironies of the past decade or two is that the Conservative Party has been anything but, throwing out a 40+-year European status quo in a moment of revolutionary change. ("Revolutionary" isn't an unalloyed good, either, especially when you're living through it.) But part of their political armoury in the Brexit battles was drawing on a deep well of reactionary feeling among older Britons—a feeling not of wanting to conserve what is, but of going back to what was, a long time ago. A rejection of the status quo in favour of the status quo ante.

The reaction to the current AI moment might in part be driven by reactionary instincts among some—those who would reject not just ChatGPT, but who feel that the whole rise of IT and the Internet in the past 40+ years was a wrong turn, and that we should go back to some golden age of typewriters, letter-writing, and reading the newspaper at the breakfast table. I don't think that's most of what we're seeing, though, and it isn't most of what we're seeing here at Mefi among the LLM skeptics. Mefi has plenty of people who have been part of the IT/Internet age of the past 40 years who have misgivings about how LLMs have been created and what they're being used for, and that isn't an inherently reactionary position. You could call it a conservative one, perhaps, but as we're still in the middle of the AI "moment" I think that would be wrong, too. This is more a period of contestation and disagreement about what LLMs are or should be for, about what human activities they can or cannot and should or should not replace. None of that is decided yet.

It's a Luddite moment. People like to use that word as shorthand for reactionaries and conservatives too, but the Luddites were driven by an urge to protect people's livelihoods and resist the capture of the gains from technology by the rich few at the expense of the working many. If you were to map those attitudes on a left/right spectrum, would you put them next to the Tories and the far right, or next to progressives on the left? Maybe they don't map perfectly; maybe that's one reason entirely new phenomena upset traditional political differences.

We've seen similar upsetting of traditional politics by the rise of environmentalism over the past fifty years. In my home country of Australia, environmentalism was more commonly called "conservationism" when I was growing up, which is a clue to its complexities: conservationism has a lot in common with conservatism in that they both reflect an impulse to conserve; they differ in terms of what people want to conserve. Some aspects of environmentalism could be considered deeply reactionary, harking back to a more natural world free of modern technology, pollution and climate change. Some aspects are progressive, holding out hope of a more egalitarian future harnessing new green technologies. Political attitudes are matrices, not straight lines.

Which is why I wouldn't rush to condemn this article purely on the basis of the author's bedfellows and his other views. If there's a useful thread in it, it's worth unpicking from the wider fabric, as Ryvar did in the first comment here. I think a statement like this is insightful, whether or not the author thought mask-wearing was justified in 2020:

If we accept that the challenge of articulating life in the first person, as it unfolds, is central to human beings, then to allow an AI to do this on our behalf suggests self-erasure of the human.

As a complicated human being myself—like any of us—I'm allowed to agree with another complicated human being on some points and disagree on others. During the Brexit years, I was particularly irritated by the Tory MP David Davis, who bluffed his way through negotiations with the EU and threw away the UK's future largely, it seemed, because of views about trade he had formed while working for Tate & Lyle; but in the late 2000s and early 2010s I admired his stance on key issues of civil liberties. If someone suggested that I must be a Brexiter because I agreed with Davis that "foreign tech companies such as Palantir, with their history of supporting mass surveillance, assisting in drone strikes, immigration raids and predictive policing, must not be placed at the heart of our NHS", I'd just laugh.

Let's interrogate the ideas, and the questions being asked. "What would it mean, then, to outsource a wedding toast?" is a really interesting question, and the answers presented here are also interesting, despite references to "declining birth rates in the West" or the "impoverished quality of the language that these LLMs are being trained on"—one that struck me, as talk of "impoverished language" often accompanies a conservative mindset.

And anyway, would we really disagree that a lot of the language that LLMs are being trained on doesn't exactly represent the pinnacle of human expression? How about the stuff that's been scraped from Stormfront? And would we deny that "declining birth rates" is an actual phenomenon, one that we might wonder about or attempt to explain, even if our own explanations may not match this author's, and even if we note that declining birthrates in a lot of countries not in "the West" suggests that there's a lot more going on than his qualification "in the West" implies?

The opening sentence of this article doesn't do it any favours, but even the second half of that opening paragraph is interesting: "when the world feels already occupied, so there is no place for you to grow into and make your own"—doesn't everyone wrestle with that at times, when we're surrounded by billions of other people and billions more who came before us? And in that context, what does it mean to outsource our words and ideas to LLMs? How do we make our mark? How do we make a difference?
posted by rory at 5:54 AM on June 25 [28 favorites]


is the resistance to generative AI reactionary?

I don't think so. I'm definitely at the progressive/lefty end of the spectrum and I will be part of the Butlerian Jihad when it gets organized; I've seen plenty of other lefty types on Mastodon who are also resistant.

A lot of the AI hype is coming from people with right-wing associations who, one suspects, fantasize about using AI to replace a bunch of labor.

All that said, I know there are lots of people who are not confident writers; they don't trust their ability to express themselves. So they're quick to adopt a tool that promises to improve their writing. I have some sympathy for that, even though I wish they wouldn't.
posted by adamrice at 5:54 AM on June 25 [14 favorites]


Is it just very new, so are there many attitudes against it, including reactionary, liberal, humanistic, Marxist, and others?

I think this is closer to the truth. Though I’d probably say that the current iteration of “generative” AI is new enough that it hasn’t been well-understood from any political lens, and people of any philosophy are just responding to its immediate effects.

Sure, that can make for strange points of alignment, if not agreement — right-wing folks are also worried about LLMs, even if their reasons are different from folks on the left.

That doesn’t suddenly mean they’re correct about anything, or that right-wing worry about LLMs means the left has to suddenly embrace LLMs out of some spirit of contrarianism. It just means everyone is still figuring out what the hell is going on.
posted by learning from frequent failure at 6:12 AM on June 25 [3 favorites]


It doesn't seem terribly different from using any number of speech-writing books and templates that have been around for ages without this level of handwringing.

The author quotes Kierkegaard wringing his hands about exactly that. There's less of it now, because speech writing books aren't novel anymore, they've become normalised.



It is interesting that the MetaFilter practice of placing the name of the author of posts below their content has strong support from the community, due to the belief that engaging with the content of the post without prejudice is valuable, but the community reliably has the exact opposite reaction to authors posted to the main site.

We assume mefites are acting in good faith; it's one of the guidelines here. The real world teaches us that this assumption cannot hold as a general rule for people writing on the Internet at large.
posted by Dysk at 6:15 AM on June 25 [5 favorites]


Eh, I don’t know anything about the author, but much of the article seems questionable or hand-wavy. The casual dismissal of evidence-based medicine is a red flag. The subtle insinuation that hierarchies of authority are an important aspect in defending against the loss of the individual in collectivism is another. The suggestion that education in modern school systems doesn’t rely heavily on relations of authority between teacher and students is just silly. (Unless the author is harking back to an era where teachers were empowered to physically beat students to demonstrate that authority.)

Leaving the details behind, the overall suggestion that modern culture is more collectivist and less accommodating towards the individual than in the past just doesn't match either my lived experience of the last 60 years or my study of history.
posted by tdismukes at 6:26 AM on June 25 [10 favorites]


feigning interest in you is labor. Caring about you is a chore.

Imagine my horror when my dinner host admitted they had purchased all of the food at a grocery store. They hadn’t even gone to the trouble to slaughter the livestock.
posted by simra at 6:28 AM on June 25 [2 favorites]


Fathers who care use ChatGPT
posted by Phanx at 6:29 AM on June 25


Fathers who care pay for ChatGPT Pro.
posted by signal at 6:30 AM on June 25 [2 favorites]


Placing inherent value on something made by human hands or written by a human hand is as old as the Arts and Crafts movement. Placing inherent value on writing your own mediocre words rather than using ChatGPT (or a ghost writer) is no different.

feigning interest in you is labor. Caring about you is a chore.


That's the point. It's establishing that you value your relationships enough to sacrifice your time, attention and comfort. That's why you dress up for a funeral. You establish that you care enough to show up dressed appropriately and go through the established motions, and in exchange everyone allows you to be completely sincere or insincere about it.
posted by ocschwar at 6:47 AM on June 25 [5 favorites]


Religions posit dualism, the ghost in the machine.

LLM/GPT is now attacking that head-on (e.g., IME usually passing the casual Turing Test pretty well), hence my Cmdr Data / "id-ego-superego" throwaways above.

Humans are supposed to be special creations . . . right now GPTs beneath the artifice have the personality of a JPEG but this may change.

heh, the FPP was about the speech being 'ghost written' by a machine.
posted by torokunai at 7:10 AM on June 25


I have obvious issues with AI hype coming from people who overstate its capabilities because they don’t really understand what LLMs are doing, but this is a currently-rare case in which someone is jumping on an anti-AI bandwagon in response to someone using an LLM for precisely the sort of thing it is good for.

The ability to compose a good toast is nowhere near universal, and people’s confidence in their ability to do so effectively is even less so. The mean number of times any of us does it in our lives has to be somewhere between zero and one, and it is certainly forgivable for people to experience anxiety when it falls to them. If I can outline my sentiments in whatever disorganized form, have ChatGPT organize it into something coherent when I don’t feel competent to do so myself, and then pass over that output to put it into language that comes naturally to me, then that’s a valuable tool. It’s morally no different from asking a friend who is known to be good at that sort of thing to help, and it would be odd to call that “not caring.” Doing so is getting into weird territory, like arguing that if you really cared about your finances you’d do the math with a pencil and paper.

But particularly on the subject of weddings, it occurs to me that writing your own vows is an option but it’s the alternative. The default is to use the vows provided by the officiant. Have people been arguing that brides and grooms who don’t turn out unique, poetic vows don’t really love one another? Much as we love to talk about “speaking from the heart,” if we are honest the head still has a huge role to play in that sort of thing, and it just doesn’t feature prominently in everybody’s skill set.
posted by gelfin at 7:29 AM on June 25 [10 favorites]


The author might like to peruse this book.
posted by warriorqueen at 8:01 AM on June 25 [2 favorites]


I asked ChatGPT about 45's shark/battery rant.
That is kind of amazing. But I've tried pasting your prompt into Microsoft Copilot and I get a much shallower, less serious, sunnier, more charitable aw-shucks-isn't-he-funny response. With a smiley emoji. As if this specific subject has been ... addressed.
posted by Western Infidels at 8:17 AM on June 25 [4 favorites]


I took a business implications of AI course at MIT and they said it best: robots should do what is dirty, dull, or dangerous.

The assistants I build help people in crisis, conflict, or change. We don't do fun projects. Those are for humans. We do the worst yucky calculations and planning for your horrible (hopefully not horrible, but aren't they all) divorce or child custody issues or conflict with a school or a business dispute.

My AI doesn't have TIME to replace humans.
posted by lextex at 8:25 AM on June 25 [3 favorites]


> doctornemo, thank you.
(intro, 8:00—1:04, q&a)
tldr? 52:00… “dwelling”

is the resistance to generative AI reactionary? Is it just very new, so are there many attitudes against it, including reactionary, liberal, humanistic, Marxist, and others?
seems more, i.e. computer power consumption threatening humanity/life as we know it: e.g. greater than the total annual electricity production for Italy or Australia, How do you cope with heatwaves ...
posted by HearHere at 8:56 AM on June 25 [2 favorites]


>But I've tried pasting your prompt into Microsoft Copilot

4o still generates the good stuff
posted by torokunai at 9:09 AM on June 25 [2 favorites]


I get the FPP issue of the machine taking the humanity from the situation, as if the father was going to perform a piano piece for the guests but just pushes a button on the stage piano and sits there.

My vision of the future GPT is Wikipedia-cubed, the Young Lady's Illustrated Primer in that Stephenson book . . . a 'multi-modal' encyclopedia:

courtesy of ChatGPT, I see that word comes from:
  • "ἐγκύκλιος" (enkýklios), meaning "circular" or "general," and
  • "παιδεία" (paideía), meaning "education" or "rearing of a child."
4o is already getting pretty good at understanding images (but still needs work here and there) . . .

I see the positive utility of GPTs, like the father creating a 2nd language translation of his speech for a mixed-language audience.

I see humanity adapting, like we'll adapt to our vehicles not making loud engine exhaust noises as we travel around town later this century.
posted by torokunai at 9:25 AM on June 25


good, chatgpt can do the function of a dictionary now.
posted by sagc at 9:30 AM on June 25 [2 favorites]


Non-Sarcastic vs. Sarcastic Interpretation:
  • Non-Sarcastic: Genuine appreciation or approval of ChatGPT's dictionary-like capabilities.
  • Sarcastic: Mocking or highlighting a perceived inadequacy, suggesting that this capability is minimal or unimpressive.
posted by torokunai at 9:38 AM on June 25 [1 favorite]


es_de_bah: "This describes a major vector of my depression. The internet has created a solid record of people who are better and more interesting and talented than you and your friends."

Thank you for this. I'm 41 and I've also been diagnosed with different flavors of dysthymia/PDD/chronic depressive disorder/etc. (depending on the year/practitioner/APA flavor-of-the-day) and this tracks very strongly with the three-to-four week periods leading up to an extended vacation in the bad place.

I have found it very helpful to remember that while many, many people have great ideas, very few people actually execute those ideas. In other words, I am good at something that just about everybody is convinced they could "probably figure out" with their current skill-set.
The reality is that the work I do is almost indescribably exhausting and draining despite looking simple from the outside and my career has one of the highest burn-out rates and shortest professional tenures after graduate school.

I remind myself of this often. And often, the things I see others doing appear (to me, anyway) to be the sort of things that I could figure out how to do. Hell, I might even be good at them! And perhaps that is true - but there is a vast chasm of distance between "figuring out how to do something" and actually doing the thing. Apply this liberally to basically "all the things that humans do."

There are, of course, obvious exceptions to this. I am not certain I should perform surgery.

Wrt my own impending erasure and ChatGPT: I am supremely confident that I will remain present in the lives of my community precisely because I make a very serious and conscientious choice to be as weird as fucking possible. I do not aspire to win over the support of the masses. I offend. I could choose to be inoffensive. ChatGPT is inoffensive. It plots a trend-line that accommodates as many human opinions as possible.
I do not want to be inoffensive. I do not want to avoid risk.

I much prefer to be able to successfully change my mind, sincerely apologize when needed, and adjust course appropriately. If you invite me to give a toast at your wedding (and, in point of fact, I have presided at over 150 wedding ceremonies) you will not get a toast that "everyone is going to love." It will not resemble something that ChatGPT will produce. If you invite the cool kids to your wedding, they will probably describe my toast as "cringe" or "insufferable" or "over the top" or "unnecessary" (my own mother's favorite description for my public actions.)
And while it may not be life-changing or revolutionary or even particularly "memorable" - it will be unique and inimitable.

Incidentally, this is why I retain hope for my own professional future. Because I remain capable of making things very weird - and that is something that some of my most successful professional peers studiously avoid. I think AI will probably eat their lunch.

also, the worst thing about reactionaries is that they're frequently boring. they're often deeply unimaginative people. Trump, for example, is a profoundly uncreative person. I don't think he fits into the right/left binary that is daily gavaged into the American brain - but he is a practiced reactionary.
and any creative child is intimately familiar with his method of observation and analysis. Namely: "that's weird. weird is dumb and probably scary. you're dumb." Societies reward this style of analysis. ChatGPT is what this analysis looks like in software.

I thought the essay was a solid B+ and I'm unclear why we're angry. The author does "miss the trees for the forest" here and there - medical diagnostics and wedding speeches aren't exactly apples and apples, and he dramatically overstates the business about "liability phobias" (his too-clever-by-half attempt at decrying some 'cancel culture' boogieman, I presume. You can be an outlier without being a dickhead.)
posted by Baby_Balrog at 9:40 AM on June 25 [8 favorites]


I seem to have hit on something here
posted by torokunai at 9:43 AM on June 25 [1 favorite]


MetaFilter: magic going on between our ears
posted by adekllny at 10:23 AM on June 25 [2 favorites]


torokunai, what's it say? i'm not clicking a chatgpt link.
posted by HearHere at 10:24 AM on June 25 [1 favorite]


Accelerating the death of the biosphere for idiotic chatbots. Fuck the edge-lords' fancy-ass dictionary - which apparently needs to be integrated into every single human endeavor and to screw as many people as possible:

https://futurism.com/the-byte/ai-polluting-coal-plants-alive
posted by WatTylerJr at 10:37 AM on June 25 [1 favorite]


torokunai: 4o still generates the good stuff
I couldn't get the ChatGPT site to say anything but rosy aphorisms about the "MIT Shark Battery" speech, either. But both CoPilot and ChatGPT would tell me that the giver of the "My Uncle Was The Nuclear" talk has problems.

This is a derail and I'm going to drop it now, I swear.
posted by Western Infidels at 11:01 AM on June 25


Metafilter: the personality of a JPEG
posted by credulous at 11:10 AM on June 25 [1 favorite]


Well, if asking a computer program to write your wedding speech is "morally no different from asking a friend who is known to be good at that sort of thing to help, and it would be odd to call that 'not caring'" (gelfin) - why go yourself at all? You could stay at home and send your talented friend/robot to the wedding instead.
posted by Termite at 11:54 AM on June 25 [1 favorite]


Okay but seriously, what is wrong with Jaron Lanier? PLZ EDUCATE
posted by kensington314 at 11:55 AM on June 25 [2 favorites]


Short version.

Dude coined the term “virtual reality” and surfed that single act into a lifetime of being NYT writers’ go-to for nonsense futurist proclamations. Like if someone handed Yudkowsky a bottle of Xanax and better PR. Either of those two is a yellow flag that the piece you’re reading is a constructed narrative - something the author began writing initially based on their own knee-jerk reactions and then backfilled with supporting quotes from “public intellectuals.” Writers hit up people like Lanier when the actual experts either said things that didn’t support the narrative or were insufficiently headline-worthy, or they just couldn’t be bothered to track down actual experts.
[/LanierDerail], hopefully.
posted by Ryvar at 12:30 PM on June 25 [1 favorite]


Okay but seriously, what is wrong with Jaron Lanier? PLZ EDUCATE


Jaron Lanier is a Silicon Valley socialite and I can't blame people for thinking he is tainted with the same reek as many names we could name from that milieu, but that's unfortunate because so far as I can see he's a good guy.
posted by ocschwar at 12:30 PM on June 25 [1 favorite]


The internet has created a solid record of people who are better and more interesting and talented than you and your friends. Making new friends at work or through mutual acquaintances or whathaveyou means competing with myriad unseen prosocial relationships. Creating anything becomes meaningless when you have no community to appreciate AND an infinite pool of talented strangers to compare yourself to.
Hasn't that always been the case? There are always people who are better than you and it's always been very unlikely for anyone to become famous with their art. More people than ever are making art and I would say it's easier than in the past to find some kind of audience.
As another person with chronic depression, making art is fulfilling to me and often fun. That was true even during the many years no-one looked at it. That’s not meaningless to me.
I think people are quite justified in their fear that they might lose their jobs if they are doing creative work in a corporate environment, but I think a fear that humans will stop making art is kind of silly.
posted by the_dreamwriter at 12:32 PM on June 25 [2 favorites]




So the reason I put the thread at risk of a derail was to make a simple point: any kind gesture you commit towards another human being will be appreciated for the time you put into it, regardless of how mediocre you might be as a wordsmith or artisan.

In the reverse direction, using AI to accentuate a snarky or critical gesture is something I'll probably be doing until ChatGPT's power is disconnected for lack of payment. It's just too good as a way of expressing utter contempt.
posted by ocschwar at 12:34 PM on June 25 [1 favorite]


why go yourself at all? You could stay at home and send your talented friend/robot to the wedding instead
You’d go yourself because you care. I’m not the one suggesting that people don’t. That is, by the way, the precise reason a person might seek help expressing themselves in a way that honors the people they care about when they don’t feel quite confident doing so all by themselves. Hell, plenty of wedding speeches consist primarily of reading an apt poem. Is your argument that people who do that should just stay home and hire the poet?

It’s not like people haven’t used assorted crutches to prepare speeches all along, or like anybody who has is a lazy, unfeeling bastard intentionally disrespecting the audience. I could just as easily say it’s lazy and inconsiderate to settle for whatever half-baked thoughts tumble out of your head when there are numerous resources available to help you refine them with just a little additional effort, but I won’t say that because this seems like an excellent opportunity to give people the benefit of the doubt for their intentions and for doing the best they can, whatever they take that to mean.
posted by gelfin at 2:27 PM on June 25 [6 favorites]


I am still chewing on this, but I will note that perusing the author's substack, he is a serious conservative who for example thinks (in my tendentious tl;dr here) white college students are feigning mental disorders so they can have a victim identity, nationalism is good actually, rule by aristocrats might be better than meritocrats because aristocrats supposedly have an obligation to the common good, elites pander to minorities at the expense of their majority populations... dude makes some good points here but I would calibrate accordingly.

One counter indeed might be that labour saving can be good, even if it is not in the examples he gives. But also, now that I understand the rest of his oeuvre, when he quotes de Tocqueville on the "tutelary genius" that saves us the trouble of living, I suspect the author does see the assistance provided by the state to citizens as equally suspect. And it is, but whether you see having things done for you as harming your freedom and agency from an anarchist or a Burkean conservative perspective matters.
posted by i_am_joe's_spleen at 3:22 PM on June 25 [1 favorite]


I wouldn’t condemn anyone for using chatbots to write a wedding speech. I wouldn’t mind if the cake was from the supermarket, the band was a playlist and the whole wedding party was wearing sweatpants. If that’s what they want and everybody had a great time, good for them.

But I do think the difference between a well-cooked meal and frozen pizza is a distinction worth making.

Care, cultivation, a sense of occasion, a sense of quality… these are good things, and I’d rather expand people’s access to those good things than substitute them with something ersatz. (And I really, really don’t want to lose the ability to tell the difference.)
posted by ducky l'orange at 3:29 PM on June 25 [1 favorite]


I’m a bit surprised at all the hate on Lanier. He’s a bit of a Luddite and plays the ironic role of biting the hand that feeds, but has also played an important role in sounding alarms over the (privacy and social fabric) threats posed by social media. Maybe the issue is he has argued these points for so long with no real net positive impact?
posted by simra at 3:38 PM on June 25 [2 favorites]


Okay but seriously, what is wrong with Jaron Lanier? PLZ EDUCATE

He's a white dude with dreadlocks. That should be enough.

If that's not enough, he's now a white dude in his 60s with dreadlocks.
posted by Joakim Ziegler at 4:19 PM on June 25 [2 favorites]


Hey, fellow denizens of Mefi AI threads:

11 steps to keep Meta from stealing your data to train AI: You only have until June 26, 2024 to say no to Meta taking your personal photos and words and using them to train their generative AI. Here are step-by-step directions for opting out.

That's today. It takes five minutes, but they've put the link in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'. These step-by-step instructions will take you there.
posted by rory at 12:24 AM on June 26 [4 favorites]


He's a white dude with dreadlocks. That should be enough.

So is it just that he gives off Adam Duritz energy?
posted by johngoren at 3:34 AM on June 26


I still don't get what's wrong with Lanier. Since he published his One Half of a Manifesto in 2000, he has been a constant critic of all the hype, bullshit and ideology in the computer technology business - from "the singularity" to artificial intelligence.

Had enough of "the mind is just like a computer, so we'll upload our brains to machines and live forever" bullshit? Lanier is the guy you want to read. He will make light bulbs go off in your head. His criticism is even more relevant since he criticizes the technology business from within. He is old enough to remember that computers were tools that were expected to make our lives more creative and liberating - instead we ended up with click farms in Kenya and China, where underpaid workers help "self-driving" cars to avoid hitting people. Lanier has remained a rare, independent thinker - here is his piece about artificial intelligence for The New Yorker (archive link): There is no AI.

But sure, he won't win any beauty contests, I'll give you that.
posted by Termite at 4:55 AM on June 26 [3 favorites]


He is old enough to remember that computers were tools that were expected to make our lives more creative and liberating -

Damn, as a Gen X'er this makes me feel sad thinking of kids growing up never knowing that they ever were.
posted by johngoren at 6:38 AM on June 26 [2 favorites]


hello MetaFilter

dunking on a person's looks instead of addressing their ideas/actions is usually shitty no matter where you do it
posted by elkevelvet at 6:40 AM on June 26 [1 favorite]


being a white dude with dreadlocks isn't a problem of appearance.
posted by sagc at 6:54 AM on June 26 [7 favorites]


do we seriously need a MetaTalk about this derail?
posted by elkevelvet at 7:10 AM on June 26


I feel like the Jaron discussion has been more valuable than driving engagement to this weird anti-democracy eugenics dude.
posted by johngoren at 7:39 AM on June 26 [3 favorites]


agreed

not so much Jaron's hair, age, skin colour, and the degree to which his appearance meets a norm re: aesthetically pleasing
posted by elkevelvet at 7:47 AM on June 26


To be more serious about Lanier, I actually think his choices about how he chooses to present himself are part and parcel with what I don't like much about him: It's all about cultivating an appearance that's idiosyncratic, rebellious, etc., while I can't find much actual content in his writings. He did actual, technical work at some point, but then seems to have mostly stopped, and now just kind of floats around writing mildly obvious, mildly provocative things about online culture and tech, without actually saying much that's new or innovative, but just getting by on his reputation as a radical innovator, which again is based on doing some very early work on VR which basically never got any traction or led to anything useful. So at least in my mind, he's kind of a bullshitter.
posted by Joakim Ziegler at 12:15 PM on June 26 [3 favorites]


He did actual, technical work at some point, but then seems to have mostly stopped, and now just kind of floats around

That makes it sound like he's just going around spouting opinions for money to keep a roof over his head. He has had other jobs since the days of VPL, including stints at universities and spending the last 15 years with Microsoft Research. He's written four books. If he's "float[ing] around writing mildly obvious, mildly provocative things about online culture and tech", how is that any different to what 99% of us do on Mefi tech threads? At least a book lasts a bit longer than a thread...

I dunno, his Wikipedia page suggests to me that he was saying things that needed to be said when everyone else was caught up in the hype:

In his online essay "Digital Maoism: The Hazards of the New Online Collectivism", in Edge magazine in May 2006, Lanier criticized the sometimes-claimed omniscience of collective wisdom ... describing it as "digital Maoism". He writes "If we start to believe that the Internet itself is an entity that has something to say, we're devaluing those people [creating the content] and making ourselves into idiots."
posted by rory at 12:40 PM on June 26 [2 favorites]


One telling aside in Lanier's "There is No A.I." article:

For years, I worked on the E.U.’s privacy policies, and I came to realize that we don’t know what privacy is.

So he was involved in shaping the policies of one of the world's main economic and political blocs in the realm of (presumably online) privacy. That isn't "floating around", that's important work.
posted by rory at 12:48 PM on June 26 [1 favorite]


Microsoft Research is the textbook example of “floating around”
posted by torokunai at 2:09 PM on June 26 [1 favorite]


I’d be super offended if I found out someone outsourced a wedding speech. A wedding speech is a textbook example of “I can’t do this for you. You must reach deep inside yourself and think about what this person means to you, and grow in the process, even if it’s hard for you.” As often happens, we have a flawed author but have gotten into an interesting discussion, so thanks to the OP.

I am (was) a career technologist and my last conversation over beers with my genx brother started with “AI is bad” and ended with “the web and most related technology are bad”. So I said, what about better connection between scientists, awareness of other cultures, and spaces for marginalized communities? So he said, yeah, but on balance still bad. So I said, yeah probably. We are the last generation to be bored and have to solve that, or to overcome shyness/spectrum and learn to deal with people, or as stated above, to feel good about being a good musician at the local level, etc. I actually think being aware of what everyone in the world is up to and struggling with, and having to have a ready opinion on all that, and having that knowledge acquisition directed by for-profit companies, is bad. Being an influencer is bad. Seeking a life as a YouTube moneymaker is bad. It’s all bad, and AI is just the latest bad thing. To their credit, my gen z kids think AI, like capitalism, is bullshit that old people have made and they didn’t ask for. They and the rest just need a good Luddite pied piper to burn down all this crap. I find myself sympathizing more and more with a certain villain in Station Eleven.
posted by caviar2d2 at 3:48 PM on June 26 [1 favorite]


If he's "float[ing] around writing mildly obvious, mildly provocative things about online culture and tech", how is that any different to what 99% of us do on Mefi tech threads?

Well, I don't make a living out of writing on MeFi, nor would I want to, and I don't get declared an Important Tech Genius for it either.

And, as torokunai said, Microsoft Research and the like is the definition of "floating around". Can you find a good summary of what supposedly important work he's done during those 15 years he's been there? Because in my experience, the kind of reputation Lanier has gets self-sustaining after a while: Companies like Microsoft like to be able to tell shareholders that they employ these Important Geniuses in their research department, while the amount and quality of actual work being done there is, shall we say, very variable.
posted by Joakim Ziegler at 6:19 PM on June 26 [1 favorite]


We are the last generation to [...] overcome shyness/spectrum and learn to deal with people

Have they developed a magic cure for autism recently or something? Pretty sure kids still go to school, still have to deal with shyness, still have to deal with navigating confusing and opaque social norms if autistic.

The internet has changed a lot of stuff, sure, but it's not like the kids today aren't people, with all the regular people problems that have always existed.
posted by Dysk at 10:58 PM on June 26 [1 favorite]


In these days of Musk, a Lanier still feels like a national treasure to me. Even if he were sitting around at MS collecting checks for 95% playing Minesweeper and 5% issuing the occasional based quote to reporters. I agree that there may be cultural appropriation issues with his dreads.
posted by johngoren at 11:54 PM on June 26 [1 favorite]


Can you find a good summary of what supposedly important work he's done during those 15 years he's been there?

I feel like I'm being set homework... I haven't paid close attention to Lanier. I remember reading an article about VPL in Rolling Stone circa 1990. I did pick up a copy of Ten Arguments For Deleting Your Social Media Accounts Right Now but haven't actually read it yet (a victim of tsundoku). His 2017 book Dawn of the New Everything sounds as if it's half autobiography, so maybe that has the answers.

I do bristle at the idea that an organisation employing 500 researchers is the very definition of "floating around", but I dunno, I don't work there; maybe it is all like the TV show Silicon Valley and Lanier is just a Denpok Singh. Universities employ thousands of researchers, though, and academics are used to being told we live in ivory towers, so my spidey sense is tingling.

If it helps, I did find this 2013 article on how "Jaron Lanier Got Everything Wrong" which from the vantage point of 2024 makes it sound as if he got a lot of things right. The article is also from a crypto site, so... well, it doesn't mean it's automatically wrong, but if we're going on vibes...
posted by rory at 2:35 AM on June 27

