“Technology is neither good nor bad; nor is it neutral.”
September 14, 2024 4:33 AM

We can’t live without air. We can’t live without water. And now we can’t live without our phones. Yet our digital information systems are failing us. Promises of unlimited connectivity and access have led to a fractionalization of reality and levels of noise that undermine our social cohesion. Without a common understanding and language about what we are facing, we put at risk our democratic elections, the resolution of conflicts, our health and the health of the planet. In order to move beyond just reacting to the next catastrophe, we can learn something from water. from Stop Drinking from the Toilet! [Coda] posted by chavenet (14 comments total) 25 users marked this as a favorite
 
I wrote rants about disinformation/misinformation and media literacy online before Facebook even existed. I feel like the situation got orders of magnitude worse thanks to engagement-for-profit, and AI crap has pushed it along even further.

I despair at how many people trust large language models as if they're some kind of large truth model. It doesn't say anything good about media literacy or critical thinking.

The article's metaphor here is very apt.
posted by Foosnark at 9:12 AM on September 14 [5 favorites]


I'm probably screwing myself over by disclosing this, but...

the Facebook ad service effectively diagnosed me with ASD.

(What happened: a pharma company advertised a drug study for autistic adults. Lots of diagnosed ASD people clicked on the ad and enrolled. Facebook then served the ad to anyone whose Facebook data trail is similar enough to those who clicked, similar in a non-describable machine-learning sense. Myself included. I'm undiagnosed, so I was not eligible. And neither the pharma company involved, nor the university involved, nor Facebook itself was doing anything new or creepy.)

Your data trail, plus a machine learning training run, can group you in with people whose neurodivergence is similar to yours. Not with high enough sensitivity and specificity to be useful to any medical or scientific outfit with actual ethics, but for outfits like the troll farms of St. Petersburg or the next Cambridge Analytica, very useful. And not just for ASD.
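
(For the mechanically curious, the core of lookalike matching can be sketched in a few lines of Python. Everything here is invented for illustration: the feature vectors, the threshold, and the idea that it reduces to similarity against a single centroid. Facebook has never disclosed its actual model.)

    # Toy sketch of "lookalike audience" matching. Assumes some upstream
    # model has already reduced each user's data trail to a feature vector.
    # Vectors, threshold, and the centroid approach are all invented.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Users who clicked the ad and enrolled: the "seed" audience.
    seed = np.array([
        [0.90, 0.10, 0.70],
        [0.80, 0.20, 0.60],
        [0.85, 0.15, 0.65],
    ])
    centroid = seed.mean(axis=0)

    # Anyone whose trail is "similar enough" gets the ad, diagnosis or not.
    candidate = np.array([0.82, 0.18, 0.60])
    if cosine_similarity(candidate, centroid) > 0.95:
        print("serve ad")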

Obviously I can't test this myself, but I'm more than willing to bet that the data trail of young twentysomethings with incipient schizophrenia is discernible enough to existing machine learning algorithms that if you want to seriously fuck a country up, you can get a list of them and start using social media to whisper sweet nothings into their phones.

During the Cold War, copying a page of a West German phone book and handing it to a foreigner met the legal definition of espionage. We're rapidly approaching a time when people's data will have to be guarded with the same fanaticism, to the point that corporations will become increasingly inclined not to hold it at all.
posted by ocschwar at 9:37 AM on September 14 [3 favorites]


Mr. Brooker, I'm happy to sign over any copyright claim if you want to make this an episode of Black Mirror.
posted by ocschwar at 9:44 AM on September 14 [2 favorites]


>as if they're some kind of large truth model. It doesn't say anything good about media literacy or critical thinking

I greatly, greatly prefer interrogating ChatGPT vs. whatever Google thinks it's doing in search now.

E.g. this recent question I had on NAFTA's replacement. Looks correct, and that's good enough for me (not really, but it's clearly going in the right direction).
posted by torokunai at 10:05 AM on September 14 [1 favorite]


How humanity lost control - "The guru who made the most progress in building management cybernetics was the counterculture-era management consultant Stafford Beer, whose book Brain of the Firm explored how bureaucracies can be reformed so that the internal flow of information between deciders and decided-upon is kept in balance. Without that, a system will not remain viable and useful to humanity over time."[1,2]
posted by kliuless at 10:44 AM on September 14 [5 favorites]


Do not use ChatGPT to answer questions you don't already know the answer to. It is designed to produce output that "looks correct", not output that is correct. Even if it answers questions correctly some fraction of the time, even a high fraction of the time, its design is such that you cannot identify the source of its answers to determine whether any given answer actually is correct. Did it pull data from the primary text of the bill, available from thomas.loc.gov? Did it pull analysis from a reputable academic or journalist? Is it repeating talking points from a biased political operative? Unlike a search engine, it does not and cannot tell you this. This is what is meant by poor media literacy: accepting answers without any consideration of their source, based purely on how plausible they seem. ChatGPT and other LLMs by design deny you the ability to source their claims, and so are not fit for the purpose of answering real questions.

Reporting "answers" generated by ChatGPT here further promotes a culture of passively accepting authoritatively-worded claims without critically engaging with the sources, and I wish you wouldn't do it. We need more media literacy and critical thinking, not repetition of the output of an opaque algorithm under the control of billionaire-led corporations with hostile agendas.
posted by biogeo at 10:48 AM on September 14 [18 favorites]


Looks correct, and that's good enough for me

That is a huge, huge problem for society, for knowledge, and for life in general as we move into whatever this AI-powered future is going to be. I have so far tried not to engage much with AI things, while also trying to keep an open mind about them as they evolve. But the more I see statements like that, and the more the acceptance of "eh, it looks right" as a standard becomes a baseline, the more I am firmly convinced we can't really trust anything online any more and that the old-internet fear of context collapse was a quaint canary in a very, very dangerous coalmine.
posted by pdb at 11:19 AM on September 14 [9 favorites]


>I wish you wouldn't do it.

I agree with all of that, especially that OpenAI needs to show its receipts for every assertion it makes. I also firmly believe Elon is spinning up Grok to be the 'Conservapedia' alternative to OpenAI's 'Wikipedia'.

When I recall the knowledge I've stuffed between my ears, I generally remember where each factoid I "know" came from. Clearly OpenAI needs to incorporate that into the billions of parameters it is assembling into its models.

I do believe this parameterization is roughly the right direction to go; the stuff 4o is able to do for me on a daily basis is truly outstanding, well worth $1/day LOL.

I see a future where the open corpus of human writing and knowledge in general is accurately digested, cross-referenced, weighted . . . the whole business of being an intelligent entity. We're not quite there yet but I think that's where we're going this century if not decade.
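
Something in the direction of retrieval with attached sources could do it. Here's a toy sketch (the two-document corpus and the keyword-overlap scoring are mine, not anything OpenAI actually does; real systems would use learned embeddings):

    # Toy "show your receipts" retrieval: every snippet keeps its source,
    # so the answer can cite where it came from. Corpus and scoring are
    # invented for illustration.
    corpus = [
        {"source": "congress.gov", "text": "The USMCA replaced NAFTA on July 1, 2020."},
        {"source": "some-blog.example", "text": "NAFTA was great, actually."},
    ]

    def retrieve(question):
        q = set(question.lower().split())
        return max(corpus, key=lambda doc: len(q & set(doc["text"].lower().split())))

    hit = retrieve("what replaced NAFTA")
    print(f"{hit['text']} [source: {hit['source']}]")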
posted by torokunai at 12:00 PM on September 14 [1 favorite]


Out and about on the internet, many of the strongest proponents I’ve seen for LLMs as a tool for “knowledge” are people who have previously railed about the importance of “doing your own research”. Of course, a lot of what such folks have provided as evidence of their own research has looked a hell of a lot like the first paper they came across that confirmed their pre-existing biases. The switch to “ChatGPT says so” isn’t entirely unexpected, but it still makes me sad and angry.
posted by rrrrrrrrrt at 1:34 PM on September 14 [7 favorites]


Kranzberg's original article is a lot more nuanced than Estrin's Lite treatment, and invokes scholars like Ellul and Winner with more precision. Note that Kranzberg treats technological determinism as an unresolved question, which to me suggests more of an instrumentalist perspective than Kranzberg lets on, and one that Estrin doesn't really take on.

"Is technology neutral?" depends on context. Stephen J. Kline in "What Is Technology?" suggests four sub-definitions: technology as artifact (e.g., an airplane), know-how or techné (e.g., aeronautics), system of production (e.g., Boeing's factories and supply chains), and system of use (e.g., air travel, with its bureaucratic and legal and economic components). The canard of neutrality (one of my mentors used to say, "Calling something good or bad is the least interesting thing you can say about it") plays out differently when you're talking about a Hellfire missile as opposed to when you're talking about an LLM.
posted by vitia at 4:13 PM on September 14 [3 favorites]


Do not use ChatGPT to answer questions you don't already know the answer to.

Back when ChatGPT was the hot new thing, I had a long list of numbers I needed to put single quotes and a comma around, something I can do in Excel in about 2 minutes (really 1 minute, but Microsoft software still blows and it rather inexplicably takes 60 seconds for Excel to open).

So I decided to play around with ChatGPT, and no matter what I tried, I could not get it to give me the results in a column, which wasn’t strictly necessary but it’s what I wanted! And then I realized: how do I even know these are the same numbers I pasted into it?

I noped out of using it after that and haven’t touched it since.
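
(For anyone with the same task, the deterministic version is a few lines of Python. Assuming one number per line in a file; "numbers.txt" is a made-up filename.)

    # Wrap each number in single quotes and append a comma, one per line.
    # Deterministic, so the output provably contains exactly the input.
    with open("numbers.txt") as f:
        for line in f:
            n = line.strip()
            if n:
                print(f"'{n}',")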
posted by rhymedirective at 7:56 PM on September 14 [5 favorites]


Good article. It strengthens my belief that I'm better off without any media that uses an algorithm to show me things. I want my media to be presented strictly chronologically or based on the best match to my search terms.

This is not always possible, but it's a good thing to strive for, and a good criterion for choosing which platforms to use and which ones to avoid. No recommendations also means less aimless scrolling. All in all, I can recommend this approach.
posted by Too-Ticky at 5:15 AM on September 15 [2 favorites]


I don't want to discount your experience, ocschwar, but as someone who has run many FB ad campaigns, the targeting is way, way more nuanced and overwhelmingly customizable than what you are describing. It's far more likely you were targeted for characteristics that were unrelated to an individual data trail that might suggest a diagnosable medical condition.

What I mean is, there are so many other factors that a campaign could and would target ahead of an obscure data-behavior cluster, and they would achieve the same targeting results.
posted by jordantwodelta at 8:46 AM on September 15 [1 favorite]


It's far more likely you were targeted for characteristics that were unrelated to an individual data trail that might suggest a diagnosable medical condition.

More likely those characteristics are just numerical coefficients in a training model that are not explicable even to the developers who wrote the code and ran the training for it.
posted by ocschwar at 4:27 PM on September 15 [2 favorites]



