don't be evil
May 28, 2024 2:03 AM

Once upon a time, Google would have encouraged users to verify its AI's claims with a quick Google search. Ironically, this now only works if users click through results to check information against primary sources—the exact practice Google is trying to shift users away from. [extremetech]
posted by HearHere (22 comments total) 17 users marked this as a favorite
 
A decade ago, Google seemed an immensely beneficial part of society, organizing all the world’s information like a modern-day Library of Alexandria, available to everyone with a connection to the web.

Our Library of Alexandria burned so slowly we didn’t even smell the smoke.
posted by Kattullus at 3:33 AM on May 28 [55 favorites]


Basically, if the stench of not-actually-AI is apparent around anything on the internet, avoid it and disbelieve it as hard as you can. And no, it's not going to get better; it's going to degrade into a grey goo of nonsense feeding on itself.
posted by GallonOfAlan at 3:37 AM on May 28 [8 favorites]


Our Library of Alexandria burned so slowly we didn’t even smell the smoke.

This isn’t the best metaphor, since most of the information is still there; what they’ve done is replace the card catalog with a bunch of people yelling directions, some right, some wrong, and some gibberish.

So a lot of people have stopped using article indexes because Google was easier, and now Google has hamstrung itself. Will they go back? Or will this be another Xitter, a once-useful tool that people still cling to even though it’s rotting as we watch?
posted by GenjiandProust at 3:49 AM on May 28 [16 favorites]


Our Library of Alexandria burned so slowly...
> This isn’t the best metaphor
isn't it? it's still burning
posted by HearHere at 3:51 AM on May 28 [3 favorites]


Our Library of Alexandria burned so slowly...
this comment is !fire!

that's my dad-joke allowance for the day.

There are two issues that plague me vis-a-vis technology. One is a discrepancy between what it (whether it is an app or a device) wants me to do and what I expect it to do, given how it was advertised. Sometimes the usability I expected is hidden in a maze of near-indecipherable menus and/or behind language that is imprecise to the point of meaninglessness.

The second is the apparently fervent need to take a functional thing and turn it into a money-making juggernaut. Sometimes a thing, as it is, is the best version of itself. No-one actually needs an electronic hammer.

Google seems to be flirting with both of these - though in its defense, its ubiquity led to a lot of people gaming it so as to gain preference for their website(s). But lately I've been thinking of how to re-make a simple Google. I want "Accordions in Alaska" to give me all the versions of this on the web that it can find, and maybe a layer of refinement where I can narrow down the findings... please leave algorithms out of it, let me find my own way, because I'll do it better than they can.

I'm telling you, this whole "modern wonder of internet-itude!" is wearing very thin. Good enough, as opposed to cutting edge latest coolest most awesomest, is something I sorely miss.
posted by From Bklyn at 4:53 AM on May 28 [15 favorites]


I almost started using Bing. But the AI answers are so sus I just close out when I see them.

Now google wants in on that trash? Great
posted by AngelWuff at 5:05 AM on May 28 [4 favorites]


i don't often use google, but when i do, i &udm=14
posted by Aya Hirano on the Astral Plane at 5:10 AM on May 28 [45 favorites]
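
(For anyone who hasn't seen the trick: udm=14 is a real Google query parameter that requests the plain "Web" results view, which leaves out the AI Overview. A minimal sketch of wiring it up in Python; the helper name is my own invention:

    # Build a Google search URL that asks for the web-only results view.
    # The udm=14 parameter is real; the function name is just illustrative.
    from urllib.parse import urlencode

    def web_only_search_url(query: str) -> str:
        return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

    print(web_only_search_url("accordions in alaska"))
    # https://www.google.com/search?q=accordions+in+alaska&udm=14

Pointing a browser's custom search engine at a URL of that shape gives AI-Overview-free results on every search.)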


Clearly the sheer open-endedness of the infinite possibilities presented by AI makes traditional software testing obsolete.

(I bet there's been more than one idiot at google or meta who's said that with a straight face)
posted by RonButNotStupid at 5:54 AM on May 28 [4 favorites]


the infinite possibilities presented by AI make traditional software testing obsolete

Elon ought to replace all Tesla's QA people with AI. Nobody has been paying any attention to QA for years anyway, so this is a guaranteed dev-spend win that would shave years off slipping FSD past the regulators.
posted by flabdablet at 6:13 AM on May 28 [3 favorites]


I stopped using Google to search around ten years ago, when they blocked the website of the small liberal protestant church I worked for. They created a business website for it, which resulted in our actual website being forced to the third page of results; filled their own website for the church with incorrect information, such as a wrong main phone number and an image of a structure down the street instead of the building with the spire; and coolly informed me that if we entered into a subscription model with them, they would give us access to their site so that we could correct the information on it. As an added perk, they would teach us search engine optimization to ensure that the website they created for us would actually stay high in their search results.

That was when I realised that SEO meant that search had been sabotaged. I understood why, when I had been searching for the names of local people using search terms like "obituary Alexandre Dobblestyn" instead of getting the funeral home which was handling the services, I would get a long list of LinkedIn and other social media sites claiming to have information but which required me to log into them in order to find out if they actually did have anyone named Alexandre Dobblestyn. I was blackly amused to discover they frequently didn't have anyone with the precise name I was searching for, but they DID promise to make it possible for other people to find me, if I were Alexandre Dobblestyn and created an account for myself.

In the end I had to resort to adding the names of every funeral home in the province one by one to my search terms to actually get to, not the funeral homes' own websites, but obituary aggregator sites that might have the information I needed.

There followed two years in which, everywhere I went, I encountered advertisements offering to teach small businesses how to use SEO. The users got some of the bugs out by complying: the funeral homes paid the fees to Google, took their SEO course, and figured out how to tag themselves so that they consistently appeared, if not at the top of the search results, then usually on the first page of them. It will never work for the small funeral homes, but the big chains can get enough hits to ensure that they stay up at the top, sometimes even higher than the obituary aggregator sites.

The obituary aggregator sites were still managing to hold their position at the top when I last did any searches for recently deceased people, trying to find out if their funeral service information was available yet. That position was worth a lot of money to the aggregators, because if you went to their site and found the service information for Alexandre Dobblestyn, they would put the funeral home's name prominently at the top, making it look like you were at the site of the announcement the family had paid the funeral home to put up on the internet, and there would be a couple of buttons to click to make a donation in memory of Alexandre Dobblestyn. One click and you could make your donation to the American Cancer Society or the Heart and Stroke or Child Find. Very few people actually read all the way down to the bottom of the small print where the obituary finished up with "...The family are requesting donations to Sussex Search and Rescue or to the Alexandre Dobblestyn Memorial Fund for the Sussex Area Elementary School Lunch Fund." There was a big button with the name of a large non-profit right there, implying that was where your memorial donation should go. I could only presume that the obituary aggregator sites were being paid a LOT of money by some big non-profits.
posted by Jane the Brown at 6:25 AM on May 28 [39 favorites]


> This isn’t the best metaphor, since most of the information is still there

Not true, sadly. A lot of the sites that made the Internet so vibrant died because they stopped getting traffic from users.

It wasn't just Google's fault, tho. The SEO folks also had a hand.
posted by constraint at 7:17 AM on May 28 [18 favorites]


I'm a fan of these new AI-enhanced searches, but only when they provide adequate references. Google's SGE is still failing to do that. As the article notes, there are sources listed at the bottom of the answers it gives, but they aren't very tightly sourced. It reads more like an AI-generated response with some generic search results below.

Compare Phind or Bing, both of which attach footnotes to specific AI assertions, linking to URLs where you can learn more. Their references are way more useful than Google's. (They also both say "don't eat rocks", although I suspect these answers have been hand-tuned recently: Phind is already talking about the Google news from two days ago.)

LLMs are very good at synthesizing text into a gist, understanding natural language inputs, and producing natural language outputs. That's a real improvement over keyword search. But an LLM is still only as good as the sources it's drawing from, and it's essential for the person doing the research to evaluate those sources; the AI isn't going to do that well.

What's astonishing to me is how bad Google has been at rolling out AI. They had an enormous head start on this stuff with Google Brain and the DeepMind acquisition; they were doing the foundational work on large neural networks just a few years ago. How did they miss the boat so badly in 2024? It's the first time in over 20 years that there's been an opportunity for real competition for Google search in the West.
posted by Nelson at 7:48 AM on May 28 [3 favorites]


My hypothesis is that 1) Google was heavy into image processing/recognition and hallucination with DeepMind, which seemed like a reasonable place to start with LLMs because 2) they know that the text and factual input into these systems is unreliable, inconsistent, and dirty.

But now they're being surrounded by other companies going "fuck it, we don't care if this was trained on the corpus of Reddit, let's have it write pharmaceutical marketing text" and now they're trying to show parity.
posted by JoeZydeco at 8:09 AM on May 28 [6 favorites]


How did they miss the boat so badly in 2024?

Recall that the leaked strategy memo from only a year ago suggested that LLMs were important, game-changing, and completely vulnerable to non-profit OSS AI projects. In practice, it seems like OpenAI has been keeping ahead, but has been burning a lot of cash on GPU hours to stay there, which is either unprofitable or a winner-take-all scenario at best.
posted by pwnguin at 8:28 AM on May 28 [1 favorite]


So, if you download Edge and use it with Bing, you'll get an AI delivering answers, but basically with footnotes: it shows where it got each piece of information in its summary, and you can click through on each of those to get to primary sources.

Or, that's how it worked about a year ago. I haven't used it much since then. But I found that to be a good compromise between AI mediation and actually getting web results.
posted by hippybear at 1:07 PM on May 28


https://www.tomshardware.com/software/google-chrome/bye-bye-ai-how-to-block-googles-annoying-ai-overviews-and-just-get-search-results

Protips for those who are trying to retain sanity in this fucked up world.
posted by symbioid at 5:57 PM on May 28


Our Library of Alexandria burned so slowly we didn’t even smell the smoke.
The actual Library of Alexandria did not all burn up in one go, either.

So, to say that "it burned so slowly, you didn't even smell the smoke" is actually the appropriate metaphor.

Yes, the library was damaged by fires caused by battles and disasters in the city, but the event that many point to as the "Burning of the Library of Alexandria" -- the sack of the complex by Christian mobs in the 4th century -- happened well after most of the knowledge in the library had already been lost. Papyrus is fragile and susceptible to both moisture and excessive dryness. Papyrus scrolls needed to be copied and recopied in order for knowledge to be preserved, and thus the Library of Alexandria succumbed to that universal bane of all libraries: budget cuts.

Our knowledge of antiquity was not lost because somehow an entire civilization made the titanic mistake of putting all of its documents in one building that was set on fire. It was lost because people took knowledge for granted and thought the work of preserving and maintaining history was someone else's job. Not unlike what we're experiencing today.
posted by bl1nk at 6:09 PM on May 28 [8 favorites]


Google’s “AI Overview” can give false, misleading, and dangerous answers is an article by Kyle Orland for Ars Technica, where he and his colleagues recreated many of the viral weirdnesses of Google’s AI Overview, and divided them into categories of error: treating jokes as facts, bad sourcing, answering a different question, math and reading comprehension, the same name game, and not wrong exactly.

What sticks out to me is that these are most of the same types of errors as were prevalent in the original ChatGPT. It’s fascinating that there doesn’t seem to have been much progress since then.
posted by Kattullus at 12:39 AM on May 29 [1 favorite]


The notion that making GPT LLM responses less likely to be false, misleading and/or dangerous is a matter of progress is fundamentally flawed. The entire design purpose of a large language model is to generate output that looks convincing by virtue of being presented fluently, thereby working around hitherto-reliable bullshit detectors; it's hard not to mistake the fluency of a GPT LLM's output for lucidity.

Progress on LLMs constructed along the lines of any of those currently available can only ever improve their fluency, not their lucidity. Text that actually means things requires that those things be parts of an underlying world-model, and workable world-models need to contain so much more than the static statistical relationships between words that a pre-trained transformer can capture.

A world-model that might possibly "progress" in order to allow a GPT LLM to emit genuinely reliable statements simply doesn't exist as a design feature of any GPT LLM. The perception that some such world-model must somehow have emerged in order to account for GPT-4 working as well as it does is our hallucination, and the people who build these things are if anything more susceptible to that kind of hallucination than the rest of us.
posted by flabdablet at 1:28 AM on May 29 [4 favorites]
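
(To make the "statistics, not world-model" point concrete, here's a toy bigram generator in Python, a deliberately crude stand-in for a transformer, with a made-up corpus: it produces locally fluent text purely from counts of which word follows which, and contains nothing that could be checked against reality.

    # A bigram "language model": generation from token statistics alone.
    # Nothing here models the world; it only records which word has been
    # seen to follow which in the training text.
    import random
    from collections import defaultdict

    corpus = ("the library burned slowly and the smoke rose slowly "
              "and the library held scrolls and the scrolls burned").split()

    # Tally the observed successors of each word.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    # Sample a fluent-looking string from those tallies.
    word, out = "the", ["the"]
    for _ in range(9):
        word = random.choice(following.get(word, corpus))
        out.append(word)
    print(" ".join(out))  # e.g. "the scrolls burned slowly and the library held scrolls and"

A transformer adds vastly longer context and far richer statistics, but, per the argument above, no mechanism for checking its output against the world.)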


The AI summaries are useless *and* use like 30 times more electricity than traditional searches. They should be an opt-in feature, not something you need to hunt around in settings to figure out how to turn off (thank God you can turn them off, though - even if it's going to create a class of people who don't realize how bad it is for everyone else).

Bonus, now I distrust ALL of google's helpful search suggestions and not just the ones in the AI box.
posted by subdee at 7:24 AM on May 29 [1 favorite]


“Some Lessons From Google’s AI Mistakes,” Aaron Gordon, Proof, 29 May 2024
posted by ob1quixote at 6:35 PM on June 1 [1 favorite]


Jesus fuck, that article ob1quixote linked to is the most damning yet. Excerpt:
Much more troubling, AI Overview is occasionally willing to answer questions about racial superiority or inferiority. When I asked Google which race is the fastest, AI Overview told me Black athletes are faster. It would be one thing if it (correctly) pointed out that Black athletes are disproportionately represented among Olympic sprinters and major marathon winners compared to their share of the global population, while adding social and cultural factors play an important role in that observation. But it does no such thing. It says the difference may be “due to physical differences in their body structure.”

The basis for this statement, which was cited but not directly linked to from the AI Overview, is a 2010 study in an obscure scientific journal that doesn’t say what AI Overview says. All it did was tally the races of world record runners and then find studies that measured the sizes and limbs of various races. Many of those studies were done in the 1920s during the height of eugenics, when scientists were purposefully looking for evidence of differences between races. In no way does this study do what AI Overview says it does, proving that “black athletes” are faster because they have different body structures. Disturbingly, the paper’s authors suggested the study may have relevant findings for “the evolution of size and shape in dog racing.”

I received a similarly troubling response to a question about which race has the highest pain tolerance. The field of pain tolerance measurement across races is an important one for medicine not because different races have different pain tolerance but because different cultures have different values and levels of social acceptance for expressing pain, which is helpful for doctors and nurses to be aware of when asking patients how much pain they feel. This is utterly absent from the AI Overview response. Instead, it refers to one 50-year-old study and offers several other absurdly broad statements while providing no visible links to sources.
posted by Kattullus at 9:40 PM on June 1 [2 favorites]

