We are fucked
August 23, 2024 8:39 AM

Our basic assumptions about photos capturing reality are about to go up in smoke. The Verge reports on the Pixel 9, "a handheld telephone that spews lies as a fun little bonus feature."
posted by ominous_paws (99 comments total) 34 users marked this as a favorite
 
We are not fucked, we just can no longer trust digital imagery. Real film, which is already making a bit of a comeback with Instax cameras et al, will suddenly be relevant again.
posted by grumpybear69 at 8:43 AM on August 23 [37 favorites]


2004: Pics or BS
2024: Pics are BS
posted by pwnguin at 8:48 AM on August 23 [22 favorites]


real film is only useful insofar as you have the physical negative in front of you though, almost every photo humans actually see these days has a digital intermediary
posted by BungaDunga at 8:48 AM on August 23 [49 favorites]


Very interesting article. I have pretty much pooh-poohed such AI/image fears for a while, but the images shown in that article are kind of scary - and as someone whose job occasionally involves photos sent in by the public (news submissions), I'm becoming increasingly pessimistic.
posted by davidmsc at 8:51 AM on August 23 [3 favorites]


I've seen this article floating around, but this was the time I stopped to read it. Thanks!

I don't agree with the basic thrust of the article. The basic claim here is in some senses quite plausible, but this feels to me of a piece with dire claims about the state of U.S. housing, incomes, food prices, etc. that are predicated on the false idea that post-WWII norms were universal. It may feel a certain bad way to Boomers, Gen X, etc., but I don't think this kind of doubt and questioning is going to feel especially radical to many people who are teens or in their 20s now, and who grew up with the "primitive" versions of this tech all around them.

Many current MeFites, some in the U.S., live or have lived in a world where they must go to extraordinary lengths to prove various things to be true, reflective of their reality, and do so even while they hold in their hands (on a phone or otherwise) proof of their claims. This change has the potential to be pernicious and disruptive, but I think that it more generally means that we're moving back into a time where you can't point at one document and say "there, proof!" and expect to be believed without further discussion.
posted by cupcakeninja at 8:53 AM on August 23 [11 favorites]


If you have ever played around with photography then you know there are many ways to manipulate an image even before the shutter is pressed; and many more afterwards. These tools will definitely make people feel photographs should be viewed with some skepticism, but that has always been true.
posted by TedW at 8:56 AM on August 23 [19 favorites]


there is an "authentication standard" for photos but frankly I don't see it being very useful. In theory it will allow photographers to prove to publishers (and readers) that a photo came from a camera, and what edits (if any) were done to it. Which might have some marginal benefit, but with people willing to believe that Harris' crowds are AI, why would they trust some esoteric signature scheme? it's all run by the illuminati anyway, the moon is a hologram, etc
posted by BungaDunga at 8:58 AM on August 23 [9 favorites]


These tools will definitely make people feel photographs should be viewed with some skepticism, but that has always been true.

I'm not sure it's ever been this easy to manipulate and distribute photos widely. Those two things together are a sea-change. Manipulating photos well used to require a moderately expensive program and some real talent. Now it requires some voice prompts and the cell phone you already own.
posted by The_Vegetables at 9:02 AM on August 23 [28 favorites]


The sooner we internalize this message, the better: the default assumption is that unattributed photos are bullshit. Provenance will be the way we decide to trust photos, or not: who is showing me the photo, where did they get it, and do I trust these sources?

There has been CS research on signing digital photos with an unforgeable chain of custody that attests they have only been modified in approved ways (e.g. cropping, light balance, etc) but to make that work you need support for it in editors and OSes that are locked down, with trust rooted in TPMs. I wonder if we’ll get mass market support for that tech, similar to how browsers display SSL info for a web page.
posted by qxntpqbbbqxl at 9:05 AM on August 23 [32 favorites]
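
To make that scheme concrete, here is a minimal sketch of a signed edit chain, with hypothetical keys, payloads, and record formats; only the Ed25519 primitives from Python's `cryptography` package are real, and actual proposals (C2PA among them) are considerably more involved:

```python
# Sketch of a chain of custody: the camera signs the original image's hash,
# and each approved edit appends a record signed by the editor's key.
# Everything here is illustrative except the cryptography library calls.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # in practice, sealed inside a TPM
editor_key = Ed25519PrivateKey.generate()   # held by the (trusted) editing tool

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"...raw image bytes..."
chain = [{"op": "capture", "hash": sha256(original),
          "sig": camera_key.sign(sha256(original).encode()).hex()}]

cropped = b"...image bytes after an approved crop..."
record = {"op": "crop", "params": [0, 0, 800, 600], "hash": sha256(cropped)}
payload = json.dumps(record, sort_keys=True).encode()
chain.append({**record, "sig": editor_key.sign(payload).hex()})

def verify(chain, image_bytes, keys) -> bool:
    # Every record's signature must check out, and the final hash must match
    # the image we were actually handed.
    for rec, key in zip(chain, keys):
        body = {k: v for k, v in rec.items() if k != "sig"}
        payload = (body["hash"].encode() if body["op"] == "capture"
                   else json.dumps(body, sort_keys=True).encode())
        key.public_key().verify(bytes.fromhex(rec["sig"]), payload)  # raises on tamper
    return chain[-1]["hash"] == sha256(image_bytes)

print(verify(chain, cropped, [camera_key, editor_key]))  # True
```

The cryptography is the easy part; as the comment says, the hard part is keeping keys sealed in locked-down hardware and getting every editor and OS in the pipeline to play along.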


In the mid-1990s I was able to print film negatives and positives on digital imagesetters or slide printers, which could then be used to make prints. The images themselves could be wholly novel or just heavily-retouched.

*shrug* This cat has been out of the bag for a long time.
posted by wenestvedt at 9:06 AM on August 23 [10 favorites]


If you have ever played around with photography then you know there are many ways to manipulate an image even before the shutter is pressed; and many more afterwards. These tools will definitely make people feel photographs should be viewed with some skepticism, but that has always been true.

this stuff is basically the CRISPR of photo manipulation. photos (and genomes) could always be manipulated the hard, analog way, but now you can insert the precise edits you want with much greater ease, on the fly, in real time. If you find yourself in proximity to, say, some huge breaking news event, you can almost immediately post a realistically manipulated image to show whatever you want; you could even sell it to a less-picky news publisher. Don't have a perfect shot of the dramatic event? Well, you can just clean it up and make it a little bit more dramatic using AI and then post it within a minute or two.

It used to be that people who wanted to circulate fake images of breaking news events had to recycle old images, which were at least possible to spot. But now you can circulate completely novel, believable images that are tailored to whatever the exact breaking news incident is.
posted by BungaDunga at 9:07 AM on August 23 [32 favorites]


Everyone who is reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth.

Wow, is this person behind the times. Among other things the verb "to photoshop" entered the English language in the early 1990s.
posted by Tell Me No Lies at 9:08 AM on August 23 [12 favorites]


to make that work you need support for it in editors and OSes that are locked down, with trust rooted in TPMs

my guess is that eventually people will just work out how to inject fake signals into camera sensors themselves, since the camera will presumably digitally sign anything that's coming off the sensor. I'm sure there are ways to make that very difficult but I imagine bulletproof DRM is not exactly most of these manufacturers' strong suit
posted by BungaDunga at 9:11 AM on August 23 [4 favorites]


a world currently where they must go to extraordinary lengths to prove various things to be true

For some reason I was reminded of the insurance industry.

It's easy to miss the ways in which complexity (or power) has increased our time and costs of doing things. What this means, generally, is that we do less. Or we assume more risk. Trust is a valuable resource all its own, but you don't hear about it often.
posted by Reasonably Everything Happens at 9:13 AM on August 23 [13 favorites]


It may feel a certain bad way to Boomers, Gen X, etc., but I don't think this kind of doubt and questioning is going to feel especially radical to many people who are teens or in their 20s now, and who grew up with the "primitive" versions of this tech all around them.
--
Manipulating photos well used to require a moderately expensive program and some real talent. Now it requires some voice prompts and the cell phone you already own.


This isn't about photos, but rather about attempts to manipulate data and the skill on either side...

So, Mike "Pillow Guy" Lindell tried to "go undercover" at the DNC and stir up shit. But one of the people he tried to pick a fight with was a twelve-year-old kid: a content creator named Knowa De Brasco. And De Brasco wiped the floor with Lindell, who was trying to spout lies about stolen elections and missing votes in Georgia. De Brasco repeatedly asked him for proof, Lindell failed to deliver, and De Brasco finally said "so your only proof is 'Trust me, bro'?" De Brasco also got in a dig about Lindell's bankruptcy at one point, before ultimately just walking away from the situation with a "whatever."

My point being: if Lindell had thought to make up a source to satisfy De Brasco's demands, it could have at least been a roadblock for De Brasco to have to say he needed to look into it. But Lindell didn't even think to do that. He had a tool available to him (pure bullshit) but didn't even think to use it. And a twelve-year-old kid was already savvy enough to immediately spot the bullshit, and also know that walking away from the situation was the wisest move because he knew he'd won.

These tools are worrying, but the tools themselves aren't the concern - the bigger concern is who would be smart enough to use them. And fortunately, not everyone is.
posted by EmpressCallipygos at 9:14 AM on August 23 [24 favorites]


I’m an amateur astrophotographer and online forums in this space routinely devolve into arguments about whether or not space pictures are “real”. For sure, 100% of the images you’ve seen of deep sky objects - Andromeda, spiral galaxies, etc. - are digitally manipulated. But then the question becomes: at what point is the image *not* manipulated? It quickly becomes turtles all the way down. I kind of feel quantum physics is like this too but that’s for another day.

For real-world imagery we all have our own bar for skepticism and burden of proof. Was Obama’s suit really tan? Maybe it was just a trick of the light or an artifact of the TV cameras. Were you actually there? Were you on the left side or right of the stage? Did you see him choose it and put it on? What qualifies as tan anyways?

We have a time tested (but also easily manipulated) burden of proof in the legal domain. It’s not clear how to extend that to online discourse and unfortunately too many people engage in motivated reasoning when they decide what to believe and what to question.
posted by simra at 9:15 AM on August 23 [6 favorites]


It feels to me like you could get evidence that a picture is real by getting multiple shots of the same thing from different angles. If you shoot a real-life scene from multiple angles, the details will line up, but if you took each of those photos and told an AI "add a wrecked car to this", wouldn't the wrecked car itself be inconsistent because it's being generated anew each time?
posted by wanderingmind at 9:19 AM on August 23 [18 favorites]


A grim start to the day, but feels true. Here’s hoping watermarks get a lot better…
posted by Going To Maine at 9:19 AM on August 23 [2 favorites]


For real-world imagery we all have our own bar for skepticism and burden of proof. Was Obama’s suit really tan? Maybe it was just a trick of the light or an artifact of the TV cameras.

Or relatedly: was the dress black and blue or was it white and gold?
posted by EmpressCallipygos at 9:22 AM on August 23 [10 favorites]


Everyone who takes a photo with a Pixel 9 is probably going to be immediately saving that picture to the cloud.

Why can't there be a digital equivalent of a poisoner's book where every photo that gets the AI treatment is recorded and made available for a reverse image search? Privacy would be a nightmare, but we're already used to handing everything private about ourselves to the likes of Google, Facebook, et al. anyway, so what difference would it make? And anyone capable of subverting a scheme like that probably isn't using a Pixel 9 to manipulate their photos.
posted by RonButNotStupid at 9:33 AM on August 23 [5 favorites]
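
A toy version of that registry is easy to sketch: record a perceptual hash of every AI-edited photo at save time, then match queries by Hamming distance. The shared `REGISTRY` and the phone-side hook are hypothetical; the average-hash technique and the Pillow calls are real:

```python
# Toy "poisoner's book": a registry of perceptual hashes of AI-edited photos.
# Average-hash: shrink to 8x8 grayscale, set one bit per pixel by comparing
# it to the mean; similar images land within a few bits of each other.
from PIL import Image

def average_hash(path: str) -> int:
    img = Image.open(path).convert("L").resize((8, 8), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

REGISTRY: set[int] = set()  # stand-in for a shared lookup service

def record_ai_edit(path: str) -> None:
    REGISTRY.add(average_hash(path))  # imagine the editor calling this on save

def looks_ai_edited(path: str, max_distance: int = 5) -> bool:
    h = average_hash(path)
    # Hamming distance tolerates recompression and mild resizing
    return any(bin(h ^ known).count("1") <= max_distance for known in REGISTRY)
```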


*shrug* This cat has been out of the bag for a long time.

Quoting myself from three or so years ago ...

am I allowed to brag here? About thirty-three years ago, the end of 1987, I wrote something in a local fanzine about how, given coming evolutions in digital technology, you'd eventually be able to fake anything. Wayne Gretzky could be pictured mainlining heroin, Ronald Reagan could be shown f***ing a pig. This after a discussion with a tech-head friend who'd been keeping up on r+d in the digital recording realm. "If it can be done with a sound file (it has), it can be done with a picture file, because once you get down to that level, it's all just ones and zeroes." Or words to that effect.

One more reason, I guess, why we all need to get better at being confused.

posted by philip-random at 9:34 AM on August 23 [7 favorites]


It feels to me like you could get evidence that a picture is real by getting multiple shots of the same thing from different angles

For now, maybe. Eventually video inpainting will be good enough that if you shoot a video of a scene it will be able to inject realistic, consistent objects that will look correct as the camera moves around. Pull out frames from the video and blam, now you have a series of images of a consistent scene with fabricated content.
posted by BungaDunga at 9:35 AM on August 23 [8 favorites]


In the mid-1990s I was able to print film negatives and positives on digital imagesetters or slide printers, which could then be used to make prints. The images themselves could be wholly novel or just heavily-retouched.

These kinds of comments are mind-boggling to me. Yes, photos could have been manipulated at any time throughout history, but that required skill and specialized equipment. The part you're missing is that the specialized equipment is already in the pocket of just about everyone and the skill is no longer needed. Do you not see how that makes it not at all similar to what you're talking about?

It's like, all of a sudden people can fly just by thinking about it and people are saying, "Yes, but planes have been around for my entire lifetime!"
posted by dobbs at 9:37 AM on August 23 [75 favorites]


Why can't there be a digital equivalent of a poisoner's book where every photo that gets the AI treatment is recorded and made available for a reverse image search?

The cat is out of the bag on this: anyone with the right hardware can download high-quality image models and run them entirely offline, with no telemetry and nobody looking over their shoulder. OpenAI reportedly does have an ability to detect ChatGPT's own outputs but has not deployed it as a service, because it would reduce the value of ChatGPT if anyone other than OpenAI could reliably detect its use.
posted by BungaDunga at 9:39 AM on August 23 [5 favorites]
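
For a sense of how low that bar is: once the weights are cached locally, the entire offline workflow is about this long. A sketch using the open-source `diffusers` library and one public checkpoint among thousands:

```python
# Text-to-image, fully local: no telemetry, no service in the loop once the
# weights are downloaded. Wants a CUDA GPU with a few GB of VRAM.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one public checkpoint; many others exist
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("news-style photo of a crowded street, dramatic lighting").images[0]
image.save("fake.png")  # no registry, no watermark, nobody notified
```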


and anyway even if you don't have the right hardware you can rent it online metered by the minute from someone in a different jurisdiction who doesn't care about recording hashes of AI images and reporting them to a centralized authority
posted by BungaDunga at 9:40 AM on August 23 [3 favorites]


This blasé attitude of “well, images were always fakeable” is missing the point of how plausible use cases expand rapidly as something gets increasingly more trivial to achieve. Think of all the ways that corporations can use automated tools to chisel every last fraction of a cent out of their customers—things they could in theory have done before but would never have been profitable or practical. Yes, we’ve long been sliding down a slippery slope on the reliability of images, but this feels like the plummet of a trapdoor opening.
posted by Horace Rumpole at 9:41 AM on August 23 [34 favorites]


When I was a kid, I lived in a (Global South) country that experienced its share of problems with poverty and political violence. Without fail, (Western) reporting on the country would employ imagery of people living in conditions of abject despair, and of menacing-looking men with guns in ill-fitting fatigues.

As a result, when I arrived in Europe, people were surprised to learn that my childhood had been largely uneventful. Although, yes, we did flee the country to escape the violence — in accordance with their preconceptions — otherwise nothing about the imagery they received prepared them for the reality that most people did indeed read books and watch television, and that most people didn't suffer from hunger or malnutrition.

It's a strange new world in which images can be generated and manipulated so easily and economically ("at scale"). But it's never been really true, as TFA rather breathlessly asserts, that photographs have always been "the linchpin of social consensus".
posted by dmh at 9:43 AM on August 23 [14 favorites]


This makes me worry about the ability of oppressed peoples to publicize what is going on in their regions. Outsiders rely on photos and videos (in addition to reports and interviews) to have a decent sense of what is going on in Gaza right now. Open source intelligence relies on geolocated footage and imagery to report on the Russian invasion of Ukraine. I'm sure governments are rubbing their hands in anticipation of being able to more easily dismiss such evidence as "manipulated".
posted by Grimp0teuthis at 9:48 AM on August 23 [21 favorites]


The cat is out of the bag on this: anyone with the right hardware can download high-quality image models and run them entirely offline,

Yeah, but are the majority of Pixel 9 users going to do that? Would they even know how? We're talking about a feature that is bundled with a consumer-grade phone. Some significant percentage of people using this feature probably wouldn't be manipulating images if it hadn't been provided to them.
posted by RonButNotStupid at 9:48 AM on August 23 [9 favorites]


The cat's been out of the bag on this technology for a while—SD infill has been a thing for a few years, has been getting better, and has been usable on systems roughly equivalent to "a mediocre gaming PC" for some time—but a major vendor shipping it on computers people actually use, that is to say phones, is the fancy fresh tipping point here. That's because the ready accessibility of the tool, combined with the fact that people are dicks, combined with the fact that people are gormless creates a situation.

It's not so much that anything's been let out of the bag so much as the bag's been made much, much easier for a larger population of assholes to reach into. As with anything used to propagate information, assholes are gonna use it to amplify lies. Happened with the typewriter, happened with the Internet, and we all fell for that, too.

Also: The photo of the dress is black and blue, but the dress itself? You'd have to have been there to know for sure...
posted by majick at 9:49 AM on August 23 [8 favorites]


Also does anyone else find the ads about casually manipulating photos of family and friends to be jarring? Even if the capability has been there, I haven't heard of most people cleaning up their personal photos except in exceptional cases like engagement photos taken in public.

If people end up using this feature regularly, our photos will represent an idealized, more minimalist view of the events of our lives, with fewer tourists in the background and other subtle and less-subtle changes that tend to make us look a bit more like the main character.
posted by Grimp0teuthis at 9:59 AM on August 23 [14 favorites]


Wow, is this person behind the times. Among other things the verb "to photoshop" entered the English language in the early 1990s.

Also, Jurassic Park came out in 1993. Forget still pictures, unless people believed Spielberg was directing real dinosaurs everyone was pretty well aware of the possibilities at that point.
posted by Tell Me No Lies at 10:02 AM on August 23 [1 favorite]


This is just to say that opening the Verge link resulted in a popup where they requested permission to share my info with their 873 partners. They bolded the number.

In what fuckin' universe do I need to give blanket permission to share my personal info with over 870 other companies just to read a goddamn article? (Oh right the universe where digital images are all lies and criminals have full immunity if they get elected, right right)

What I don't get is why they bolded the number and then detailed how opting out was a possibility. It's like a silent scream for help. "They're forcing us to share your data, please click 'FUCK OFF' to tell our advertising partners to fuck off"

Verge, are you OK? If you are not OK please present me with an opt out button
posted by caution live frogs at 10:02 AM on August 23 [15 favorites]


I wonder if the law-talking people can chime in on what this trend means for the courtroom, esp. with AI-generated video on the horizon. Will it be more difficult to authenticate evidence? And will this favor more powerful or less powerful parties?
posted by credulous at 10:12 AM on August 23 [6 favorites]


It's funny, just yesterday I was thinking about reading Michael Crichton's Rising Sun back in the 90s, where one character tells another, in a kind of blandly-stated, "didn't you know?" kind of way, that soon video evidence of any sort will be untrustworthy to use in prosecution and trial.
posted by The Pluto Gangsta at 10:13 AM on August 23 [2 favorites]


I mean we've known memory is unreliable forever, but we still allow people to testify what they think they saw, so...
posted by caution live frogs at 10:18 AM on August 23 [2 favorites]


If people end up using this feature regularly, our photos will represent an idealized, more minimalist view of the events of our lives, with fewer tourists in the background and other subtle and less-subtle changes that tend to make us look a bit more like the main character.

This, exactly. I have a Pixel 6, and Magic Eraser even on that is mostly as advertised. My teen (iPhone) already knows to send me a picture when she wants a background person removed. I don't really even have to do anything - I click "find people to remove", and it generally highlights all the people in the background and I can click whichever I want gone. Sometimes it leaves an artifact and I can circle it and remove it.

The resulting output isn't perfect, and wouldn't pass a close inspection - but she isn't trying to do that, and more crucially the expectation of the people she's showing it to is that there's some baseline editing. Which I think is fine; what worries me is less "phones can do good-enough image manipulation" and more "slightly better consumer devices / applications can do image manipulation indistinguishable from reality".
posted by true at 10:20 AM on August 23 [7 favorites]


People have been manipulating images for years using Adobe Photoshop - the only difference is that now it’s a lot easier.
posted by tallmiddleagedgeek at 10:21 AM on August 23 [1 favorite]


Horse is already out of the barn, no? The technology isn't the problem as much as people not being informed enough to check their sources before blindly believing them. Or happily accepting racist sources because they're already racists. This isn't as much a sea change as just another ripple in an ongoing barrage of lies.

Much like radio carried Mussolini and later fascists to power, the advent of the Internet enabled the rise of master-liars, the Putins and Trumps, via dim target audiences. The lowering of barriers to entry on the Internet gave everyone, especially bad actors, the ability to convey messages to gullible audiences in formats those targets had been trained to trust. Namely, it at first looked like print journalism, which had at least some social control standards throughout the later 20th century (when a legit publication lied, they'd be called out on it). The Daily Mail and later Fox News had already learned to mimic actual journalism to enable tax cuts for assholes, and lacking shame they made getting called out a sign of pride. People's gullibility and desire to hate led to the rest.
posted by Abehammerb Lincoln at 10:23 AM on August 23 [5 favorites]


the only difference is that now it’s a lot easier.

Great job! This is the main point of the article ❤️
posted by ominous_paws at 10:25 AM on August 23 [24 favorites]


I think the larger change that the post glosses over is that back in the day, WWII through the early aughts, people weren't carrying around digital cameras that produced instantly developed, photorealistic, easily shared pictures. The post casually mentions photographing a dent on your rental car, but that wouldn't have been a thing in, say, 1975. You'd have a limited number of photographs in each roll, you wouldn't be sure how they'd turn out, you'd have to wait for them to be developed (unless you had a Polaroid). And most people weren't carrying cameras with them all the time. The first time someone took a photograph of a scene including me with a phone, probably around 2005, I was shocked: could that be a thing now? They're putting cameras in phones? And people can just take a picture of you, like, just going about your life?

For pictures that document history, those would be vetted by a news organization, and there it seems the societal problem is the consolidation of news organizations and their cravenness. I may not have the skills to recognize if a picture of a crowd greeting Kamala Harris's plane is "AI'd" but a reputable venue could assure me it's real. I think the ease of faking photos is just a small part of a larger negative development, where we're flooded by unreliable communications, bots, scammers, and they know that a first impression can color everything that follows even if you later learn it's based on a lie.
posted by Schmucko at 10:29 AM on August 23 [3 favorites]


It feels to me like you could get evidence that a picture is real by getting multiple shots of the same thing from different angles.

But what of the Rashomon Effect?
posted by Rash at 10:37 AM on August 23 [3 favorites]


Instead of "we are fucked," I think a healthier attitude here would be "it's about damn time." Photography has never been trustworthy, not since its origins. The shift happening now is a long overdue correction to the idea that there has ever been anything reliable in "photographic evidence."

It's unfortunate that it's taken a biblical deluge of artificial imagery to drive that point home, and I'm not happy that all signs point to a future where we'll be neck deep in AI-generated sewage, but I don't think that people learning not to trust pictures they see is very high on the list of negative consequences here.
posted by angrynerd at 10:43 AM on August 23 [5 favorites]


It feels to me like you could get evidence that a picture is real by getting multiple shots of the same thing from different angles

After an exhaustive search consisting of opening /r/StableDiffusion/ and Ctrl+F "Kling", I present to you this. Perfect? No. But we are already well past that point without OpenAI releasing Sora to the general public. You can just browse the rest of /r/aivideo if you like - it's a mixed bag but you'll always find something new every day that just makes you say "oh, shit, guess we're here now."

For my money, the thing that is actually interesting in all this is stuff like the StableDiffusion img2vid AnimateDiff workflow that interpolates between existing images/prompts smoothly, lightly reconceptualizing the basic idea each frame but in a smooth, non-stepped fashion. The reason I like it is that it suggests something that was simply never practically feasible with human artists, ever: not merely smoothly morphing the visuals from conceptual object A to B, but morphing all conceptual objects in the scene smoothly between high-level prompts A and B. The house cutaway example (CW: extremely brief low-res female nudity for a few frames) is a particularly good example of this.

Photorealism with a temporal element is nice and all, and it's encouraging to see generative systems beginning to get a vague handle on aspects of reality like object permanence, gravity, reflections, etc. - but from an artistic standpoint I think the conceptual morphs are the only thing exciting here. YMMV.
posted by Ryvar at 10:45 AM on August 23 [2 favorites]


These tools will definitely make people feel photographs should be viewed with some skepticism, but that has always been true.
This is like arguing that because javelins and bows are ancient technology, nothing changes if we start having everyone carry around a machine gun. A quantitative change can become a qualitative one: no darkroom tricks were ever this good, and the most convincing required equipment, time, and skill. Photoshop made that easier but still required enough time and skill to be convincing that it wasn’t common.

Now we’re looking at an era where anyone can very quickly turn out fakes which would have made Starlin’s censors weep, with huge advances in automatically avoiding tells around lighting, reflection, etc., and it’s basically free and close enough to real-time that it can shape public perception. Think about the Trump assassination attempt, or the right-wingers rioting in England, and imagine what that’s going to be like when the first images millions of people see are highly-convincing fakes on social media with something like the shooter wearing a BLM shirt, and the real images which people see later in the news coverage are dismissed as part of the coverup. We really aren’t ready to have the societal conversation about provenance, especially when Google’s “solution” is a trivially-removed metadata entry.
posted by adamsc at 10:55 AM on August 23 [26 favorites]


OpenAI reportedly does have an ability to detect ChatGPT's own outputs but has not deployed it as a service, because it would reduce the value of ChatGPT if anyone other than OpenAI could reliably detect its use

Obligatory note before any educators get too excited: they have this for one model, a model which they control and are intimately aware of all details for the entire software/training/fine-tuning stack. There are tens of thousands of models, and with any open-weights model you can layer as many LoRAs as you like to modify the authorial "voice." If you're hoping for a general-purpose LLM text detection tool: don't. Unless somebody is putting up more funding than the entire R&D budget of all ML companies and open source teams, this is a war of attrition that was lost before it even began, and any detection tools are guaranteed snake oil.

If you are in education and you're not already shifting towards in-classroom work, in-classroom testing, and demonstrations of critical thinking skills: a year ago was the best time to start. Today is the second best.
posted by Ryvar at 10:56 AM on August 23 [11 favorites]


Ronald Reagan could be shown f***ing a pig

Slander. Everyone knows Margaret Thatcher was faithful to Sir Denis.


I actually thought of a different British PM.
posted by TedW at 10:57 AM on August 23 [1 favorite]


As Steve Bannon said in a different context, "The real opposition is the media. And the way to deal with them is to flood the zone with shit." There have always been manipulated photos, but we will now be faced with a flood of them. If a reporter or government official receives a picture or video of police abuse today, they generally investigate. But what if they receive thousands of such pictures or videos, of which only a few are real?
posted by Mr.Know-it-some at 11:02 AM on August 23 [12 favorites]


Mr.Know-it-some, that's what worries me.
We're already in a situation where a lot of people react to inconvenient facts by calling them "fake news".

People being more sceptical about photos as being the reflection of the truth unfortunately won't result in them being more likely to look more closely or carefully, or employ more sophisticated critical thinking and challenge their own biased thinking.

It's more likely that people will just lump any photographic evidence they don't already agree with into the "that must be fake" pile.
posted by Zumbador at 11:11 AM on August 23 [8 favorites]


Okay but I feel like that one at least has been happening for two decades already? Like, the topic of this thread as a game changer over Photoshop/Premiere, sure, granted there's an order of magnitude (or two) lower barrier to entry as far as skill goes, and that will only continue to decrease. But people writing any images/video off as fake if it clashes with their political preconceptions is at least as old as the Internet, and in a broader sense probably as old as language itself.
posted by Ryvar at 11:16 AM on August 23 [1 favorite]


We are not fucked, we just can no longer trust digital imagery. Real film, which is already making a bit of a comeback with Instax cameras et al, will suddenly be relevant again.

The famous example I can think of is...Stalin removing folks from an image as he did a purge.
posted by 922257033c4a0f3cecdbd819a46d626999d1af4a at 11:16 AM on August 23 [4 favorites]


Much as 'photoshopped' became a verb, I recall hearing common allegations that something had been 'airbrushed,' a loose term for changes made either to the film or the print.

Those changes would often be conducted in a photo shop, which, at least in the Southwest, could be inside an adobe hut.
posted by in_lieu_of_fiction at 11:18 AM on August 23 [17 favorites]


I don't think airbrushing or photoshopping are useful comparisons. Both of those take (or took, at the time they were the state of the art) a significant amount of equipment and expertise to do anything even remotely convincing. This tech can fabricate enormously complex and convincing images containing objects that do not otherwise exist, yet look as if they do. And a complete amateur can do it in seconds with a device that literally everyone has in their pockets.

Very soon it will be possible to create networks of devices generating fake imagery in sync from multiple perspectives. Imagine a pack of rightwing instigators sent to a Chicago neighbourhood to document fabricated crimes by Black people against white people. The use-cases are obvious to those shitheads. I think a non-trivial number of people in tech see this as furthering an ideological program (see Bannon, above), not just an inevitable evolution of tech.

I wonder if there's a way to digitally sign unmodified images produced by a device that is incapable of in-device modifications? DSLRs could probably be outfitted with such a thing, at least until it becomes possible to hack that too.
posted by klanawa at 11:32 AM on August 23 [6 favorites]



there is an "authentication standard" for photos but frankly I don't see it being very useful.


It can be very useful, but there is a catch. The idea is you put a private signing key in a tamper-resistant chip in the camera. The camera then uses it to sign its output along with all the metadata, and you can be confident that each image really was taken by that camera at that time with those settings.

However..

Tamper resistance is a matter of degree. If a photojournalist you know is using such a thing, you can have confidence that he does not have the means of extracting the key from that camera to use for signing manipulated images.

But a nation state with enough hardware lab capacity could and would purchase such cameras, extract the keys, and use them for this purpose. So the question I have to ask is whether it's better to let only sovereign states have this kind of ratfucking capacity or whether it's better for all of us to have it.
posted by ocschwar at 11:33 AM on August 23 [4 favorites]
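
The camera-side half of that design, sketched with stand-in data; the Ed25519 calls are real, and the sealed hardware key is exactly the tamper-resistance question raised above:

```python
# In-camera attestation sketch: sign the sensor readout together with the
# metadata, using a key that ideally never leaves the tamper-resistant chip.
# A state actor who extracts device_key can sign anything, hence the catch.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stand-in for the sealed hardware key

sensor_bytes = b"...raw sensor readout..."
metadata = json.dumps(
    {"iso": 100, "exposure": "1/250", "ts": "2024-08-23T08:39:00Z"},
    sort_keys=True,
).encode()
signature = device_key.sign(sensor_bytes + metadata)

# Verifier side: flip one bit in the pixels or the metadata and this raises.
try:
    device_key.public_key().verify(signature, sensor_bytes + metadata)
    print("image and settings attested by this camera")
except InvalidSignature:
    print("tampered, or not from this camera")
```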


Apropos of nothing, my brother was taking believable double-exposures with Polaroid film back in the '70s.
posted by achrise at 11:35 AM on August 23 [1 favorite]


Those changes would often be conducted in a photo shop, which, at least in the Southwest, could be inside an adobe hut.

Adobe Creek, for which Adobe Systems, Inc. was named—it ran around the back of John Warnock's house—also runs right under El Camino around San Antonio. This is not the least likely place for a photo shop to have existed back in the day.
posted by majick at 11:55 AM on August 23 [1 favorite]


The cat has indeed already been out of the bag for some while, but what Google has done here is genetically engineer a strain of mutant cats that breed like rabbits.
posted by flabdablet at 12:08 PM on August 23 [8 favorites]


This is like arguing that because javelins and bows are ancient technology, nothing changes if we start having everyone carry around a machine gun. A quantitative change can become a qualitative one: no darkroom tricks were ever this good, and the most convincing required equipment, time, and skill. Photoshop made that easier but still required enough time and skill to be convincing that it wasn’t common.

A lot of the discussion about quantity and scope of availability made me think of the Maxim gun, its devastating power, and its outsized psychological impact.

And while I think it does make sense for people to understand that photography can be manipulated and has been manipulated for quite a long time, I think we're also in a weird state now for many different reasons. Photorealism is seductive; it LOOKS like things you see in real life in a way that textual journalism or illustrations don't, and so it's harder to feel intuitively that something could be doctored, if it's done well enough. Also, we're just bombarded with so much more media than we were in the past. It's still not super difficult to determine that something looks "unnatural" or off about a piece of media in isolation; it's another thing entirely to be on the alert constantly for everything you see and hear, from every source imaginable.

Finally, "now everyone will know not to trust any images or videos ever!" is not the amazing outcome I think some people want it to be; in the absence of an objective truth, the specter of conspiracy thrives. And with objective truth feeling further and further away, something you can only ever experience by being somewhere in person (an experience that is of course subject to your own inherent unexamined biases anyways), and with everyone being exposed to more and more information that cannot be easily verified in person, we'll just lose the concept of a generally agreed upon truth without anything to replace it. I'm sure you have some inkling of what that feels like already.

Of course, we didn't need the Pixel 9 to arrive at that reality, so maybe in that sense it still isn't the technology that drove the change; it's the destruction of mutual trust that did it. But technology like this does feel like an accelerant.
posted by chrominance at 12:13 PM on August 23 [11 favorites]


I appreciate the responses to my earlier comment, but I feel like people are paying more attention to the second part than the first. Photographs have always been manipulated views of reality, and photographers have done this both consciously and unconsciously. Merely choosing where to stand and which direction to point the lens (and what lens to use) makes a difference in the story a picture tells. Most of us have seen pictures of the Great Pyramids of Giza majestically rising from the desert contrasted with a wider view showing them at the edge of urban sprawl (and a golf course) as shown here. Filters can be added to lenses to change colors and light. Depth of field can be manipulated to emphasize some objects and blur others. Even novice photographers quickly learn how to use forced perspective to create unreal images. And as I mentioned, all these manipulations are done before the shutter is even released. They may not be as sophisticated as the digital manipulations now available, but the idea that photographs are an accurate view of reality with no editorializing on the part of the photographer has always been wrong.
posted by TedW at 12:20 PM on August 23 [4 favorites]


The industry’s proposed AI image watermarking standard is mired in the usual standards slog, and Google’s own much-vaunted AI watermarking system was nowhere in sight when The Verge tried out the Pixel 9’s Magic Editor.

Watermarking or metadata isn't going to prevent misuse, though it should deter most Pixel 9 users from passing off fakes as real.
posted by tommasz at 12:24 PM on August 23


I don't think the existence of chemical film will help. You can pretty easily manipulate a photo on your smartphone, then rephotograph it on film.
posted by Western Infidels at 12:26 PM on August 23 [3 favorites]


there is an "authentication standard" for photos but frankly I don't see it being very useful. In theory it will allow photographers to prove to publishers (and readers) that a photo came from a camera, and what edits (if any) were done to it. Which might have some marginal benefit, but with people willing to believe that Harris' crowds are AI, why would they trust some esoteric signature scheme? it's all run by the illuminati anyway, the moon is a hologram, etc

Photojournalism has been my line of work for the past 20 years, working on assignment for the largest magazines and newspapers in the world, and the above has always been my argument. There have been countless image authenticity initiatives throughout my time in the industry and they all miss the forest for the trees.

Without the trust in institutions, all the pixel verification or whatever the next thing is (there've been blockchain-based proposals, chain of custody-like things, etc.) won't make a bit of difference to the people who already don't believe factual reporting.

And also, all technical methods of verifying photos assume that the situation being photographed has not been altered/influenced before being photographed, and that's a gigantic assumption.

Real film, which is already making a bit of a comeback with Instax cameras et al, will suddenly be relevant again.

The Stalin example has already been brought up, but the fakery goes back to the beginnings of photojournalism in the mid-1800s (lengthily investigated by Errol Morris, in one canonical (heh) example). Whether or not a person is using film has no bearing on the truthfulness of a photo. The film could be altered (in the Stalin example) or the situation could be altered before it is photographed.

The only trust we can have is whether our news institutions uphold the ethical standards they profess.
posted by msbrauer at 12:26 PM on August 23 [21 favorites]


A key difference between photo manipulation then vs now is time to dispersion. In the print era, experienced editors stood in the way of publication. In the internet era, a doctored photo can be seen by millions in a few seconds. It's the combination that's so dangerous.
posted by CheeseDigestsAll at 12:27 PM on August 23 [8 favorites]


It feels like advances in internet and digital technology are paradoxically sending us back to medieval times. For all of the information available to us, like images published on social media, the best way of getting reliable information is directly from another person who ought to know. You can't even be sure that you are talking to a well-known family member when they call you by phone. I live in a small town, like a medieval village. And I have avoided 3 phone and text scams this year because I was able to walk 5 minutes to the bank or service in question and sort the matter out in person. I increasingly feel like the only way to win that arms race is not to play at all. Luddites unite! I'll start us a Facebook page!
posted by SnowRottie at 12:35 PM on August 23 [8 favorites]


Side note: I am really glad we’re doing this particular election without easily accessible tools for deepfaking politicians. Yes, you can do it at home now but it is currently some serious blood, sweat and tears to achieve half-plausible.

…also some other bodily fluids because a lot of the best practices for that were pioneered by generative photoreal porn enthusiasts. [involuntary shiver]

But yeah, that is not going to be the case after this election. Come 2028 [stop that], casual deepfakes are going to be part of the background radiation of our lives and as a lefty ML enthusiast I am so not looking forward to that.
posted by Ryvar at 12:35 PM on August 23 [4 favorites]


The potential corrosive consequences here are huge.

In effect, legal and bureaucratic systems still treat photos as credible evidence. As we continue into the age where anybody can trivially fake anything, these systems will be paralyzed by uncertainty or exploited by the unscrupulous.
posted by sindark at 12:39 PM on August 23 [4 favorites]


Watermarks and crypto serials can also endanger whistleblowers and journalists. Would you break the news if you knew someone could track the photo back to one specific device? They could also lead to automated suppression and censorship, similar to what Youtube pre-emptively does with alleged copyright infringement, with no due process.

Digital cameras already have a unique footprint in how their sensor response varies across the pixels. Jam up the contrast on a dark photo and you'll see the unique identifier that all images you take mathematically hearken to.
posted by in_lieu_of_fiction at 12:43 PM on August 23 [12 favorites]
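
That footprint is what the forensics literature calls photo-response non-uniformity (PRNU). A rough sketch of the idea, assuming same-sized grayscale shots from one camera; real pipelines use much more careful wavelet denoising and statistics:

```python
# Rough PRNU-style fingerprinting: the residual left after denoising is
# dominated by the sensor's fixed pattern, so averaging residuals over many
# photos from one camera estimates its fingerprint; correlation tests a match.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img: np.ndarray) -> np.ndarray:
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=1.5)  # crude stand-in denoiser

def fingerprint(images: list[np.ndarray]) -> np.ndarray:
    return np.mean([residual(im) for im in images], axis=0)

def same_sensor(img: np.ndarray, fp: np.ndarray, threshold: float = 0.02) -> bool:
    corr = np.corrcoef(residual(img).ravel(), fp.ravel())[0, 1]
    return corr > threshold  # residuals from other cameras correlate near zero

# usage: fp = fingerprint(known_shots); same_sensor(questioned_shot, fp)
```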


This is the price we pay for digitization I suppose. It's all just ones and zeros now. I take small consolation in thinking that this technology won't actually increase the demand for fake images, even though it will inevitably flood the supply side.
posted by grog at 12:47 PM on August 23 [2 favorites]


real film is only useful insofar as you have the physical negative in front of you

And a trusted chain of custody by people everyone trusts not to insert an artificially generated negative.
posted by sammyo at 1:02 PM on August 23 [5 favorites]


The potential corrosive consequences here are huge.

In effect, legal and bureaucratic systems still treat photos as credible evidence. As we continue into the age where anybody can trivially fake anything, these systems will be paralyzed by uncertainty or exploited by the unscrupulous.


Technically, this is still a thought exercise. We can't do evidence-level fakes with auto-software just yet.

All the example images from the article read as clear fakes to the trained eye.

As other people have noted upthread, photo manipulation has been available for a very long time.

And, as also noted, the more salient point is how much easier it is for the general public to get to a pretty-good fake.

This could be a problem in terms of false, manufactured memes getting out of control and gaining traction in internet culture, but probably won't be able to be used in court.

*probably won't be able to be used in court yet.
posted by ishmael at 1:02 PM on August 23 [1 favorite]


"We can't do evidence-level fakes with auto-software just yet."

I'm thinking in particular about stuff like landlord/tenant complaints. Photos are often the only evidence tenants can provide that repairs are needed or that a unit isn't being kept in a habitable state.

If any tenant can cook up phoney images of problems, landlords can use that fact to discredit real evidence. Of course, they can also cook up completely phoney contrary evidence.

How many other quasi-judicial processes outside the courts and police treat photos as evidence?
posted by sindark at 1:08 PM on August 23 [6 favorites]


>We are not fucked, we just can no longer trust digital imagery. Real film, which is already making a bit of a comeback with Instax cameras et al, will suddenly be relevant again.

Not to harp on this, but how will you be showing people your real film? Will it be in the Sunday paper? Mentioned in TV Guide? What non-digital form of publishing has the power to reach pretty much everybody, pretty much immediately?

>For all of the information available to us, like images published on social media, the best way of getting reliable information is directly from another person who ought to know.

Another person who you'll be speaking to directly, and not in some recorded form, who knows what a politician said in a speech they gave in another state, or what the news is from a war zone on a different continent?

We're so fucked that people aren't even able to grasp how fucked we are. We have outlived the age of verifiable information in mass media.
posted by Sing Or Swim at 1:22 PM on August 23 [6 favorites]


Weird that they're only yelling about it now. Magic Editor has been out for a year now, and while it couldn't just invent shit out of whole cloth before, it totally could be used to dramatically change context. I made a little kid in my niece's kindergarten graduation picture not be picking his nose. Then I disappeared him entirely. Then I took out the entire rest of the class and turned her into a giant. It was amusing.

Similarly, the feature that can fake group shots by inserting a person after the fact isn't much different than the one that let you swap faces from a series of images so that everyone has their eyes open or whatever. Or, more amusingly, using the worst version of everyone instead of the best. Niece also finds that hilarious.

My hope is that by demonstrating these things to her now, she will learn not to take any single picture or other piece of evidence as definitive proof, but really it's more about just having fun. That it happens to provide a useful lesson about the need for skepticism is merely a bonus.
posted by wierdo at 2:00 PM on August 23 [2 favorites]


We have outlived the age of verifiable information in mass media.

That has literally never existed. The first truly mass medium was the printed Bible. Newspapers have throughout history peddled salacious lies. In one relatively recent era we had a name for it: yellow journalism. From essentially the moment of the invention of the photograph, retouching has been all but standard. Sometimes quite dramatically. Films have always been 24 lies a second with all their various effects. Captain Disillusion made a living for a while debunking stupid viral videos that were all over the place starting basically from the moment Internet video became a thing.

What I find more concerning (and I'm not really that concerned about it) is how it is becoming easier to fake actual people's voices in a very realistic way. It is becoming increasingly common for the kind of stuff that's funny when There I Ruined It uses it to make Johnny Cash sing Barbie Girl to be used to scam people and companies by making the scammer sound exactly like some muckety muck and tell somebody in the finance department that they need to send a wire transfer right now so the million dollar deal doesn't fall through.

In short, mass media has at best only rarely been reliable, so AI tools aren't really changing much there. They can make financial fraud way easier, though, and that's worth at least some concern.
posted by wierdo at 2:14 PM on August 23 [2 favorites]


"If any tenant can cook up phoney images of problems, landlords can use that fact to discredit real evidence. Of course, they can also cook up completely phoney contrary evidence."

The usual defense for this kind of thing in the law is a very steep "find out" if it is discovered that there has been any "fuck around". Fake evidence tends to fall apart quickly when it doesn't have real context: additional witnesses also willing to perjure themselves, cell phone records, etc.
posted by kaibutsu at 2:28 PM on August 23 [5 favorites]


We've been able to create fake photographs during almost the entire existence of film. Now, it's just easier. The only real measure of a photograph being bogus or legit is whether we trust the photographer or the institutions that vouch for them. News organizations face this issue daily and the solution is to publicly disavow the faker. It doesn't always work but it's what we've got. We cannot trust anything digital on its own, because if it's digital, it's ripe for fakery. Not that it's easy, but in all instances, doable.
posted by JustSayNoDawg at 3:04 PM on August 23


We are not fucked, we just can no longer trust digital imagery. Real film, which is already making a bit of a comeback with Instax cameras et al, will suddenly be relevant again.

Film recorders have existed for decades. They aren't consumer devices, but if you want a picture of Kamala Harris high fiving Osama bin Laden on 35mm film, the technology exists.
posted by It's Never Lurgi at 3:07 PM on August 23 [2 favorites]


Also does anyone else find the ads about casually manipulating photos of family and friends to be jarring? Even if the capability has been there, I haven't heard of most people cleaning up their personal photos except in exceptional cases like engagement photos taken in public.

If people end up using this feature regularly, our photos will represent an idealized, more minimalist view of the events of our lives, with fewer tourists in the background and other subtle and less-subtle changes that tend to make us look a bit more like the main character.


This is where I really feel despair. The easy ability to edit anything less-than-perfect out of any image for any reason is going to rob us of a lot of our flawed human-ness and life is going to become a lot more boring. Remember in the past, if you hated your ex, you just cut or cropped them out of a photo? That told a story, and it was true of your life and it was messy but kind of funny and might prompt some introspection some years down the road when you revisit that album. Now you can be the star of every scene you're in (eyes open, angles flattering) and with no undesirable characters in the frame (even if you were in love at the time, and they were by your side for the whole vacation).

The other suggested use for this technology, which is in one of the Google Pixel commercials, is an adult tossing a small child up in the air. With Pixel 9, you can edit the shot so that the child isn't just 1-2 feet away from you in the air--this person edited the shot to show the child a full 6-8 feet in the air, literally flying out of the arms of the adult below. We all know that's funny and fanciful, but to take it at face value, a child should not be tossed high up into the air at all, ever!! Don't set that benchmark! There are a lot of showboat dumbasses out there that will take things like this as challenges. A lot of people are about to be in danger if they're competing with fictional throwing/jumping/diving/climbing distances as portrayed in photos going forward.
posted by knotty knots at 3:12 PM on August 23 [4 favorites]


Remember in the past, if you hated your ex, you just cut or cropped them out of a photo?

Now you can spend 10 seconds and add them into a photo with drugs and guns and a dead prostitute and let the police or their work harass them.
posted by The_Vegetables at 3:23 PM on August 23 [5 favorites]


I have heard that people are also creating books that have made up stuff in them, and some readers don't realize it and take these books very seriously. Some of the books even contain pictures depicting events that never happened. Some believe this has been happening for a number of years now.
posted by dsword at 3:56 PM on August 23 [1 favorite]


I'm interested in how reliable the chain or web of custody will be. You've got a picture for the rental car company of a dent that was already there in the fender. Can you prove what day you took it on?
posted by Nancy Lebovitz at 4:38 PM on August 23 [1 favorite]
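
Not from the file alone; an EXIF date is trivially editable. What you can do is commit to the photo's hash with an independent witness at capture time. A minimal sketch; the final submission step is left hypothetical, though RFC 3161 timestamping authorities and services like OpenTimestamps exist for exactly this:

```python
# Proving "I had this exact photo by date X": hash it, then lodge the digest
# somewhere independently timestamped. This shows possession by a date; it
# says nothing about whether the photo itself is genuine.
import hashlib

def photo_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

digest = photo_digest("rental_car_dent.jpg")
print(digest)
# submit_to_timestamp_service(digest)  # hypothetical call; any dated witness works
```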


I’ve encountered a number of people saying “well actually, Photoshop has existed for years,” but I think there is a profound and meaningful difference between “an expert can produce a reasonably convincing fake in an hour or two” and “anyone can trivially produce a real-looking fake in a minute or two.” It’s one of those situations where a quantitative difference becomes a qualitative difference, like comparing Napster to taping songs as they play on the radio.
I have heard that people are also creating books that have made up stuff in them, and some readers don't realize it and take these books very seriously.
just the other day I saw a story about someone being poisoned by mushrooms a generative-ML-written book had told them were safe to eat (the provenance of the book was not known until it was too late)
posted by DoctorFedora at 4:49 PM on August 23 [11 favorites]


Any word on the cat vis a vis the bag
posted by Ray Walston, Luck Dragon at 5:06 PM on August 23 [2 favorites]




any word on the cat vis a vis the bag

Out, but it’s not a bag, it’s a human body, and it’s not a cat, but a chest burster. We’re at the “oh good god that was horrifying, but what a wee little guy it was, we can probably just squish that in a ventilation shaft” stage, imo
posted by Jon Mitchell at 6:44 PM on August 23 [1 favorite]


Now we’re looking at an era where anyone can very quickly turn out fakes which would have made Stalin’s censors weep

Infinity Crusade beat them to it.
posted by star gentle uterus at 6:51 PM on August 23


Which is funny because a lot of test prompts are like "what if X but with xenomorph face hugger"
posted by credulous at 7:33 PM on August 23


photographs proved something happened.

I mean, I wish. But we've been struggling against oil industry damage to the environment for more than a generation, and I wish this had been true for the past 40 years

The trick they've pulled is to create a 'certified' class of observation, as opposed to a 'non-certified' class, and non-certified is the default.

Air pollution regulations are filled with this stuff.

So luckily, we have a long-standing precedent for how the powerful will use uncertainty to delegitimize observations. Study how the oil industry does it.
posted by eustatic at 8:37 PM on August 23 [4 favorites]


Manipulating photographic images from the set-up to the final print is nothing new, of course. For example, the monster shot from pre-digital era photography: simple and very effective distortion created by dropping the flash to waist height when taking the pic. The resultant changes in shadowing make the subject's face look threatening. No more alteration required.

But the digital era means that the signal-to-noise ratio and the effectiveness of our experience-based reality filters are taking a hard turn in the wrong direction. The ability we now possess to efficiently manipulate single pixels at will, and with minimal skill, means that any media or record that relies on a digitised form at any point in its production, storage, and distribution is an unreliable witness, unless we can come up with a whole new robust system of provenance and chain of custody, which I doubt.

I'm sure governments are rubbing their hands in anticipation of being able to more easily dismiss such evidence as "manipulated".

And private interests. Maybe even more so.
posted by Pouteria at 9:32 PM on August 23 [1 favorite]
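
For a sense of what a "robust system of provenance and chain of custody" could even look like, here is a minimal sketch (a toy only, assuming the pyca/cryptography package, with freshly generated keys and short byte strings standing in for real image data; real content-credential efforts such as C2PA are roughly this shape, writ much larger). Each party signs the current image hash chained to the previous signature, so a step can't be dropped or reordered without verification failing:

import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_step(key: ed25519.Ed25519PrivateKey, image: bytes, prev_sig: bytes) -> bytes:
    """Sign this image state chained to the previous link in the custody chain."""
    return key.sign(hashlib.sha256(image).digest() + prev_sig)

camera_key = ed25519.Ed25519PrivateKey.generate()  # ideally burned into the sensor
editor_key = ed25519.Ed25519PrivateKey.generate()  # held by the editing app

original = b"raw sensor data"       # stand-in for real image bytes
edited = b"cropped, colour-graded"  # stand-in for real image bytes

sig1 = sign_step(camera_key, original, b"")
sig2 = sign_step(editor_key, edited, sig1)

# A verifier holding the public keys and both image states replays the
# chain; verify() raises InvalidSignature if anything was tampered with.
camera_key.public_key().verify(sig1, hashlib.sha256(original).digest() + b"")
editor_key.public_key().verify(sig2, hashlib.sha256(edited).digest() + sig1)

The hard part isn't the cryptography, it's the custody of the keys and getting every camera, app, and publisher to play along, which is where the doubt above is well placed.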


Now you can spend 10 seconds and add them into a photo with drugs and guns and a dead prostitute and let the police or their work harass them

The way to counter that is to make it easier still, perhaps making "add drugs, guns and a dead body" a standard filter so that putting those in photos doesn't even require leaving the camera app, then giving police the tools to detect watermarks that the standard filters encode into all the pixel regions that they modify.

The harm done by mass-market generative photo manipulators comes from the way they allow vast numbers of people with no technical skills to make a convincing fake. Adding even the tiny technical speed bump of a watermark whose removal requires at least an awareness that watermarks are a thing would reduce the new problem back to roughly its existing proportions: if convincing fakes made with zero technical skill remain reliably detectable as such, much of the harm is mitigated.
posted by flabdablet at 9:46 PM on August 23 [5 favorites]
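
A toy version of that watermark scheme, for concreteness (everything here is made up for illustration: the function names, the one-bit mark, the grayscale test image; a least-significant-bit stamp like this is trivially stripped by re-encoding, which is exactly why it's only a speed bump, and real deployments would use robust watermarks):

import numpy as np

MARK = 1  # toy one-bit mark written by the hypothetical "standard filter"

def stamp_region(img: np.ndarray, edited: np.ndarray) -> np.ndarray:
    """Force the least significant bit of every filter-edited pixel to MARK.
    `img` is a uint8 image array; `edited` is True where the filter drew."""
    out = img.copy()
    out[edited] = (out[edited] & 0xFE) | MARK
    return out

def region_looks_stamped(img: np.ndarray, region: np.ndarray) -> bool:
    """Natural photos have roughly 50/50 LSBs; a region whose LSBs are
    nearly all MARK is strong evidence the stamping filter touched it."""
    lsb = img[region] & 1
    return lsb.size > 0 and float((lsb == MARK).mean()) > 0.99

# Toy usage: stamp a region as if a filter had drawn into it, then check it.
photo = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros_like(photo, dtype=bool)
mask[10:30, 10:30] = True
tampered = stamp_region(photo, mask)
print(region_looks_stamped(tampered, mask))  # True
print(region_looks_stamped(photo, mask))     # almost certainly False

The point isn't that the mark is hard to remove; it's that removing it takes more than zero awareness, which is all the mitigation argued for above.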


Guns aren't new, people have been killing each other since before people were even people!

But this may well be like the difference between industrialized warfare and Greek Hoplite armies squaring off.
posted by VTX at 10:56 PM on August 23 [2 favorites]


I’ve encountered a number of people saying “well actually, Photoshop has existed for years,” but I think there is a profound and meaningful difference between “an expert can produce a reasonably convincing fake in an hour or two”

Sure, and there is also a profound difference between knowing that and saying "Everyone who is reading this article in 2024 grew up in an era where a photograph was, by default, a representation of the truth."
posted by Tell Me No Lies at 9:20 AM on August 24 [3 favorites]


quantitative difference becomes a qualitative difference

this is the most important takeaway
posted by lescour at 12:37 PM on August 24 [1 favorite]


All the example images from the article read as clear fakes to the trained eye.

And all the GPT-3.5 and GPT-4 slop that is obviously generated by those godawful models reads just as blatantly. The problem is that untrained, inexpert people will be making decisions—including low-level legal type shit—based on their lack of technical expertise. That's not a new phenomenon: most people have very little technical expertise, occasionally less than is necessary to navigate their everyday lives let alone important decision-making processes.

Some of us might have the basic technical literacy to do things like examine pictures for extra fingers or weird shadows, to be skeeved out by glaringly scammy spam and phishing attempts, to ask questions like "who made/wrote this, how, and why?" A ton of people just don't, and never will have that technical or media literacy.

That's what makes mass-market availability of "good enough" infill on cheap, widely-available hardware a daunting prospect. It's a reminder that, while it kinda sucks now, it doesn't kinda suck as much as expected. And it'll suck less later.
posted by majick at 3:37 PM on August 24 [3 favorites]


Google's product manager for the Pixel Camera: altering photos to match your subjective memory is good actually:
To Reynolds and the broader Pixel Camera team, it's not necessarily the photo that's important, but your memory. If you want to fudge with the image by altering the color of the sky to be sunset or use the new Autoframe feature to recompose the image so that it's more striking, then that's OK.

“It's about what you're remembering," he says. “When you define a memory as that there is a fallibility to it: You could have a true and perfect representation of a moment that felt completely fake and completely wrong. What some of these edits do is help you create the moment that is the way you remember it, that's authentic to your memory and to the greater context, but maybe isn't authentic to a particular millisecond.”...

Raskar’s new prediction? A “non-camera.” No sensor, no lens, no flash, just a single button. All it does is capture your GPS location and time of day and you can say, “Hey, I’m with my family and friends” in front of the Eiffel Tower, and it will compose the picture since it knows the time, date, location, and weather.
posted by BungaDunga at 11:20 AM on August 25


only vaguely related but interesting: Inside the company transferring digital film onto 35mm
posted by BungaDunga at 12:00 PM on August 25


The problem is that untrained, inexpert people will be making decisions—including low-level legal type shit—based on their lack of technical expertise

Yes, spend a few moments on Facebook these days and you'll forever lose any faith in most people's visual literacy.

I mean, a "photograph" of a proud boy standing next to his creation, a flying helicopter made completely of full Coca-Cola bottles. Even the rotors!
posted by Zumbador at 12:11 PM on August 25 [1 favorite]


Hello, you’re here because you said AI image editing was just like Photoshop
Let’s put this sloppy, bad-faith argument to rest.

“We’ve had Photoshop for 35 years” is a common response to rebut concerns about generative AI, and you’ve landed here because you’ve made that argument in a comment thread or on social media.

There are countless reasons to be concerned about how AI image editing and generation tools will impact the trust we place in photographs and how that trust (or lack thereof) could be used to manipulate us. That’s bad, and we know it’s already happening. So, to save us all time and energy, and from wearing our fingers down to nubs by constantly responding to the same handful of arguments, we’re just putting them all in a list in this post.
posted by rambling wanderlust at 2:35 AM on August 27 [4 favorites]


One last one from Ars Technica...

Due to AI fakes, the “deep doubt” era is here
As AI deepfakes sow doubt in legitimate media, anyone can claim something didn't happen.

The concept behind "deep doubt" isn't new, but its real-world impact is becoming increasingly apparent. Since the term "deepfake" first surfaced in 2017, we've seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump's baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried "AI" again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term "liar's dividend" in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
posted by rambling wanderlust at 1:36 PM on September 18 [2 favorites]




This thread has been archived and is closed to new comments