Humans are Biased, Generative AI is Even Worse
June 14, 2023 6:00 AM

"Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world." An analysis by Leonardo Nicoletti and Dina Bass for Bloomberg Technology + Equality, with striking visualizations.

The article cites a March 2023 study by Sasha Luccioni et al. (mostly researchers from the major AI company Hugging Face) that found similar effects in Dall-E 2 as well as Stable Diffusion. As part of their study they created three online tools for examining these models:

Diffusion Bias Explorer: allows you to quickly view images for a given combination of adjective, profession, and image generation model.

Average Face Comparison Tool: automatically generates, aligns, and combines faces, showing the average face generated by the models for a given adjective and profession.

BoVW Nearest Neighbors Explorer for Identities and Professions: "Users can choose a specific image as a starting point—for example, a photo of a Black woman generated by a specific model—and explore [colors and descriptive words related to that image]...This is especially useful in detecting stereotypical content, such as the abundance of Native American headdresses or professions that have a predominance of given colors (like firefighters in red suits and nurses in blue scrubs)."
posted by jedicus (44 comments total) 29 users marked this as a favorite
 
Paraphrasing a prior comment: When all you have is a hammer, everything looks like a nail. When all you are is a hammer…

From the porridge bird derail: use one of the most popular LoRAs trained on anime girls to generate porridge birds (with anime at a -1.4 weighted prompt) and it eventually (fourth image) generates an anime girl with birdlike angel wings serving porridge to birds.

(Imgur Gallery snafu, links for the batches here).

So yeah, to a large extent training biases are always going to be amplified. Asking the ghostmix LoRA to not draw anime girls is like building a butter-passing robot and ordering it to do anything else.

Slightly more on topic: it’s currently private but I recall an /r/StableDiffusion thread where somebody had various types of “professor” as a heavily weighted prompt and the results were so much worse in an old white dude way than anything in the Bloomberg link. Except the gender studies professor, of course. Even the regulars found it unsettling.

Of related concern, at least for me, is that while researchers working on image generation are aware of these issues and trying to correct for them, if you dive into the user communities you quickly find that overwhelmingly the most cutting-edge results are coming from basement dwellers batch converting pornography collections to anime. To the point where if you want to really achieve the best results you’re going to be dipping into (and hopefully out of) some pretty gross Discords.

I’m all for sex-positivity and normalizing sexuality but this all seems…really not good. For anyone.

The only positive thing I can report is seeing way less underage/borderline shit than you’d expect from anything anime/manga-adjacent. Or it’s at least off in darker corners than where the real experts at applying image generation hang out. You take your victories where you can, I guess.

If your takeaway from all the above is that the early intersection of image generation and human society is not proceeding terribly well, then…yeah. Agreed.
posted by Ryvar at 7:12 AM on June 14, 2023 [9 favorites]




As an anecdote, I told SD to generate hundreds of "beautiful" portraits and the results were exclusively young caucasian women with a very limited set of facial features.
posted by seanmpuckett at 7:27 AM on June 14, 2023 [6 favorites]


If you really want to twist the knife try adding “exotic” as a prompt. Or maybe don’t.
posted by Ryvar at 7:32 AM on June 14, 2023 [4 favorites]


This is a well-known problem in language models, going back to at least 2017, and a ton of work has been done to reduce the problem in language models, though they are still far from perfect (see, e.g., this paper comparing GPT-3.5 and 4 in a synthetic test of bias in the resume screening context).

I was disappointed to see that the research teams and companies producing image generation models do not seem to have started from the assumption that their input data would be badly biased, and thus did not bake bias reduction in from the beginning. Dall-E, the first successful general purpose image generation model, came out in early 2021, years after this issue was identified in the language context.
posted by jedicus at 7:33 AM on June 14, 2023 [9 favorites]


How are the stable diffusion models trained? The way I would assume it's done is lots of cheap labor that takes input images and assigns them adjectives (marking images as assertive or compassionate or beautiful). Like if you got a CAPTCHA that said: "choose the images that are angry". That's immediately just going to encode existing cultural biases into the model, and I'm not sure if there's a way around it. You *have* to have human intervention at the beginning to train the model to recognize certain attributes in images, since only people can tell you whether a picture is a picture of X or Y with Z attributes during the training stage. It's literally just building a bias generator in that case, and I don't know how you'd get around it.
posted by dis_integration at 7:45 AM on June 14, 2023 [2 favorites]


How horrible this is. It feels like being trapped in an unbreakable net that just gets drawn tighter - the world is so cruel, people make so much money from the cruelty, everyone above a certain percentile of power and wealth de facto supports the cruelty.

I live in a working class, multiracial neighborhood which has, for the US, quite a lot of Muslim people. There are a lot of Native people here too. Seeing that article full of pictures like the people I live with and seeing them labeled "housekeeper" and "terrorist", knowing as I do that my actual neighbors already face so much hardship - it really does make you feel the boot grinding the human face forever.

This will never, never end. It will get worse. People like Elon Musk and the various shadowy behind the scenes rich people are in the saddle, they despise us and don't care even the tiniest amount for our wellbeing, they work to extract our very last pennies and moments of labor and then let us die in their hospitals and in their prisons and on their streets. They like it when we suffer. They think it's funny and appropriate and great. They might as well be pulling the wings off flies, it's about that level of indifferent cruelty.

They don't think about this stuff because, on balance, they don't care. All my life I've read about the bias in technology long before technology massively impacts society. I've read about environmental and social consequences of policy long before they arrive. You don't need to be a brain genius to see what's coming; you just need to see the lives of those around you as full and meaningful like your own.
posted by Frowner at 7:52 AM on June 14, 2023 [60 favorites]


Isn't this highly dependent on the model used, the prompt used and the negative prompt used? With negative prompts, it's entirely possible to avoid having the software create something you're trying to avoid.
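
(For concreteness, a minimal sketch of that workflow using the Hugging Face diffusers library; the checkpoint name and prompts are placeholders. Note that a negative prompt only suppresses what you explicitly name, so it is a per-image workaround rather than a fix for the underlying skew.)

```python
# Minimal sketch: steer a Stable Diffusion generation away from unwanted content
# with a negative prompt (diffusers API; model ID and prompts are illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="portrait photo of a judge in a courtroom",
    negative_prompt="anime, cartoon, illustration",  # concepts to push away from
    guidance_scale=7.5,
    num_inference_steps=30,
).images[0]
image.save("judge.png")
```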
posted by emelenjr at 7:56 AM on June 14, 2023


The way I would assume it's done is lots of cheap labor that takes input images and assigns them adjectives

No, they generally use stock photo libraries and/or the LAION datasets, which basically scraped the web for images with alt-text. LAION has a lot of problems (e.g. private medical photos in the dataset).

If you accept the training data as "ground truth", you can (probably) reduce bias by sampling the data to reflect the real world across a range of factors (e.g. race, ethnicity, national origin, gender, religion, professions, emotional valence, and combinations thereof).
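
(A rough sketch of that resampling idea, assuming you have, or can infer, a group label per training image and a target mix you want the sampler to reproduce; every name and number here is hypothetical.)

```python
# Reweight training images so each group is sampled at its target share
# rather than its (skewed) share in the scraped data.
from collections import Counter
from torch.utils.data import WeightedRandomSampler

group_labels = ["A", "A", "A", "A", "B", "B", "C"]   # one inferred group per image
target = {"A": 0.4, "B": 0.4, "C": 0.2}              # desired share of each group

counts = Counter(group_labels)
n = len(group_labels)
# an image's weight is (target share of its group) / (current share of its group)
weights = [target[g] / (counts[g] / n) for g in group_labels]

sampler = WeightedRandomSampler(weights, num_samples=n, replacement=True)
# pass sampler=sampler to the DataLoader used for (fine-)tuning
```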

Another (not necessarily mutually exclusive) approach is to make tests like these part of the training process, so that the model is penalized for over- or under-representing groups in its outputs.
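
(And a hedged sketch of that second approach: score a batch of generated images with an attribute classifier and penalize the gap between the observed group mix and a reference mix. The classifier, the reference distribution, and the loss weighting are all assumptions, not something these models actually ship with.)

```python
# Add a KL penalty between the group mix observed in generated images
# and a reference distribution, on top of the usual training loss.
import torch

def representation_penalty(attribute_probs: torch.Tensor,
                           reference: torch.Tensor) -> torch.Tensor:
    """attribute_probs: (batch, n_groups) soft group predictions for generated images.
    reference: (n_groups,) distribution the outputs should roughly match."""
    observed = attribute_probs.mean(dim=0).clamp_min(1e-8)   # empirical mix in the batch
    reference = reference.clamp_min(1e-8)
    return torch.sum(observed * (observed / reference).log())  # KL(observed || reference)

# total_loss = diffusion_loss + lambda_bias * representation_penalty(probs, ref)
```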

These are not novel or even especially difficult techniques, but they require researchers and companies to take a moment to think (to "see the lives of those around you as full and meaningful like your own" as Frowner just said) and not to rush to publication or commercialization as fast as possible.
posted by jedicus at 7:57 AM on June 14, 2023 [7 favorites]


How are the stable diffusion models trained?

The basic stable diffusion models (1.4, 1.5) were trained using the LAION dataset, which, on preview, is what jedicus said. For example, here's everything associated with metafilter (the aesthetic score should be set to 5 with the content filters turned off, since those don't seem to be included in the URL)
posted by simmering octagon at 8:02 AM on June 14, 2023


I follow a few AI accounts on Instagram and made a wisecrack that a crowded beach scene looked like it was “whites only”, only to have the very next commenter call me a social justice warrior. I'm still trying to get my head around the idea of someone defending an AI against wokeism.
posted by brachiopod at 8:07 AM on June 14, 2023 [14 favorites]


How are the stable diffusion models trained?
Jedicus covered it, detailed rundown by Andy Baio here. Basically StabilityAI - makers of the now thoroughly antiquated DreamStudio generative art app - funded LAION to create three training datasets out of CommonCrawl. Pinterest is the heaviest influence at 8.5%.

BUT that’s just the baseline, which is crap. In practice everyone who gets past installing AUTOMATIC1111 (the standard browser-based workbench package; it basically sets up an SD server on your PC that uses your graphics card) is using at least one LoRA - which, for the unfamiliar, are a lot like Photoshop filters you can stack on top of the base model, and something a very dedicated individual or small team can train on their own in a couple of days.
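
(Roughly what “stacking” a LoRA on a base checkpoint looks like in recent versions of the diffusers library, for anyone curious; the checkpoint and LoRA filenames below are placeholders.)

```python
# Load a base Stable Diffusion checkpoint, then apply a LoRA: a small set of
# low-rank weight deltas trained on top of the base model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("./loras", weight_name="some_style_lora.safetensors")

image = pipe("a colossal jellyfish looming over a burning city").images[0]
image.save("jellypocalypse.png")
```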

The most popular repository for SD LoRAs is Civitai, which may contain some NSFW images. If your initial impression is “so it’s all just anime girls?”: yes. It is mostly just anime girls. Or arbitrary blends of photoreal and anime.

Which seems worrying.
posted by Ryvar at 8:13 AM on June 14, 2023 [1 favorite]


Humans are Biased, Generative AI is Even Worse

Or perhaps humans tell ourselves stories gauging how biased we are, and Generative AI simply provides a better mirror.

In some ways it's like raising children.
posted by Tell Me No Lies at 8:36 AM on June 14, 2023 [5 favorites]


> The most popular repository for SD LoRAs is Civitai, which may contain some NSFW images.

Just to reinforce for the curious, 'may' is a significant understatement. NSFW and intentionally pornography-focused LoRAs are extremely common on Civitai.
posted by Kikujiro's Summer at 10:01 AM on June 14, 2023 [1 favorite]


This thread might also be interested in this preprint, which notes that:
"... use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models."
So, models trained on uncurated data are going to be an evolutionary dead end.
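
(A toy numpy illustration of the effect in the paper's simplest setting: repeatedly fit a Gaussian to samples drawn from the previous generation's fit, and the spread collapses, i.e. the tails vanish.)

```python
# Each "generation" is trained only on the previous generation's synthetic output.
import numpy as np

rng = np.random.default_rng(0)
n = 50                         # small samples make the collapse visible quickly
mu, sigma = 0.0, 1.0           # the "real" distribution, generation 0

for gen in range(1, 301):
    synthetic = rng.normal(mu, sigma, size=n)        # sample from the current model
    mu, sigma = synthetic.mean(), synthetic.std()    # refit the model on its own output
    if gen % 50 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.4f}")
# sigma drifts toward zero: the fitted distribution loses its tails entirely
```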
posted by mhoye at 10:09 AM on June 14, 2023 [5 favorites]


Just to reinforce for the curious, 'may' is a significant understatement.

Apologies if I overshot on brevity, I’m trying to write shorter comments. NSFW LoRAs are supposed to be filtered from the landing page, NSFW sample images in SFW LoRAs are supposed to be blurred for viewers not logged in, and explicitly hardcore NSFW LoRAs seem to be filtered out of most default search results. So the initial click to glance at the site should be safe, drastically less so after that.

And FWIW 99.9% of what I use SD for is colossal jellyfish eating burning cities. A series of images forever stuck in my head I’d feel stupid calling in a favor with a concept artist for. Someone tipped me off on Discord that the most popular hardcore NSFW photoreal LoRA was “just better at everything, period” and my experience generating images with no humans ever since strongly agrees.

The .1% is porridge birds.
posted by Ryvar at 10:57 AM on June 14, 2023 [2 favorites]


This isn't the main content of the article but:

More than 31,000 people, including SpaceX CEO Elon Musk and Apple co-founder Steve Wozniak, have signed a petition posted in March calling for a six-month pause in AI research and development to answer questions around regulation and ethics. (Less than a month later, Musk announced he would launch a new AI chatbot.)

He's just the worst, the absolute worst. "Everyone else can pause while I get ahead."
posted by subdee at 11:36 AM on June 14, 2023 [7 favorites]


Is it understood why the AI models amplify prejudiced input data? I skimmed the first article but couldn't find the answer; it mostly talks about LAION being an internet data source, but that's preexisting bias. It doesn't explain why training a neural network would increase that bias and push it to extremes.
posted by polymodus at 11:58 AM on June 14, 2023


This will never, never end. It will get worse.
We live in capitalism, its power seems inescapable — but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.

Ursula K. Le Guin, 2014
I totally get the despair, but we shouldn't. The dangers posed by AI are not an inevitable consequence of the technology, nor are they forever. They are a consequence of unfettered capitalism, and it is within us to change the way our society is structured, just as we have before. Not that it will be easy, but we are not doomed to cede to capital the use of tools of automation to dominate our minds as it has our livelihoods.
posted by biogeo at 11:59 AM on June 14, 2023 [17 favorites]


whenever someone gets excited about an image or text produced by machine learning, i immediately think of this embarrassing moment from a hayao miyazaki documentary seven years ago
... seven years ago, it became a metaphor for where we are now.

posted by Lanark at 2:09 PM on June 14, 2023 [9 favorites]


I might get the words wrong, but a Cynthia Heimel quote that has never left me is "we all need to stop fawning over rich people and treat them with the contempt they deserve." I hate them. I hate them so much. I hate this garbage that makes society worse being marketed as a cool fun useful thing to play with. No. Stop using it for "fun." Stop acting like what it's doing is anything less than alarming and fascist. It's reflecting our society and our society needs help.
posted by petiteviolette at 2:59 PM on June 14, 2023 [10 favorites]


I totally get the despair, but we shouldn't. The dangers posed by AI are not an inevitable consequence of the technology, not are they forever.

In fact, most of those dangers are being wildly overblown _by the people building them_ to inflate their pre-IPO corporate valuations.

Hold fast to optimism and courage, if for no other reason than to spite those specific assholes in particular.
posted by mhoye at 3:22 PM on June 14, 2023 [4 favorites]


petition posted in March calling for a six-month pause in AI research and development to answer questions around regulation and ethics.

It's easy to sign petitions when you know what is being asked for is almost literally impossible.
posted by Tell Me No Lies at 4:10 PM on June 14, 2023 [1 favorite]


I've always liked Alastair Reynolds' position that the time to be impressed with AI is when it tells you to fuck off.
posted by East14thTaco at 4:12 PM on June 14, 2023


If it helps: among the image generators, Stable Diffusion is the open source one. The model is under the Creative ML OpenRAIL-M license which is similar to most open source licenses and comes with a perpetual, irrevocable copyright grant BUT includes usage/behavioral restrictions (Attachment A at the end of that link).

Among those restrictions:
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;

- To generate or disseminate verifiably false information and/or content with the purpose of harming others;

- To generate or disseminate personal identifiable information that can be used to harm an individual;

- To defame, disparage or otherwise harass others;

- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;

- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;

- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories;
There are some other prohibitions (break local laws, provide medical advice, yada) but the above are the ones germane to this thread. I really like that they put an impact > intent equivalent in the discrimination clause.

Like a lot of open source projects directly sponsored by large corporations (stability.ai is currently private but raising VC) the terms could change in the future, however the grant is perpetual so in a worst-case scenario the community forks from immediately prior to any nefarious changes and continues on.

And very obviously that community is a far fucking cry from what anyone on Metafilter - myself very much included - is comfortable with. While digging through the specifics of all this I learned Automatic1111 swiped the original workbench everyone uses from some 4channer’s personal project (WTF). So that probably needs an open source replacement at some point (fortunately the easiest bit to replace in all this). Everything here is in its early Reddit stages, and it’s probably going to take a decade or more to grow out of that shit, just like Reddit (eventually and with enormous pain) did.

But while SD is still very much the land of basement dwelling …I’m not sure whether “weebs” gets a pass as a pejorative around here but fucking christ is it accurate in this case - that’s just temporary. Enough people are deeply upset about both the intellectual property and discriminatory ethical issues with CommonCrawl-derived works that there will be ethically-sourced alternatives, though it’ll be at least a few years.

Oh and for anyone curious the LLM equivalent for text generation within the broader open source LLaMA-derived ecosystem is the pygmalion dataset.
posted by Ryvar at 4:35 PM on June 14, 2023 [1 favorite]


Is it understood why the AI models amplify prejudiced input data? I skimmed the first article but couldn't find the answer, it mostly talks about LAION being an internet data source, but that's preexisting bias rather than go into explaining why the training algorithms of neural networks choose to increase and extremize bias.

I would guess that the model reflects the data set, but the data set doesn't reflect reality. And maybe something about the long tail?
posted by subdee at 5:23 PM on June 14, 2023 [2 favorites]


Okay I opened the article again and watched the 2 minute video at the end. Bloomberg's thesis is that the data set is the source of stereotype bias.

But if that's the case, then bias is not being "amplified". It's preexisting bias in the image sets (from the internet), and deep learning is just doing as it is told. Which leads me to think that an interesting broader sociological question is why internet imagery is so biased in the first place.
posted by polymodus at 6:23 PM on June 14, 2023 [1 favorite]


Quickly, because I'm to bed, image generators only work because someone tagged the source library images with descriptive terms. In my example of "beautiful portrait" what happened was that most of the source images that were tagged "beautiful" and "portrait" were young caucasian women with a certain facial structure. The machine didn't decide what beautiful was, the human image tagger/categorisers did, and the image generator just reproduced it.
posted by seanmpuckett at 6:33 PM on June 14, 2023 [2 favorites]


Yep. The process of "labeling" the items of data is where the human hand, and necessarily bias, come in. There's no way to avoid this within the technology that I'm aware of.
posted by rhizome at 6:39 PM on June 14, 2023 [1 favorite]


Someone tipped me off on Discord that the most popular hardcore NSFW photoreal LoRA was “just better at everything, period” and my experience generating images with no humans ever since strongly agrees.

I think pretty much all of the most popular models at this point, particularly for photorealism, have in their DNA models that were made explicitly for porn, so if you are generating images with humans you need to be pretty aggressive with negative prompts if you do not want everyone to have big anime breasts.

This illustrates another issue with open source models beyond what we are seeing in the article. With the base SD models, we basically see the biases of the training data (the English speaking web) made plain, but as the community refines the models we are also getting the aesthetic and moral choices of a bunch of individuals baked into the various models that people end up actually using, and it is not a particularly diverse community or one that has shown much interest in not working with terrible people.*

*For instance the creator of the tool that most people use to run SD, "Automatic1111", has a history of some pretty openly terrible racist stuff, and every conversation I have seen about it leans heavily toward either "he did nothing wrong" or "who cares about your SJW politics, it is a good tool."
posted by St. Sorryass at 6:48 PM on June 14, 2023 [1 favorite]


But while SD is still very much the land of basement dwelling …I’m not sure whether “weebs” gets a pass as a pejorative around here but fucking christ is it accurate in this case

I take some umbrage at calling Stable Diffusion the land of the basement dwellers. A lot of scientific stuff is going on in that space - here's a page on ControlNet taken from this study


Talk of bias, legality, ethics, morals etc. is interesting and worthwhile. Dismissing it as only being for "weebs" is trivialising what is going on here.
posted by Bluepenguin05 at 6:52 PM on June 14, 2023 [2 favorites]


I take some umbrage at calling Stable Diffusion the land of the basement dwellers. A lot of scientific stuff is going on in that space.

Basement dwellers and weebs, seems a meanspirited way to put it, but there certainly is a specific culture around the tools right now, and it is one that does not make for a welcoming impression on a more diverse userbase.

There is a lot of cool stuff going on, I have had a lot of fun experimenting and have done some work I am quite proud of with the tools, and have seen lots of other artist finding interesting new workflows, but outside of very niche spaces I have not found any community around stable diffusion that was not 90% young mostly white men generating pinups and porn. The rest of us exist, but as much as there is a culture around it right now, it is largely centered around anime and sex.

Looking at the most popular models on civitai, the first one to not use a picture of an attractive woman as its thumbnail was number 20 (which uses an attractive man instead).
posted by St. Sorryass at 7:26 PM on June 14, 2023 [2 favorites]


ControlNet is a fantastic tool, but search for tutorials on how to use it and guess what the subject matter of the majority of those tutorials is going to be.
posted by St. Sorryass at 7:32 PM on June 14, 2023 [1 favorite]


Dismissing it as only being for "weebs" is trivialising what is going on here.

Entirely fair, and I apologize. As a gamedev I should know better than to dismiss an entire field or emerging form of media from one community, no matter how large. I think St Sorryass already covered every point I might have wanted to raise in my own defense (also yes ControlNet is godtier), save this:

I’m incredibly impatient to see what the SD community of ten years from now looks like. I want to see their tools and meet their (hopefully more diverse) people and get a whole bunch of tips from them on how to make Jellypocalypse a reality without having to worry about their new LoRA turning everything into hentai tentacles.

>:(
posted by Ryvar at 7:47 PM on June 14, 2023 [1 favorite]


ML models do often display bias from their training data but there have been studies that show they actually amplify that bias as well.
See: https://arxiv.org/abs/2201.11706

Even if a model merely displays the same bias as the training data, widespread usage of it and use in downstream tasks could amplify the bias in terms of application.
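
(One toy illustration of an amplification mechanism: any mode-seeking step, such as always emitting the single most likely answer, turns a 70/30 skew in the training data into a near-100/0 skew in the outputs. The numbers below are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.random(10_000) < 0.70          # say 70% of training images of "CEO" show men

p_man = train.mean()                       # what the model learns: P(man | "CEO") ~ 0.70
print(f"training data:       {p_man:.0%} men")

faithful = rng.random(10_000) < p_man      # sampling the learned distribution reproduces ~70%
print(f"faithful sampling:   {faithful.mean():.0%} men")

mode_seeking = np.full(10_000, p_man > 0.5)  # an argmax-style decoder always picks the majority
print(f"mode-seeking output: {mode_seeking.mean():.0%} men")
```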
posted by colourlesssleep at 11:52 PM on June 14, 2023 [7 favorites]


From the porridge bird derail: use one of the most popular LoRAs trained on anime girls to generate porridge birds (with anime at a -1.4 weighted prompt) and it eventually (fourth image) generates an anime girl with birdlike angel wings serving porridge to birds.

What in the... You ok over there, Ryvar? I'm not sure which one of us is having a stroke but I swear I understood English when I woke up this morning.
posted by loquacious at 12:00 AM on June 15, 2023 [1 favorite]


See: https://arxiv.org/abs/2201.11706

Thank you, that clears up the Bloomberg article. The abstract has one interesting sentence; it suggests to me that bias amplification is a variation of "deep learning models cannot reason, in this case because they get confused when the classification problem involves complex groups, resulting in bias amplification (which might even be thought of as a subtle form of hallucination)".
posted by polymodus at 12:46 AM on June 15, 2023 [1 favorite]


Loquacious: not especially but that’s a “gosh you’re posting a lot these days” yellow flag, and this was a gratuitous punctuation/sentence restructuring failure. Didn’t really see it until right after the edit window closed.

Sincerely appreciate you asking. This too shall pass. Best attempt at de-gibberishing without a full rewrite:

From the porridge bird derail: if you use one of the most popular LoRAs (trained on anime girls) to generate “porridge birds” with “anime” at a -1.4 weight, it will eventually (fourth image) generate an anime girl with birdlike angel wings who is serving porridge to birds.
posted by Ryvar at 5:11 AM on June 15, 2023 [1 favorite]


Then there's also the issue of model collapse when the (biased) images produced by AI get fed back into the training data, producing increasingly more biased images as the long tail disappears.
posted by subdee at 8:36 AM on June 15, 2023


Also wrt the training set being biased, I don't even think it's just how the images are tagged; people (in general) have mental models of reality that don't take into account changes over time or "outliers" (the long tail) or anything about reality that's different from how it's portrayed in media. I bet most people didn't know that 37% of all janitors in the US are women (for example). So when taking a "representative" photo of a janitor, they'd photograph someone male in the first place...
posted by subdee at 8:39 AM on June 15, 2023 [2 favorites]


Machine learning is all about learning from correlations.

Humans are still better at meta-reasoning, so even if they see a lot of white male judges, if they think it through, they'll figure that isn't inherent to the definition of a judge.

The machine learning system, however, only understands what a judge is based on what correlates with what. If being a judge correlated with being a white man in the training data, it can't see any fundamental difference between that and it correlating with wearing robes or holding a gavel.
posted by RobotHero at 9:19 AM on June 15, 2023 [3 favorites]


All the above, and there are a limited number of tokens associated with each image. You can outsource underpaid labor, gather captchas, and to some extent infer from words on whatever page the image was scraped from in the first place.

But there are only so many tokens per image, they are not always consistently applied, and the people performing that labor are just trying to get through today’s 10,000 quota so they can eat, or click the boxes so they can fucking login, or were making sarcastic jokes about the image with their friends on a forum years before this tech existed.

Capitalism optimizes for minimum costs and that inevitably produces a minimum viable training set.

Loss of fidelity + lack of consistency = falling back on the associations which are available - on the coarser cultural consensus - which is going to look a lot like stereotypes and tropes.

A network capable of runtime weight tuning in response to environmental stimuli could potentially accumulate some nuance and shades of meanings with extended observation, like we do. But that’s not where we’re at. Not even close.
posted by Ryvar at 9:37 AM on June 15, 2023 [2 favorites]


But it looks like the LAION set was auto-generated, according to its Wikipedia article? It says it used Common Crawl data and both projects are nonprofits.

Rereading the end of the Bloomberg piece it seems scientists don't actually know what the basic cause is (TBF the article is about showing this problem exists, not why). The scientist at the end says "Who is responsible, the data or the models or the training?" Which I take to mean they don't fully understand why it is, and more research is needed, etc.
posted by polymodus at 1:24 PM on June 15, 2023


Buckle up. (Deep breath)

Stable Diffusion uses the 2.3 billion image-text pair English subset of the LAION-5B dataset (5.85 billion image-text pairs).

LAION-5B (arxiv PDF of paper) is the followup to LAION-400M (arxiv PDF of paper). The method for compiling both can be broadly summed up as: “filter CommonCrawl for images that are both >5KB in size and have >5 characters of alt-text in the <img> tag, then go throw them at OpenAI’s CLIP model to check that the alt-text actually matches the image, dropping low-similarity pairs.”

Which looks a lot like an attempt to clone-without-actually-cloning the CLIP training set. CLIP was trained on 400 million image-text pairs which OpenAI does not publish, LAION fed it a probably-not-very-different 400 million image-text pairs and asked for its thoughts. In the later LAION-5B paper they talk about how after OpenAI released the model but not the training data for CLIP, they took the model and trained it on their CLIP-labeled LAION datasets: results matched within at least a few percent and usually fractions of a percent. So presumed cloning success, extremely cool and extremely legal as anything else going on around here.
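
(For the curious, a sketch of that CLIP filtering step using the transformers library: score each image against its own alt-text and keep only pairs whose similarity clears a cutoff. The model ID is real; the cutoff is illustrative rather than LAION's exact value.)

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(image: Image.Image, alt_text: str) -> float:
    """Cosine similarity between an image and its alt-text in CLIP's embedding space."""
    inputs = processor(text=[alt_text], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())

# keep = clip_similarity(img, alt_text) > 0.28   # low-similarity pairs get dropped
```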

A brief pause to give credit where it’s due: LAION state their motivation is that there are no datasets which are both large and public: most large datasets used in image classifier research are proprietary to a company or research group, meaning direct comparisons (or insights into biases?) are impossible and they are trying to change that. And on page 3:
we strongly recommend that LAION-5B should only be used for academic research purposes in its current form. We advise against any applications in deployed systems without carefully investigating behavior and possible biases of models trained on LAION-5B
(emphasis theirs, actually).

Whether they should be doing any of this in the first place aside, they’re at least stopping early in their papers to call out massive bias warnings. There’s another large section on safety and ethics in page 12 and it’s clear they are caught up on the discourse.

So, where did OpenAI get the 400M image-text pairs for CLIP that were used to label LAION’s datasets? CLIP’s (arxiv PDF) paper is far longer and a lot denser - I do appreciate OpenAI’s researchers taking the time to stop and complain about the many papers in the field using the exact same terms for diametrically opposed purposes because goddamn does that get old.

The assembly of their dataset is covered in section 2.2 and briefly reviews the various datasets they were using as a basis of comparison (including a pair of high-quality smaller crowd-labeled sources in the hundred Ks to low millions, a lower-quality 100M set that drops to 15M after you toss the obvious cruft, and a 3.5B Instagram dump). In the end:
we constructed a new dataset of 400M (image, text) pairs collected from a variety of publicly available sources on the Internet [ed. COUGH, COUGH]. To attempt to cover as broad a set of visual concepts as possible, we search for (image, text) pairs as part of the construction process whose text includes one of a set of 500,000 queries [footnote here that “queries” = any word appearing 100+ times on EN WP]. We approximately class balance the results by including up to 20,000 (image, text) pairs per query
Which I think means “we dropped any images associated with a word we’d seen 20,000 times before,” and I can’t tell if that’s better or worse for bias.
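
(If that reading is right, the “class balancing” is just a per-query cap, something like the toy below; all of the data here is invented.)

```python
from collections import defaultdict, namedtuple

Pair = namedtuple("Pair", "image text query")
candidate_pairs = (
    [Pair(f"img{i}.jpg", "a judge", "judge") for i in range(30_000)]
    + [Pair(f"img{i}.jpg", "a farrier", "farrier") for i in range(120)]
)

MAX_PER_QUERY = 20_000
kept, seen = [], defaultdict(int)
for pair in candidate_pairs:
    if seen[pair.query] < MAX_PER_QUERY:
        seen[pair.query] += 1
        kept.append(pair)

print(sum(p.query == "judge" for p in kept), "judge pairs kept")      # capped at 20,000
print(sum(p.query == "farrier" for p in kept), "farrier pairs kept")  # all 120 kept
```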

At any rate we have our answer from OpenAI: “uh…the Internet.” Very cool.

To summarize: LAION fed CommonCrawl images with alt text through OpenAI’s CLIP to automatically filter and score their 400M (later 5.85B) image-text pairs, and CLIP itself was trained on 400M pairs scraped off whatever was on the Internet that day. An enormous amount of work then went into getting CLIP to meet or exceed 27 other image classifiers, including many based on sets of 100K~3.3M high-quality crowd-labeled images, ImageNet’s 14M crowd-labeled images, and a 3.5B Instagram dump that just goes off titles or whatever’s in the image metadata.

So the bias is effectively “the Internet.” And having at least skimmed and in key points heavily re-read the relevant papers, LAION honestly come off as the least-bad actors as far as bias or unthinking harm goes. With the curious, massive blindspot of “artists’ livelihoods” in that they started working from the CommonCrawl.
posted by Ryvar at 5:55 PM on June 15, 2023 [6 favorites]

