Divergence: Compute vs Capability
October 18, 2022 12:21 AM

Jim Keller on AI Generated Software - "It won't be that long until you start looking at all the software that has ten years of legacy in it and go 'Why would I want old software? Why wouldn't I want software that was generated this week?'. And there are a whole bunch of really interesting changes that come with that."[0,1]

Computer architecture's past (and future) - "How are we going to use AI to build CPUs?"[2,3,4]

also btw :P
  • Can OpenAI Codex Create AI? - "OpenAI Codex is the follow-up model to GitHub Copilot. OpenAI Codex is a GPT-based model that generates code. In this video we test if it can write AI / ML code in Python. As it turns out, it works fairly well even for machine learning code!"[5,6,7] (a minimal prompting sketch follows after the post)
  • AlphaCode Explained: AI Code Generation - "AlphaCode is DeepMind's new massive language model for generating code."[8]
  • How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile - "AI image generators are massive, but how are they creating such interesting images?"
  • Stable Diffusion - What, Why, How? - "Stable Diffusion is a text-based image generation machine learning model released by Stability.AI. Its default ability is generating images from text, but the model is open source, which means that it can also do much more. In this video I explain how Stable Diffusion works at a high level, briefly talk about how it is different from other Diffusion-based models, compare it to DALL-E 2, and mess around with the code."[9]
  • The AI Unbundling - "AI is starting to unbundle the final part of the idea propagation value chain: idea creation and substantiation. The impacts will be far-reaching." (previously)
  • This image, like the first two in this Article, was created by AI (Midjourney, specifically). It is, like those two images, not quite right: I wanted “A door that is slightly open with light flooding through the crack”, but I ended up with a door with a crack of light down the middle and a literal flood of water; my boy on a bicycle, meanwhile, is missing several limbs, and his bike doesn’t have a handlebar, while the intricacies of the printing press make no sense at all. They do, though, convey the idea I was going for: a boy delivering newspapers, printing presses as infrastructure, and the sense of being overwhelmed by the other side of an opening door — and they were all free. To put in terms of this Article, I had the idea, but AI substantiated it for me — the last bottleneck in the idea propagation value chain is being removed.[10]
  • Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads - "Lowering costs by driving high utilization across deep learning workloads is a crucial lever for cloud providers. We present Singularity, Microsoft's globally distributed scheduling service for highly-efficient and reliable execution of deep learning training and inference workloads. At the heart of Singularity is a novel, workload-aware scheduler that can transparently preempt and elastically scale deep learning workloads to drive high utilization without impacting their correctness or performance across a global fleet of AI accelerators (e.g., GPUs, FPGAs)."
  • Our mechanisms are transparent in that they do not require the user to make any changes to their code or require using any custom libraries that may limit flexibility... Singularity is designed from the ground up to scale across a global fleet of hundreds of thousands of GPUs and other AI accelerators. Singularity is built with one key goal: driving down the cost of AI by maximizing the aggregate useful throughput on a given fixed pool of capacity of accelerators at planet scale.
  • Pathways: Asynchronous Distributed Dataflow for ML - "This paper describes our system, PATHWAYS, which matches the functionality and performance of state of the art ML systems, while providing the capabilities needed to support future ML workloads. PATHWAYS uses a client-server architecture that enables PATHWAYS's runtime to execute programs on system-managed islands of compute on behalf of many clients. PATHWAYS is the first system designed to transparently and efficiently execute programs spanning multiple 'pods' of TPUs (Google, 2021), and it scales to thousands of accelerators by adopting a new dataflow execution model. PATHWAYS's programming model makes it easy to express non-SPMD [single program multiple data] computations and enables centralized resource management and virtualization to improve accelerator utilization."[11,12]
  • Introducing Pathways: A next-generation AI architecture - "Today's AI models are typically trained to do only one thing. Pathways will enable us to train a single model to do thousands or millions of things."[13,14,15,16]
posted by kliuless (76 comments total) 24 users marked this as a favorite
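(For the curious, here is a minimal sketch of what the code-generation links above are demonstrating, using an open model from the Hugging Face hub. The model name, prompt, and decoding settings are illustrative assumptions, not taken from the linked videos, and output quality will vary.)

```python
# Minimal prompting sketch for an open code-generation model.
# "Salesforce/codegen-350M-mono" is just one example of a small public model;
# the prompt and decoding settings are arbitrary.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

prompt = '''def moving_average(xs, window):
    """Return the simple moving average of xs with the given window size."""
'''

out = generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"]
print(out)  # the model continues the function body; results are not guaranteed correct
```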
 
'Why would I want old software? Why wouldn't I want software that was generated this week?'

Because it does the job it's designed for, effectively.

There's a tendency for toolmakers to forget at the end of the day, what you're making are tools to accomplish tasks, and it's those tasks that are important.
posted by NoxAeternum at 2:16 AM on October 18, 2022 [34 favorites]


Because it does the job it's designed for, effectively.

With the assumptions that were current when it was written. It's not exactly the week-to-week changes Keller mentions, but architectural changes in compute do mean that what was effective or efficient in the past might not be anymore.
posted by Dysk at 3:51 AM on October 18, 2022


Having worked in the software industry for enough decades to have become thoroughly acquainted with its principles and practices, about the only thing I can think of that's less likely to generate reliable software than software engineers is software. And it seems to me that the more software-generating software gets interposed between any design intent and the implemented product, the less reliable that product is going to be.

I'm agin it.
posted by flabdablet at 4:01 AM on October 18, 2022 [45 favorites]


Yeah, as a software engineer I could not be less worried about AI software replacing me. Reason being, I'm a software engineer.
posted by Tom Hanks Cannot Be Trusted at 4:25 AM on October 18, 2022 [14 favorites]


Weekly new versions of Windows??!

I haven’t delved into the links yet so quite possibly that’s not the context of the pull quote, but I imagine it would be tricky to ensure that software-generated software isn’t introducing changes that affect functionality for end users. Adapting for newer architecture is one thing, but most users don’t have the time or desire to deal with frequent user interface or subtle functionality changes. Especially if poorly documented, which seems likely in the case of software-generated software.
posted by eviemath at 4:28 AM on October 18, 2022 [2 favorites]


In the past few weeks,

An artist was streaming their work, and someone took a screenshot of the incomplete image, processed it through AI software, and posted it on social media before the original artist was done with their work. And when confronted about it, the AI user tried to claim the original artist was referencing the AI image based on the timestamp.

Artist Kim Jung Gi passed away. His family asked that, in remembrance, people draw flowers no matter their drawing ability. A few days later someone ran his work through Stable Diffusion and then asked for credit, which many found insulting.

Thread of artists seeking out ways to prevent their art from being used this way, though no clear solution yet.

I'm wondering what other industries and communities will be affected in this way?
posted by picklenickle at 4:30 AM on October 18, 2022 [8 favorites]


As training GPT-3 costs like 3 GWh, we seemingly do not have anywhere near the right computing model for human-like machine learning goals. It'll be ironic if each billionaire who uploads themselves starves millions by sucking up the energy required to make the fertilizer to feed them.

As a species, we largely solve problems by throwing more energy at them, but doing so cannot scale even another 100 years, really not even a few more decades.

"At a 2.3% growth rate .. we would reach boiling temperature in about 400 years. And this statement is independent of technology. Even if we don’t have a name for the energy source yet, as long as it obeys thermodynamics, we cook ourselves with perpetual energy increase."Tom Murphy

We could perhaps harness incredible amounts of solar energy of course, but we actually need energy to become less continuously available or maybe less reliable, because otherwise we'll never redesign our tools to be as efficient as nature evolved us to be.

“No civilization can possibly survive to an interstellar spacefaring phase unless it limits its numbers” (and consumption) ― Carl Sagan

We need to better train people to do whatever real tasks need doing, like writing software or applying statistical methods, instead of turning them into engines of consumption doing bullshit jobs.
posted by jeffburdges at 4:31 AM on October 18, 2022 [2 favorites]
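(A rough sanity check on the Murphy figure quoted above, treating Earth as a blackbody that must radiate waste heat on top of absorbed sunlight. The ~18 TW current usage and the solar absorption figure are round-number assumptions, not values from the linked piece.)

```python
import math

sigma = 5.670e-8                   # Stefan-Boltzmann constant, W/m^2/K^4
earth_radius = 6.371e6             # m
area = 4 * math.pi * earth_radius**2

absorbed_solar = 1.2e17            # W, roughly what Earth absorbs from the Sun
radiated_at_boiling = sigma * 373.15**4 * area   # what a 100 C surface radiates
waste_heat_needed = radiated_at_boiling - absorbed_solar

current_use = 1.8e13               # W, ~18 TW of primary power today (rough)
growth = 0.023                     # 2.3% per year
years = math.log(waste_heat_needed / current_use) / math.log(1 + growth)
print(f"~{years:.0f} years until waste heat alone boils the surface")  # on the order of 400
```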


“And it seems to me that the more software-generating software gets interposed between any design intent and the implemented product, the less reliable that product is going to be.”

If that's your argument against it, it's a valid argument. If it's your prediction for why this won't happen, then the last 50 years refutes it.

It's been almost thirty years since I first learned about cleanroom software engineering and really started ruminating upon software engineering in that paradigm versus the status quo. And the status quo has gotten so much worse over this time period. Everyone, it seems, is willing to trade reliability away for other benefits. I don't see this trend reversing, only accelerating, and this is probably an inflection point.

Without (yet) watching any of these numerous videos, I can easily imagine development environments where AI code is readily available and regularly used. What I've seen so far, such as code written by GPT, would seem to me to be only the first step. What I'd expect would be models specifically trained on source code, then utilized in GANs to optimize the code, by some metric.
posted by Ivan Fyodorovich at 4:34 AM on October 18, 2022 [1 favorite]


So, we know that computational generative systems are driven by (and limited by) their inputs. Kyriarchic systems produce image recognition software or text generators or recommendation algorithms that contain the same flaws. Given the state of most software, I for one welcome our new buggy software overlords.
posted by kokaku at 4:40 AM on October 18, 2022 [2 favorites]


Is this model's big advance over Copilot going to be that it knows how to not output the GPL license from the code it's copying?
posted by polytope subirb enby-of-piano-dice at 4:45 AM on October 18, 2022 [9 favorites]


Why wouldn't I want software that was generated this week?'

No more 1966. Bring us some fresh wine. This year.
posted by hwyengr at 4:53 AM on October 18, 2022 [4 favorites]


At a 2.3% growth rate .. we would reach boiling temperature in about 400 years. And this statement is independent of technology.

So the days of cloud computing are numbered — soon to be replaced by orbital computing? It's cold in space — with plenty of sunlight to create daily versions of Windows 2095, forever. Everything will be fine.
posted by UN at 4:57 AM on October 18, 2022 [4 favorites]


We should just assume all automatically generated software is Affero GPL v3 then, right?
posted by jeffburdges at 5:00 AM on October 18, 2022 [4 favorites]


If it's your prediction for why this won't happen

It's more of a rationale for my own neo-luddism than anything else. I look at the speed with which the general public has been encouraged to make an absolute necessity of hitherto undreamed of superfluities and I think, you poor, poor deluded bastards, you are just setting yourself up to be ripe for the picking.

My list of Seriously, Do Not Want, Why The Fuck Would Anybody Think I Would Ever Buy This got started pretty much at the first appearance of "smart" wifi-enabled light bulbs and "smart" app-driven door locks and has just grown ever larger; I bitterly resent the juggernaut of touch screen interfaces and don't get me started on voice-activated "assistants".

A friend and I were musing recently on the number of robots providing conveniences for us in our homes. There's one that does the dishes, and one that cleans the clothes, and one that plays music completely at random, and one that keeps the interior at a nice temperature, and for sure these are all lovely things to have about the place and I'm glad I'm in a position to afford them. But it occurred to me that robots that won't stay where I put them - robots that I don't need to bring my stuff to in order to have it dealt with - would be way way way over on the Do Not Want side of my own list. It seems to me that for most of us, "labour saving" tech has already got ludicrously past the point where convenience has shaded over into structural disempowerment.

And then I read this, from some presumably super smart Googler:
AI is poised to help humanity confront some of the toughest challenges we’ve ever faced, from persistent problems like illness and inequality to emerging threats like climate change.
Hey, Siri? How do we rid the world of billionaire personality disorder?
posted by flabdablet at 5:01 AM on October 18, 2022 [14 favorites]


Counterpoint: Neural nets are decision trees.
In this manuscript, we show that any neural network having piece-wise linear activation functions can be represented as a decision tree. The representation is equivalence and not an approximation, thus keeping the accuracy of the neural network exactly as is. This equivalence shows that neural networks are indeed interpretable by design and makes the black-box understanding obsolete. We share equivalent trees of some neural networks and show that besides providing interpretability, tree representation can also achieve some computational advantages. The analysis holds both for fully connected and convolutional networks, which may or may not also include skip connections and/or normalizations.
... and they propose that in fact there are significant computation advantages to approaching these generators as decision trees.

Followup counterpoint: The Github Copilot Investigation:
Microsoft char­ac­ter­izes the out­put of Copi­lot as a series of code “sug­ges­tions”. Microsoft “does not claim any rights” in these sug­ges­tions. But nei­ther does Microsoft make any guar­an­tees about the cor­rect­ness, secu­rity, or exten­u­at­ing intel­lec­tual-prop­erty entan­gle­ments of the code so pro­duced. Once you accept a Copi­lot sug­ges­tion, all that becomes your prob­lem:

“You are respon­si­ble for ensur­ing the secu­rity and qual­ity of your code. We rec­om­mend you take the same pre­cau­tions when using code gen­er­ated by GitHub Copi­lot that you would when using any code you didn’t write your­self. These pre­cau­tions include rig­or­ous test­ing, IP [(= intel­lec­tual prop­erty)] scan­ning, and track­ing for secu­rity vul­ner­a­bil­i­ties.”
posted by mhoye at 5:03 AM on October 18, 2022 [11 favorites]
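(To make the first counterpoint concrete, here is a minimal numpy sketch of the construction: a one-hidden-layer ReLU network is piecewise linear, so branching on which hidden units are active reproduces it exactly. The weights are made up for the example.)

```python
import numpy as np

W1 = np.array([1.0, -2.0])    # hidden-layer weights (toy values)
b1 = np.array([-0.5, 1.0])    # hidden-layer biases
W2 = np.array([3.0, 0.5])     # output weights
b2 = -0.25                    # output bias

def net(x):
    h = np.maximum(W1 * x + b1, 0.0)      # ReLU hidden layer
    return float(W2 @ h + b2)

def tree(x):
    # Decision-tree form: branch on the activation pattern, then apply the
    # linear function the network computes in that region.
    a0 = W1[0] * x + b1[0] > 0
    a1 = W1[1] * x + b1[1] > 0
    if a0 and a1:
        return float(W2[0]*(W1[0]*x + b1[0]) + W2[1]*(W1[1]*x + b1[1]) + b2)
    if a0:
        return float(W2[0]*(W1[0]*x + b1[0]) + b2)
    if a1:
        return float(W2[1]*(W1[1]*x + b1[1]) + b2)
    return float(b2)

for x in np.linspace(-3, 3, 13):
    assert abs(net(x) - tree(x)) < 1e-12   # identical, not approximate
```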


I'm sure if you handed this AI a set of totally exhaustive, in-depth, crystal-clear, unambiguous requirements for software, it could build you something passable. If you have really good requirements, any idiot can code it. Meanwhile, I've been in the biz for about 15 years, and have yet to see a requirements list that checks even one of those four boxes, so you let me know when you find someone who can create that clear a specification, and then we'll go train an AI on them instead.
posted by Mayor West at 5:08 AM on October 18, 2022 [9 favorites]


UN: "It's cold in space "

It's also a vacuum in space, meaning getting rid of heat is hard.
posted by signal at 5:10 AM on October 18, 2022 [8 favorites]


What I see is some sort of automated test-driven development on steroids, where you define a series of acceptance conditions, and the GAN generates human-illegible code that satisfies your requirements to some arbitrary level of optimality.
And then humans use the system and discover all the edge cases not covered by the original conditions, except you can't really debug the code (because it's not human readable) so they add some more conditions, and people discover more edge cases, repeat until a Tesla chooses to plow into fifty kindergartners to avoid denting another more expensive Tesla that is parked in a bicycle lane.
posted by signal at 5:13 AM on October 18, 2022 [13 favorites]
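(A minimal sketch of that generate-until-the-acceptance-tests-pass loop. The "generator" below just samples from a few hand-written candidates; a real system would ask a code model for each candidate instead.)

```python
import random

def acceptance_tests(fn):
    cases = [((2, 3), 5), ((-1, 1), 0), ((10, 0), 10)]
    return all(fn(*args) == expected for args, expected in cases)

def generate_candidate():
    # Stand-in for "ask the model for an implementation".
    return random.choice([
        lambda a, b: a - b,
        lambda a, b: a * b,
        lambda a, b: a + b,
    ])

solution = None
for _ in range(100):
    candidate = generate_candidate()
    if acceptance_tests(candidate):
        solution = candidate
        break

print("found an implementation that passes" if solution else "no candidate passed")
```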


I'm much more excited by the advent of ultra low powered asynchronous spiking neural network chips than I am by what's going on in the world of GPU farms running lockstep floating-point multiply-accumulate at ever more insane speeds and energy burn rates.

If there are genuinely new computing paradigms to be found, I think there's a good chance that this is where they'll arise from; BrainChip's Akida SoC might well be this century's 6502.
posted by flabdablet at 5:14 AM on October 18, 2022 [2 favorites]


I imagine it would be tricky to ensure that software-generated software isn’t introducing changes that affect functionality for end users.

You're making an implicit assumption that people who sell software, and in particular people who sell software subscriptions, and in particular particular people who sell subscriptions to services that conglomerate into proprietary tech ecosystems, care in any way about breaking existing functionality for their locked-in livestock end users. I'm not convinced that this assumption is supported by experience.
posted by flabdablet at 5:29 AM on October 18, 2022 [1 favorite]


Expired: people generating software
Tired : software generating software
Wired: software generating software that generates software
posted by newdaddy at 5:32 AM on October 18, 2022 [3 favorites]


Fired: all of the above
posted by flabdablet at 5:33 AM on October 18, 2022 [4 favorites]


'Why would I want old software? Why wouldn't I want software that was generated this week?'

Hell yeah, I love to spend all my time learning to use a new tool instead of sitting down and getting stuff done, it’s great to be constantly asking “how do I do this basic thing that this new tool wants to do in some crazy way for no obvious reason”. Plus you get to find all kinds of new bugs! Maybe lose hours of work because of them! Maybe there’s even a bug in the part that saves your work so you can lose everything despite being cautious, fresh software is great!

Shit, I don’t touch a new release of anything any more until it’s had that .1 release that fixes every insane bug that crept in.
posted by egypturnash at 5:35 AM on October 18, 2022 [11 favorites]


At the moment I have two robots wielding pens with software-generated stroke instructions to draw images created by stable diffusion.

I designed and built the robots. I designed and wrote the stroking software. I chose among the myriad possibilities of stroking technique and decided which pens and colours to use. I gave stable diffusion the prompts and curated the results to show my intent.

The machines are doing what I told them to do, creating what I told them to create.
posted by seanmpuckett at 5:45 AM on October 18, 2022 [4 favorites]


I find this whole thing very disturbing because, “Do you want Skynet? This is how you get Skynet.” The thought of Photoshop changing daily (or even weekly), because some AI has internalized developers’ institutional inability to leave well enough alone, is a horrifying notion.

That said...
As an artist who has watched helplessly over the years as coders/devs/etc. have pointed their efforts at replacing artists’ and their work via AI generation, the idea of AI replacing coders/devs/etc. work...well...it’s hard not to feel just a wee bit giggly.
posted by Thorzdad at 5:53 AM on October 18, 2022 [5 favorites]


I'm sure if you handed this AI a set of totally exhaustive, in-depth, crystal-clear, unambiguous requirements for software, it could build you something passable. If you have really good requirements, any idiot can code it.

This is... I don't even know what this is, but it bears very little connection to any reality I've ever been a part of.
posted by mhoye at 5:54 AM on October 18, 2022 [4 favorites]


I've been writing software for a living for 40 years. I use a *lot* more software to do that now than I did when I started. High level languages. Compilers. IDEs. DB Engines. Endless libraries and frameworks. And on and on. So getting software interposed between my design intent and the finished product is nothing new - it's all just tools, and the developer's job is to explore and understand the design intent deeply enough, and know how to use the tools well enough, to express the intent as well as possible in the "finished" (i.e. next iteration of the) product.

The idea of having AI write software seems far fetched now, but no more far fetched than, say, DALL-E 2 results seemed very recently indeed (to an old fogey like me).

AI generated software has a long way to go but I suspect it will go there very quickly. When it does, though, just as with DALL-E and its ilk, the results will be all about the prompts, and filling in the gaps, and correcting the misunderstandings (and training the model), so writing good prompts will become part of the software developer's skill set, or training the AI if you want to be analogous to a compiler developer.

The real skill of a software developer is understanding the problem domain and translating it to the solution domain. That's not going to change soon.
posted by merlynkline at 5:59 AM on October 18, 2022 [5 favorites]


understanding the problem domain and translating it to the solution domain

That Venn diagram near the end of Jim Keller's talk, the one where instead of machine learning being a small though substantial bubble completely contained inside artificial general intelligence, it's a way huge bubble that overlaps with AGI to a fairly substantial extent? That's exactly how I would expect those problem domains to be viewed by somebody whose focus and fascination for forty years has apparently been the pace of technological advance.

My personal view is that the overlap idea is correct, but that the AGI bubble is monstrously, stupidly huge compared to the ML one. That's because my focus and fascination for forty years has been the extent to which legacy systems so utterly dominate all organized activity and yet manage to fade into almost complete imperceptibility while doing so; and human intelligence, being an outcome of countless years of evolution, is quirky legacy systems all the way down.

I designed and built the robots. I designed and wrote the stroking software. I chose among the myriad possibilities of stroking technique and decided which pens and colours to use. I gave stable diffusion the prompts and curated the results to show my intent.

The machines are doing what I told them to do, creating what I told them to create.


I can see the attraction of going that way because bending tech to one's will can be super fun, but I'm quite curious about how likely you think you'll be, ten years from now, to be looking at the fruits of your labours with genuine enjoyment.

For a long while I was deeply interested in the idea of building the world's best and most flexible programmable music generation system. Turns out, hitting poorly tuned skins with relatively fragile wooden sticks is way more fun and generates at least as many pleasing insights. Hard real-time analog interfaces for the win!
posted by flabdablet at 6:03 AM on October 18, 2022 [7 favorites]


If you have really good requirements, any idiot can code it.

Q: "How many consultants does it take to successfully complete a SAP upgrade?”
A: "Trick question. No one has ever successfully completed a SAP upgrade.”
posted by Tell Me No Lies at 6:05 AM on October 18, 2022 [14 favorites]


That said, I think we can take "all of them" as a fairly conservative lower bound for a decent guess.
posted by flabdablet at 6:09 AM on October 18, 2022


"It won't be that long until you start looking at all the software that has ten years of legacy in it and go 'Why would I want old software? Why wouldn't I want software that was generated this week?"

Why, indeed, would we want to leave our children with proven, understandable technology, when we could leave them with a trash fire of zero-accountability, actively acomprehensible causality-destroying epistemological hamburger and waste heat instead. Why would we want to walk our families over that decades old bridge made of legacy materials like "stone" or "steel", when we could cross the new one, assembled just yesterday where last week's bridge used to be?

It's a thinker.
posted by mhoye at 6:09 AM on October 18, 2022 [8 favorites]


My previous artistic endeavour was large format finger painting, essentially. Large, vibrantly coloured abstract reliefs shaped by the movement of my fingers through thick acrylic mediums. Paint sculpting with my fingertips, palms, back of my hand, gently stroking, tapping, smashing and splashing. People bought them, called it art, and say they love it.

Presently, people buy what I / my robots & software create almost entirely without my physical touch, call it art, and say they love it.

As for me, I feel that the creation of art is a deeply personal expression of self and whether my flesh is directly involved or not has very little to do with my enjoyment of the process or result.
posted by seanmpuckett at 6:12 AM on October 18, 2022 [3 favorites]


I will say, however, that performance (adding a time component to the creation, and an audience) raises the stakes considerably. Creating music and theatre with people, in front of people, is a whole other thing, unrelated in my mind to the visual arts. I'm not even sure how AI would affect live performance, so that's much more interesting to me.
posted by seanmpuckett at 6:18 AM on October 18, 2022 [1 favorite]


I'm sure if you handed this AI a set of totally exhaustive, in-depth, crystal-clear, unambiguous requirements for software, it could build you something passable

I’m sure too, as any sufficiently precise requirements are indistinguishable from code.
posted by Jon Mitchell at 6:21 AM on October 18, 2022 [23 favorites]


It's a thinker.

A thinker indeed. And I look at the Firefox instance that's currently installed on my cheap, commodity Android phone, a Firefox instance that's pinned a few versions back because an auto upgrade messed up the video rendering code and made three quarters of every YouTube video frame invisible on this phone, quite likely in an attempt to stop it from screwing up in some other way on some other phone. And I think about just how many layers of abstraction and processing are involved in getting an image from the sensor in a creator's camera to the screen on my phone, and the absolute impossibility of ever testing all achievable combinations of those, and the frankly mind boggling number of people required to make any of it work. And it occurs to me that "proven, understandable" IT has not really existed in any consequential sense since the demise of the Apple II.

Consumer-grade IT in 2022 is already pretty much all spray and pray. Which is, of course, absolutely no good reason to spray even harder and pray even louder.
posted by flabdablet at 6:34 AM on October 18, 2022 [6 favorites]


The reason why people reasonably want older software rather than recent, constantly refreshed software is that the older software has had a real-world test for compatibility with existing software.

I'm not saying to never update anything, but there are non-obvious costs to updating.
posted by Nancy Lebovitz at 6:39 AM on October 18, 2022 [2 favorites]


And obvious costs as well. If there were not so many obvious costs to making arbitrary changes, there would never have been a market for pernicious works like "Who Moved My Cheese" designed to convince us all that the real problem is a "fear" of change that we all just need to get over.
posted by flabdablet at 6:41 AM on October 18, 2022 [3 favorites]


To be clear, I don't think it's that code is somehow ineffable in a way that art isn't. It's that I think code and art are both ineffable, but—due to both the inherent mediums and staggering social biases—we are more prepared to overlook the shortcomings of AI-generated art than we will be able to overlook the shortcomings of AI-generated code.

DALL-E 2 results were really fun and exciting to me for, like, a day and a half, and then the interest dulled. It's hard to separate the results of the project from the insane computing energy that's required to achieve something like that, because once you factor in "insane computing energy," what you're left with is a not-terribly-interesting computational idea and a genuinely-very-interesting supercomputer that we're using to do stuff that only seems really neat because the vastness of the machine behind it has been hidden behind a curtain.

This might be a terrible analogy, but it just makes me think of supply chains. The part where everything shows up in your grocery store is really neat, but it pales before the unbelievable industrial complications of making that happen to begin with. (I have friends who are Google engineers who describe Google's search algorithm the same way: there is elegance and complexity and confusion and pandemonium there that never meets the eye.)

The tricky thing about extending that to code is that, while I could totally see a natural-language system for taking plain-English requests for functions and turning them into actual working applications, that's already what the whole of programming aims to do. Everything ranging from Behat specifications that articulate "expected behaviors" and UX "user stories" that describe what a person intends to experience, to CMSes and WYSIWYGs, is an attempt to find the most effective way to let someone articulate to a computer what they wish that computer did.

You might be able to handwave away some complexity by abstracting out certain patterns—I could see "Instagram-like interface" being something that a computer could understand, or "Photoshop-style toolbar," in the same sense that... app-building software already lets you do exactly that. But at some point you'll find yourself having to come up with the very specific set of instructions that'll make an AI spit out exactly what you want it to spit out, and: congratulations! You've just invented what programming languages are.

The question then becomes whether AI will ever be easier to work with than a genuinely well-designed computing abstraction. And I suspect the answer is: it'll probably do some things much better than what programmers consider to be well-designed, because programmers have an awful lot of blind spots. But I see it more as something that, in unintentional and glitchy ways, inspires better human-designed approaches, rather than something that renders the human practice obsolete.

That's also true of DALL-E et al, in my opinion. I think that what it does is artistically very simple, but it's masked by the fact that computers are capable of a technical precision that humans aren't. We are capable of conceiving of drawing Muhammad Ali in the style of Doctor Seuss, we just can't emulate Seuss accurately. But we could technically do a lot of intense Photoshop work, reappropriate existing Doctor Seuss photos, and wind up with something that looks kind of like Muhammad Ali, which was in fact the big popular fad back in the Photoshop Phriday days. "Use Photoshop" is easier than "learn to illustrate or paint," but it's still harder than typing a prompt into AI; the AI, though, is basically doing the same trick the Photoshoppers were doing. Pastiche is fun, but it's pastiche.

(And I'll note that even there, folks are basically developing an impromptu programming language consisting of 200-word prompts in order to get the AI to behave right. Computers are like history: they repeat themselves, folks.)

The flat-out most interesting thing I've seen to come out of the whole AI art thing was unsurprisingly Alan Resnick's thread about it. And it's unsurprising because Alan is both a legitimately talented artist and someone who understands computers really well. He didn't aim for pastiche, he aimed for a prompt that the computer would struggle to understand, figuring that the computer goin' a little wild with it would yield extremely interesting results. And he was right! That, to me, is where the most fun will happen: not in the "successes," but in the breakdowns. (The successes are neat, but again, it's hard for me to divorce their neatness from the invisible behemoth that makes them run.)
posted by Tom Hanks Cannot Be Trusted at 6:54 AM on October 18, 2022 [7 favorites]


I think code and art are both ineffable

I think a lot of both has been effed fairly thoroughly.
posted by flabdablet at 7:03 AM on October 18, 2022 [4 favorites]


I'm posting on MetaFilter on a window right next to the broken program that I'm on Hour 21 of trying to fix, so I will admit that "code and art are both ineffable" might be my way of trying to deny to myself that I or my code actually exist rn
posted by Tom Hanks Cannot Be Trusted at 7:10 AM on October 18, 2022 [4 favorites]


It's also a vacuum in space, meaning getting rid of heat is hard.

Didn't seem like much of an issue for HAL 9000, and that was the 1960s.
posted by UN at 7:15 AM on October 18, 2022 [3 favorites]


the broken program that I'm on Hour 21 of trying to fix

Have you tried turning it off and on again?
posted by flabdablet at 7:26 AM on October 18, 2022 [2 favorites]


My list of Seriously, Do Not Want, Why The Fuck Would Anybody Think I Would Ever Buy This got started pretty much at the first appearance of "smart" wifi-enabled light bulbs and "smart" app-driven door locks and has just grown ever larger; I bitterly resent the juggernaut of touch screen interfaces and don't get me started on voice-activated "assistants".

Oddly enough, being in IT my entire career, I've gone entirely the other way, with some precautions. I fitted some Zigbee light switches, and various Amazon Echos. I see these as little different than other labour-saving robots I already have in the home. Yes, they're really as dumb as a box of rocks, and that's fine. The smart functionality is layered on top of the core function of being an actual light switch - if the whole system died tomorrow, it'd carry on working as a dumb system just fine (I have checked!). Being Zigbee, it's effectively a local parallel wireless network that talks directly to the local control hub, and wifi/internet access is not required. Alexa does use internet access for voice processing and to talk to the Zigbee hub of course; I do check the recordings periodically, and unless it's outright lying as to when it triggers (and the light does go on on the wake word) then it doesn't appear to be picking up stuff it shouldn't. But still not in the bedrooms, just in case!

The girls love being able to have an impromptu dance party in the living room with music, or have it tell em a BBC storytime. We like it just to have music on when cooking, or white noise when I'm studying, or just set a quick timer without getting flour on something.

I also get the ability to turn lights on and off when my hands are full - surprisingly useful daily, when we carry dinner plates through from the kitchen to living room. I can control 4 different light fittings around the garden as one group, including the main switch out in the garage - *before* I go outside in the dark and step on a slug. (a godsend when taking out the bins instead of tracking down where the kids have hidden the torch). I can turn down the lights in the living room from the couch instead of getting up and walking over to the back of the room for the dimmer switch. I have time-based lighting in my study so it's bright and cool in the morning to wake me up, and warm and low in the evening. It turns on a couple of motion-activated night lights at the plug so my daughter doesn't get scared going to the loo in the dark, and off again at sunrise because the built-in daylight detection is crap.

I also have a smart heating system; when we leave the house (tracked via smartphone app) it turns off the heating automatically, so I don't need a super complicated work-schedule timer to save money. I can still control it via the controller that is hard wired if needed, and the schedule is stored on the local hub so still works if the internet goes out. I do have different schedules for different rooms based upon wireless radiator valves. (it doesn't work if power goes out, but then neither does my boiler!).

Yes, if it all failed tomorrow it'd be somewhat annoying, but I intentionally picked stuff that talks directly to a local hub and we can still operate the same way we used to - manually. This should get easier to do with cross-platform Matter support. Yes, there's some small security risk if someone targets me directly, but being able to turn my LED light switches on and off remotely doesn't strike me as a hugely consequential problem, especially since worst case I can just disconnect it. But yeah, smart locks seem like a security flaw too far.

I very much doubt adding AI-driven development, making the software (more of) a black box to the developers would actually improve it any though.
posted by Absolutely No You-Know-What at 7:29 AM on October 18, 2022 [1 favorite]


Think about this not as generating a new Windows UI, but as generating a new Google search algo, a new CAPTCHA, a new rental price optimiser, etc. When the goal is not to create something that humans would consider good, all the objections upthread become benefits.

Personally, I am including Copilot learning traps in my code already: bits of code that, when copied and regurgitated without the full context, lead to infinite loops.

It's a pity that corporations are driving things in this direction. The experience of collaborating on software development with a strong, flexible type checker can be really neat.
posted by joeyh at 8:16 AM on October 18, 2022 [2 favorites]
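(For what it's worth, a hedged guess at what such a trap might look like; this isn't joeyh's actual code. In context the loop terminates, because the producer enqueues a sentinel; lifted out of context without that final put, the loop blocks forever.)

```python
import queue

STOP = object()   # sentinel the surrounding project is responsible for sending

def drain(q, handle):
    while True:
        item = q.get()        # blocks forever if STOP is never enqueued
        if item is STOP:
            return
        handle(item)

q = queue.Queue()
for n in (1, 2, 3):
    q.put(n)
q.put(STOP)                   # the "full context" a careless copier would drop
drain(q, print)
```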


All the AI stuff is still sold at a premium to non-AI stuff (except TVs, maybe), so I don't get the consternation about it. If you don't want it, don't buy it and save a bunch of money. Like, a regular light switch is $2.50, a regular thermostat is $15, and a vacuum is $35. There, I just saved you like $1000 vs a programmable one, a Roomba, and a Nest.
posted by The_Vegetables at 8:23 AM on October 18, 2022


I did not read TFA but think I have a good idea of the flavor from the pull quotes and the thread.

I'm with flabdablet. Wake me up when they stop talking about AI-generated code and start talking about AI to tease requirements out of users. Not losing any sleep over my programmer job security until then.
posted by Aardvark Cheeselog at 8:35 AM on October 18, 2022 [5 favorites]


The girls love being able to have an impromptu dance party in the living room with music, or have it tell em a BBC storytime.

Wait.

WAT?????

posted by Thorzdad at 8:45 AM on October 18, 2022


I really really want to know how we're going to design our software. It seems like the AI is going to be building software with absolutely no guidance on how software ought to be built, and I think it's only a matter of time before we're looking for more people with AI-building skills.

When you look at the big data analytics and machine learning projects going on today, and the state of software development in general, I see no way out of this but the introduction of new tools that codify the human-intuitive practices that are being lost. It's possible to code like a duck, and then turn around and design software as if you were writing a poem.

But it still makes me nervous to think about code that has no direct human input.

[written by GPT-NeoX]
posted by The Half Language Plant at 8:58 AM on October 18, 2022 [3 favorites]


Wait.

WAT?????


On the off chance this wasn't a joke: BBC.
posted by Dysk at 9:01 AM on October 18, 2022 [1 favorite]



The girls love being able to have an impromptu dance party in the living room with music, or have it tell em a BBC storytime.

Wait.

WAT?????


I have twin 7 YO daughters. "Mummy, Daddy, come to the living room". "Alexa, play Firework" combined with their sound-activated disco light. Wild Flailing ensues.

Also, an Echo skill: a BBC bedtime story when we don't let them watch TV (from the children's arm of the British Broadcasting Corporation).

Not at the same time, thankfully. We do read them real books at bedtime, we're not monsters!

On review, I suspect BBC means something else in the US? I probably shouldn't ask.
posted by Absolutely No You-Know-What at 9:13 AM on October 18, 2022 [2 favorites]


Hell yeah, I love to spend all my time learning to use a new tool instead of sitting down and getting stuff done, it’s great to be constantly asking “how do I do this basic thing that this new tool wants to do in some crazy way for no obvious reason”. Plus you get to find all kinds of new bugs! Maybe lose hours of work because of them!

I wrote a screenplay years ago in which this sort of dilemma was key. A utopian community devolving into a dystopia as those responsible for keeping its infrastructure functional kept either going mad or committing suicide ... as they tried to keep up with the whims of what turned out to be a sort of singularity-driven AI.

But that was all background. What was going on in the foreground was good old fashioned human sacrifice and cannibalism
posted by philip-random at 9:26 AM on October 18, 2022


I feel like there's a bit of failure of imagination in this thread. The kind of source code that GPT and the like produce is not really representative of the possibilities and utility.

The layers closer to the metal could be abstracted to models of execution environments within which generated object code could be evaluated for fitness in a GAN. This could be one portion of a multi-modal model which includes requirements parsing, object code generation and optimization (per requirements and optimization params), and decompilation to maintainable source code.

An IDE could include this functionality, along with an "AI" (sorry, I still feel the need to scare quote) generating other connective source code along with documentation. It's an extension of the development environment we've become accustomed to, where the developer is working at increasingly higher levels of abstraction. Except that with "AI" a whole bunch of this could both be optimized much more than before and the development process accelerated. The price would be much more obfuscation and less reliability, absent deliberate design to be otherwise. (Which I think it could be if those are priorities.)

On the tooling side, things like those linked in the OP could make the computing resources necessary for training much more widely accessible. In fact, as another of the links discusses, an "AI" could make the development of these tools itself more accessible.

I don't see software engineers becoming extinct, but I do see certain specialties in software becoming deprecated, just as relatively few people write assembly today.

I imagine a virtual machine carefully designed to be appropriate for this purpose.
posted by Ivan Fyodorovich at 9:27 AM on October 18, 2022 [1 favorite]


As with entropy, there are countless millions of ways to go from useful ⇒ useless, but only very few ways to go from useless ⇒ useful. Indeed, this wealth of ways to produce garbage seems to be the roaring engine of economic productivity, which transforms inherently useful resources like land, water and energy into baubles for the rich. And so, just as we are surrounded by industrial flotsam in the material world around us, I don't doubt that we will also be surrounded by industrial flotsam in the mindworld within us. AIs are simply the latest, and potentially the most efficient, means of converting useful energy into heat + derivative, lopsided baubles.

But I am optimistic. You can submerge the human spirit but not drown it altogether. Amid the shit there is still the subjective experience of creativity and learning and love, of striving towards beauty and truth. Art beyond algorithm. Software that actually works.
posted by dmh at 9:45 AM on October 18, 2022 [5 favorites]


Relatedly, LLVM or Rust+LLVM produce crazy slow WASM, which necessitates stunts like hand-written WASM. There's an awful lot of money chasing this problem too, but so far nobody has fixed the underlying LLVM issues.
posted by jeffburdges at 9:47 AM on October 18, 2022 [1 favorite]


As training GPT-3 costs like 3 GWh, we seemingly do not have anywhere near the right computing model for human-like machine learning goals. It'll be ironic if each billionaire who uploads themselves starves millions by sucking up the energy required to make the fertilizer to feed them.

As a species, we largely solve problems by throwing more energy at them, but doing so cannot scale even another 100 years, really not even a few more decades.


One big hurdle coming is the end of Koomey's Law. KL is a sort of parallel to Moore's Law: it describes the tendency of computation to become more energy efficient over time, with the amount of electric math you can squeeze out of one joule doubling every 1.5-2 years. Your latest smartphone is probably far more powerful than your first, but runs no hotter and has comparable battery life; you have, in part, the engineering advances predicted by Koomey's Law to thank for that.

Thing is, the Law is opposed by a hard thermodynamic limit called Landauer's Principle: any act of (traditional) computation must pay a minimum tithe of entropy. We've had a good 75-year run with Koomey, but the end is in sight: the doubling rate is starting to wobble, and estimates for the year we hit the Landauer limit range from 2048 to 2080, depending on how quickly the KL returns diminish.

Now, that's a good ways off, and it's possible (if not yet proven practical) for quantum computing architectures to ignore Landauer. However, a lot of the people proposing long-haul, civilization-reshaping systems right now (ML for everything, POW blockchain stuff for everything) don't seem to realize that the required increase in computation may not come with the proportionate increases in efficiency we've enjoyed up to now. We cannot continue to trade energy for convenience forever.
posted by Iridic at 9:54 AM on October 18, 2022 [7 favorites]
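(A small back-of-the-envelope version of the Landauer arithmetic above. Room temperature and the 1e-15 J per operation starting point are illustrative assumptions, not measurements.)

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # room temperature, K
landauer = k_B * T * math.log(2)
print(f"Landauer limit: {landauer:.2e} J per bit erased")   # ~2.9e-21 J

current = 1e-15               # J per irreversible operation today (illustrative)
doubling_period = 2.0         # years per efficiency doubling (rough Koomey rate)
years_left = doubling_period * math.log2(current / landauer)
print(f"~{years_left:.0f} years of doublings before hitting the limit")
```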


It seems like the AI is going to be building software with absolutely no guidance on how software ought to be built

So really no change then.
posted by Tell Me No Lies at 11:52 AM on October 18, 2022 [3 favorites]


The history of programming is one of self-automation, of code producing code. Beneath the hype, what is ML code generation other than just a fancier compiler? Programmers will compensate the way they always do, by expanding the complexity and scope of their projects.
posted by Pyry at 11:56 AM on October 18, 2022 [1 favorite]


A decade ago I saw a presentation on automated bugfixing by perturbing a fairly low-level representation of code until the new version passed the test suite. It had already found real-world fixes of old sticky bugs in some big OSS, one of which had been transposed back up into more-human-readable source.

I expected this to become more relevant to Big Buggy Useful Software with Good Regression Tests, leading to a world with more testers and fewer programmers. (William Kahan: "Don't code on your best days. You need those for debugging.")

(If this is in the "really interesting changes" in the above-the-fold, sorry, I am not having a video day.)
posted by clew at 11:57 AM on October 18, 2022 [1 favorite]


"this wealth of ways to produce garbage seems to be the roaring engine of economic productivity"

Maximizing entropy may be the organizing principle of life itself. (Not that I want to live through the equivalent of a Red Beds extinction, either.)
posted by clew at 12:00 PM on October 18, 2022 [1 favorite]


'Why would I want old software? Why wouldn't I want software that was generated this week?'

If I were a black-hat hacker or white-hat pentester, I'd SALIVATE over software generated this week. Zero-days galore! YOU have a zero-day and YOU have a zero-day and WE ALL HAVE ZERO-DAYS!

Ain't we learned nothing whatever from the last few years' supply-chain security incidents?!
posted by humbug at 12:15 PM on October 18, 2022


The history of programming is one of self-automation, of code producing code.

There is a very big difference between self-automation and code producing code. You can write code to write code to write code, but it begins with a human and at no point does the code just sort of figure out how to create the next layer.

Beneath the hype, what is ML code generation other than just a fancier compiler?

It is code that no human knows the how or why of its creation and may well be beyond human understanding. There is every reason to believe it would be undebuggable.

ML is great for arriving at squishy solutions to huge problems. It is horrible at tight accuracy.
posted by Tell Me No Lies at 12:18 PM on October 18, 2022 [3 favorites]


It is code that no human knows the how or why of its creation and may well be beyond human understanding.

Paging Dr. Susan Calvin! It's interesting (that is, terrifying) that robopsychology may actually be a thing.
posted by SPrintF at 1:39 PM on October 18, 2022 [1 favorite]


This kind of thing comes up every so often, and I'm just going to recycle a comment I made elsewhere for efficiency:
This morning's annoyance: watching people comment in a discussion about the future of technology to the effect that no one (especially kids) should be learning to code right now, because "AI and machine learning will make programmers obsolete within a couple of decades".

No, for fuck's sake, they won't.

This isn't a grumpy-old-man "They'll never tear down my buggy whip factory" rant, by the way. Look, I've been doing this a long time, and the promise of "coding without coding" (see: Visual Basic, et al.) or completely automatically generated code has always been just twenty or so years away as long as I can remember. It hasn't happened yet, and almost certainly won't happen even in the jetpacks-and-flying-cars version of the future the people who say these things seem to have in mind.

Now, I'm positive we'll get shiny and spiffy AI-based assistive tools the likes of which I can only imagine. Development tools are already a lot smarter than they were back in the 90s--I can send a tool off to introspect a database structure and generate code for interacting with it, for instance, with almost no intervention required on my part.

But the second anyone comes up with an AI tool that will be able to handle things like "The client hasn't fully specified the requirements, but the delivery deadline is next month", or "I know version 2.7 of this library fixes that bug we've been having, but upgrading from 2.6 breaks these three other things", or "We have no idea why this old code works, and the guy who wrote it quit/retired/died ten years ago"....I will parade myself down the middle of the nearest tech conference between the vendor booths, playing a flute made from Alan Turing's femur and wearing only a loincloth made from reams of tractor-feed printer paper.
posted by Mr. Bad Example at 1:43 PM on October 18, 2022 [10 favorites]


“Everyone, it seems, is willing to trade reliability away for other benefits. I don't see this trend reversing, only accelerating…”

I am an engineer in the software reliability field and I approve this statement.
posted by majick at 2:23 PM on October 18, 2022 [4 favorites]


Peter Watts continues to be prescient about the march of technology, at least in describing the evolution of self-modifying, replicating, gene-selecting autonomous agents where humans "grow" code fit for a purpose without understanding the specifics, only the phenotypes they want to select for and how many generations they're willing to monitor.

These are tools to reduce toil. One reason software is so precariously built today is the enormous tedium and cost of change as systems get larger. We convince ourselves that process or regulation or democratization will make that problem tractable, but it usually turns out it's a game of attrition: the last developer standing inherits the problems and the users reach comfort with the vagaries because most of the time their needs are mostly met. Then we throw it out and rediscover what problems persist that were quietly addressed by legacy code and what new problems we've invented that need new half-measure solutions.

Generated code could have a lot of patience and better memories than humans. Those incompatible libraries might get a generated shim until the compatibility issues are resolved; it might only cover exactly enough of the API to let you unblock and move forward. It might break in novel ways, and rather than try to understand it you just regenerate a new shim. In my world, this is a huge net win for both programmers and users. Even with a billion failure modes (automatically generated malware, supply chain attacks on the training data, on it goes).

The problem isn't SkyNet, it's regulating and innovating faster than Peter Thiel monopolizes and weaponizes how cheap it's about to be to create commodity software with even less oversight than the current paucity.
posted by abulafa at 2:33 PM on October 18, 2022


Ivan Fyodorovich: I feel like there's a bit of failure of imagination in this thread. The kind of source code that GPT and the like produce is not really representative of the possibilities and utility.

You'll go further and faster with pattern libraries and visualisation tools, even graphical data transformation lego-block tools, and test-driven interconnect contracts (aka schemas so we don't accept bad input in any component).
posted by k3ninho at 3:00 PM on October 18, 2022 [2 favorites]


It is code that no human knows the how or why of its creation and may well be beyond human understanding. There is every reason to believe it would be undebuggable.

I mean, this is something people said (still say) about optimizing compilers as well, that they produce assembly that's difficult for human beings to interpret and debug. Certainly, once you've seen LLVM turn a non-trivial loop into a closed-form solution, it gets harder to argue that existing compilers aren't doing black-box magic, at least from the perspective of the average programmer.
posted by Pyry at 3:04 PM on October 18, 2022 [6 favorites]


I can send a tool off to introspect a database structure

Aww. It's like Socrates said, "the unexamined life is not worth living".
posted by Pyrogenesis at 10:25 PM on October 18, 2022


the promise of "coding without coding" (see: Visual Basic, et al.) or completely automatically generated code has always been just twenty or so years away as long as I can remember.

So the last shall be first, and none shall last: for many be marketed, but few deliver.
posted by flabdablet at 5:20 AM on October 19, 2022 [1 favorite]


"the results will be all about the prompts…"

I don't particularly disagree with that: the quality of most software product is bounded at one end by the quality of the requirements. Ultimately the job of a software engineer is to translate requirements into product.

Of course, the product will be prompts.

Structured prompts that ensure the result is as similar to the requirement as possible.

Structured, possibly textual prompts.

Like the kind we feed to compilers.
posted by majick at 9:47 AM on October 19, 2022 [3 favorites]


Yeah, as a software engineer I could not be less worried about AI software replacing me. Reason being, I'm a software engineer.

As an engineer in AI, I'm even less concerned. Deep learning systems are trained with a variety of randomized processes, some of which we can stabilize by recording PRNG seeds, some of which we cannot eliminate without making the entire enterprise infeasible. Specifically, it turns out back propagation at scale is a distributed systems problem and distributing model weight updates is a massive bottleneck. The explanation I read just last night is that the chosen solution is to just admit race conditions will randomly perturb the final weights between training runs of the same model with the same hyperparameters, same training data, same test set.

This breaks so, so many things. As long as you are okay with software only acting correctly 97 percent of the time, and getting a different 3 percent of failures every training run, it's not an issue! For Gmail recommending canned responses to me, not a huge deal. For Gmail sending SMTP, losing 3 percent of the messages I send is a huge deal! Your only fix here is a massive regression test suite and the willingness to retrain until it passes. This will probably break your weekly update plan unless you further increase parallelism, which exacerbates the problem.
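A minimal sketch of the part that *can* be stabilized, namely recording PRNG seeds, using only numpy as a stand-in for a real training framework; the nondeterministic ordering of distributed weight updates described above is exactly what this snippet cannot capture.

```python
# Toy illustration of seed-controlled reproducibility. In a real distributed
# training job, two runs with the same seed can still diverge because update
# ordering is racy; here the "run" is fully determined by the seed.

import json
import random

import numpy as np


def seeded_run(seed: int) -> float:
    """A stand-in 'training run' whose randomness is entirely seed-controlled."""
    random.seed(seed)
    np.random.seed(seed)
    weights = np.random.randn(4)      # pretend these are model weights
    noise = random.gauss(0.0, 0.1)    # some framework-level randomness
    return float(weights.sum() + noise)


seed = 20221019
run_a = seeded_run(seed)
run_b = seeded_run(seed)
assert run_a == run_b  # identical here; a real distributed run need not be

# Record the seed alongside the result so the run can at least be labeled.
print(json.dumps({"seed": seed, "result": run_a}))
```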
posted by pwnguin at 10:31 AM on October 19, 2022 [4 favorites]


The explanation I read just last night is that the chosen solution is to just admit race conditions will randomly perturb the final weights between training runs of the same model with the same hyperparameters, same training data, same test set.

Interesting. It does seem like you could record the order in which the race conditions resolved, in case you wanted to recreate the exact process for a particular result later. I guess the big win there would be to use the seed and a checksum of the order of operations as a unique label for the results.
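A rough sketch of that labeling idea, with all names invented: log the order in which workers' weight updates actually landed, hash the log, and combine it with the seed to get a unique label for the resulting model.

```python
# Hypothetical run-labeling scheme: seed plus a checksum of the observed
# update order. Two runs with the same seed but different arrival orders
# get distinct labels.

import hashlib


def run_label(seed: int, update_log: list[tuple[int, int]]) -> str:
    """update_log is a sequence of (step, worker_id) pairs in arrival order."""
    digest = hashlib.sha256()
    digest.update(str(seed).encode())
    for step, worker_id in update_log:
        digest.update(f"{step}:{worker_id};".encode())
    return f"{seed}-{digest.hexdigest()[:12]}"


log_a = [(0, 1), (0, 2), (1, 2), (1, 1)]
log_b = [(0, 2), (0, 1), (1, 1), (1, 2)]
print(run_label(42, log_a))
print(run_label(42, log_b))
```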
posted by Tell Me No Lies at 1:59 PM on October 19, 2022


Adversarial machine learning always meant simply bypassing spam filters, or maybe extracting model data to violate privacy, expose company secrets, etc.  It becomes way deeper if you poison models to create software supply chain attacks. lol

If you ask who'd run a supply chain attack with so much collateral damage, then remember the NSA backdoored Juniper routers, OPM installed Juniper routers, China stole the backdoor, and then China hacked OPM.  OPM tracks all personal secrets of everyone with a security clearance, except CIA agents, so OPM knows every time they cheated on their spouse, etc.
posted by jeffburdges at 4:24 PM on October 19, 2022 [1 favorite]


Ivan Fyodorovich: I feel like there's a bit of failure of imagination in this thread. The kind of source code that GPT and the like produce is not really representative of the possibilities and utility.

k3ninho: You'll go further and faster with pattern libraries and visualisation tools, even graphical data transformation lego-block tools, and test-driven interconnect contracts (aka schemas so we don't accept bad input in any component).


I will admit to being biased here, because my passion project banks pretty heavily on a frontend-over-algorithm approach to enabling powerful computing applications—though on the flip side, the reason I'm placing such a heavy bet is because I've thought this through and come to some pretty staunch conclusions.

Procedural generation has been a pipe dream for game designers for decades. I don't mean "pipe dream" in the sense that it doesn't exist: I mean that there have been fantasies of games that basically generate themselves, in ways that make them competitive in terms of level design or storytelling or even gameplay itself.

And what game designers discover, over and over again, is that procgen can deliver compelling results, but only to the extent that you manually develop elegant systems architecture that determines where and when that work is used. Even to procedurally generate, say, a nice map, you wind up developing not one but a dozen different generative systems, each of which handles one particular component and is further designed to react to and anticipate data from all the others. At the end of the day, you have a system that more-or-less takes the exact same kind of work that just making the damn map would; it's just a little more flexible in terms of what it outputs.

Similarly, I've been obsessed with Vladimir Propp since I was 16, and expressed that passion through a lot of different procedural and algorithmic coding projects. And what I find, again and again, is that even with a rigorously mapped-out idea of fairy tale beats, you can essentially design two kinds of systems. One churns out things that roughly approximate fairy tale-ness, but are shapeless even as they adhere to all the rules. The other is more akin to Steve Jobs' "bicycle for the mind:" a tool to help guide human decisions, using its "procedure" to make complex thought-processes easier and more intuitive.

I am not an AI expert, but I get the sense that combating that shapelessness is akin to a "hard problem" in computing: you can emulate the surface pretty easily, but the inner workings are exceedingly complex. Philip Glass was asked to talk about an AI program that was designed to write minimalist compositions, and his feedback reflects basically all skilled feedback I've seen on all AI applications everywhere, including artists commenting on DALL-E 2: the shape is the hardest thing, and the internal workings of "shape" are extremely hard to handle algorithmically.

(One more example: the architect Christopher Alexander, whose early book literally defined the phrase "pattern language," provides in The Nature of Order an instance of choices which an architect has to make when composing a small room within a house. He demonstrates that, even if those choices seem enumerable and therefore computable, they're so inextricably contingent on one another that it's virtually impossible to "settle" on a choice, which remains true even if you pare down the variables to the point of absurdity. Non-hierarchical and highly contingent variables always lead to this phenomenon, which is the fundamental principle behind chaos theory; it's why a game like Go has been so much harder to design AI around than chess. And while Go was eventually surmounted, Go is an extremely simple and literally binary game that exists on a regular grid—and the fact that computers struggled with it anyway gets at why this particular kind of work just isn't "solvable" in any but the absolute simplest cases.)

Algorithmic creative modes work best in circumstances where shapelessness doesn't matter. Sometimes you want to generate something pseudorandom: I really enjoy using AI engines to generate random pixel art variations on a subject matter, because I'm bad at pixel art and love having a rotation of lo-fi things to look at and get ideas from. Sometimes you're just looking to make new connections, which is why chess and Go players will play against computers: their choices might trigger an insight in the player, and help them learn something for which nobody has a specific formal theory. Happy accidents abound.

But the AI applications I've seen in image editors are used for things like resampling, intelligent color mapping, and content-aware fills: things that are highly technically finicky, where you benefit from having a machine go through the thousand little nudges you'd otherwise have to go through yourself. It's like replacing paper spreadsheets with Excel: computers are best when you give them something rote and tedious to do, because computers are very fast. There are things which even the smartest computers can't do a fraction as well as even a half-trained human, though, and the most interesting software evolutions tend to have to do with thinking of new ways to give tools to those humans.

I've worked at social networks and start-ups, I've done some work in digital advertising and marketing, and I'm still occasionally asked to develop various kinds of "intelligent" content systems. And my conclusions are almost invariably the same: UX goes further than algorithm in all but highly-specialized situations. Even if you're Google, and have thousands of people working on one of the most complex algorithms known to humankind, an algorithm which hypothetically produces extremely simplistic results, it's hard not to return astonishing mediocrity. (Which is why Google has increasingly pivoted towards changing the UX of their search results, finding ways to analyze search strings and return robust human-designed data.)

As a programmer, the things that affect my line of work are GUI-based applications and other programming languages. Can a program give you sophisticated-enough tools to do a part of my job without being a programmer? Does another programming language offer serious enough advantages that it's worth abandoning one entire system for another? Those are ongoing concerns, and they have been concerns forever. And they, too, run into the fact that systems logic is extraordinarily complex—which isn't to say that you don't ever replace languages, or that user-friendly systems can't map onto a fair amount of what programmers do, but those changes happen slowly anywhere that's already seen significant investment. (Over time, if anything, I find that programmers drift backwards—I work with older technologies now than I did when I started programming. Not always, but often.)

AI wants to be an exception to all of this, but I harbor serious doubts that it ever will. Most of what we call "AI accomplishment" is, like I said, brute force. And brute force is really neat, but Moore's Law only gets you so far. Evolutions in human-oriented design get you a hell of a lot further.
posted by Tom Hanks Cannot Be Trusted at 8:12 AM on October 21, 2022 [7 favorites]


As a systems analysis/design person, I tend to agree, THCBT. The best technology serves to automate repetitive, tedious & error-prone human activities. The best systems serve to make it easier for actors within the system (humans, robots, technology) to do the right thing vs. the wrong (mistake) or easy (lazy) thing. This is why e.g. a guardrail that prevents you from driving off a cliff is better than a sign warning you about the cliff. It takes a human to design a system, because it takes a human to determine what's right. (Unfortunately the decision about what's right is often in the hands of people who do not have the best interests of the users of the system, or those affected by its externalities, in mind.) There's a lot of philosophy about technology condensed in this paragraph, which is really why anyone who's implementing systems and the technology that supports them needs a firm grounding in psychology, sociology and the humanities. That we don't do this is probably a big reason why everything is so fucked up right now, and that we intentionally don't do it, with this appalling separation of arts and science, is probably down to capitalism. Anyway y'all can go ahead and pooh-pooh this comment but it's true, and Chris Alexander was a saint, a saint I tell you.
posted by seanmpuckett at 8:31 AM on October 21, 2022 [4 favorites]


seanmpuckett: There's a lot of philosophy about technology condensed in this paragraph; which is really why anyone who's implementing systems and the technology that supports them needs a firm grounding in psychology, sociology and the humanities.
The Calvin Center for Robo-Psychology beckons.
posted by k3ninho at 2:56 PM on October 30, 2022



