The approaching tsunami of addictive AI-created content will overwhelm us
In March 2016, the Go champion Lee Sedol of South Korea squared off against Google/DeepMind's AlphaGo, a machine learning-based system that had already shown itself capable of beating Europe's best professional Go player. That last part was in itself remarkable: while computers have beaten top players at chess since Garry Kasparov lost to Deep Blue in 1997, the game of Go, played on a 19×19 grid with simple black and white stones (and don't, ever, look at a Go board and say to the players "oh, is that Othello?"), had eluded computer analysis. It's far more complex than chess. A Go program had never beaten a professional in an even game.
That all changed. AlphaGo crushed Lee: in the five-game match, he won just one game. That made him the only human ever to beat the program, but it wasn't much compensation. There's a terrific film about the whole enterprise, and watching it you get a glimpse of why Lee lost. When you play a human, you can see them looking at different parts of the board; their body language betrays how they feel about a move you've made, or that they've made. Anxiety, cockiness, surrender: we can read them as we read the board.
Against the computer, Lee had none of those indicators. The implacable machine just spat out moves to which he had to respond, with a strategy that was literally beyond human understanding. He was a man trying to discern meaning in a brick wall. The asymmetry was heightened in the second game, when the 37th move, by AlphaGo, stunned Lee and every human observer for its audacity and improbability. (If you play Go, there’s some explanation here.)
“Move 37”, as it became known, marked a key moment in machine learning, because it really was a point where the system went beyond what humans could devise.
I sometimes wonder how often companies that rely on future technologies get their smartest people together in a room to sketch out future scenarios. Because they must do, right? When Apple came up with the first iPod, there was a certain sketchiness to it: the product was put together in a matter of months. But immediately after that, the march of product improvement (smaller form, then flash storage, then no screen, then touchscreen) showed that the executives must have sat down in a room and mapped out what would be within reach, both financially and technically, as the years rolled on.
So let’s do the same, but for machine learning and content. We’ll just put a bunch of elements here and see what shakes out.
• TikTok rose to more than a billion users faster than any content network before it. (It's not really "social": you don't intentionally follow people, you follow content, rather like YouTube.)
• TikTok decides what to show you based on incredibly sophisticated algorithmic observation of what you do and don’t spend time on: do you pause on this video, swipe past that video, and so on. It takes perhaps a few hours to decide what will keep you glued to the screen.
• TikTok, again, because it’s important, has colossal engagement. The average user spends 52 minutes on it per day, more than 6 hours per week, 90% use it daily, 60% use it more than 10 hours per week.
• Midjourney, an AI-based graphics app, can produce stunning graphics from a short textual description of what you want it to show. Andres Guadamuz posted a Twitter thread about how he went from four OK-ish frames generated from the prompt "futuristic city under a dome digital art deviantart high detail high definition octane render" to a final image he was happy with.
You have to agree that's very impressive. "We have invented magic spells", commented Alex Hern, the Guardian's UK Technology Editor, which is a very apt way of putting it. If you've ever been through the agony of discussing a book's cover design, you'll see immediately that this looks like a fantastic way of generating lots and lots of options. That final image looks as though it could be used on any number of science fiction covers, content irrelevant.
• There’s a new open source tool for generating video from a text input. Early stages, but within a few years it might be doing what MidJourney is doing in the above section.
• We have already built algorithms that are really powerful at steering people down a path because we set their targets slightly wrong. This is what I wrote in Social Warming about algorithmic problems. (Breakout was used by DeepMind to train one of its first AIs; the "book algorithms" competed on Amazon to sell the same book while ignoring price; the aircraft carrier system was an evolved AI that stopped planes on landing but killed the pilots.) YouTube discovered that it was causing radicalisation among its viewers because its targets were set not on quality, but on time spent.
• In 2016 an AI wrote a screenplay – a pretty bad one – and it’s even been made into a short film (definitely a bad one).
• GPT-3 is able to write prose that's pretty hard to distinguish from what most people write. It might be wrong, but that's no different from most people either.
• In late 2017, there was a mild panic about YouTube channels aimed at kids which seemed to generate their bizarre content from some sort of algorithmic system that looked at what similar channels were doing and turned it up to 11.
• Facebook has put a bot, BlenderBot 3, on the open internet; the bot insists that Trump is still president and that Mark Zuckerberg should be in prison. It gets its content from "the internet", which tells you a bit about "the internet's" relationship with the truth. Facebook says it's going to leave it online.
• GANs—generative adversarial networks—pit two systems against each other to produce better and better outcomes: like two people having a productive argument, where one says "does this look like it?" and the other says "no, the ears should be rounder and the jawline stronger, like this", and they go around and around until further changes make no difference. They can produce convincing fake faces, as seen at This Person Does Not Exist.
You can also create a fake who is like you, but isn’t you, at Generated.Photos. Its purpose is to “give people an idea of your appearance, while still protecting your true identity.”
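That adversarial back-and-forth can be sketched in toy form. What follows is a hypothetical, stripped-down illustration, not how This Person Does Not Exist actually works: a one-parameter "generator" learns to mimic samples drawn from a normal distribution, while a logistic "discriminator" tries to tell real samples from fake ones, and each update nudges the other.

```python
# Toy GAN sketch (illustrative only). Real GANs use deep networks,
# but the adversarial loop has the same shape: the discriminator
# learns to spot fakes, the generator learns to fool it.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator: fake = g_mu + g_sigma * z, z ~ N(0, 1). Starts far from
# the "real" data, which is drawn from N(4, 1).
g_mu, g_sigma = 0.0, 1.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability x is real.
d_w, d_b = 0.0, 0.0
lr = 0.05

for step in range(3000):
    real = random.gauss(4.0, 1.0)
    z = random.gauss(0.0, 1.0)
    fake = g_mu + g_sigma * z

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * ((1 - p_real) * real - p_fake * fake)
    d_b += lr * ((1 - p_real) - p_fake)

    # Generator step: push D(fake) toward 1 ("no, make it rounder").
    p_fake = sigmoid(d_w * fake + d_b)
    grad = (1 - p_fake) * d_w   # gradient of log D(fake) w.r.t. fake
    g_mu += lr * grad
    g_sigma += lr * grad * z

# The generator's mean should have drifted toward the real mean of 4.
print(round(g_mu, 1))
```

The argument ends when neither side can improve: once the generator's output distribution matches the real one, the discriminator's best guess is a coin flip and the gradients vanish.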
• Companies whose whole business is built around capturing attention
• AI systems capable of producing limitless amounts of content
• AI systems capable of producing believable-looking pictures of humans (and lots of other things)
• Algorithmic systems which will pick content that humans find compelling
• Humans who like spending time watching content they find compelling
• GAN-generated photos already being used for fake profile pics for marketing or, worse, disinformation and espionage.
• You could hook up GPT-3 to Midjourney and get it to try incantations to produce pictures, and feed the output to GANs tuned to pick output that humans will like
• Once that’s working, try doing the same with the text-to-video generator hooked up to GANs tuned to pick output that humans will like
• Don’t worry if people can spot that they’re algorithmically generated. It’s early and they’ll improve. Fast. AlphaGo went from zero to beating a top-ranked champion in two years.
• The system might be able to unlock things that we don't even know exist. The best Go players in the world gawped at Move 37 in the second game: it just wasn't a thing a human at that level would have considered. Put all these untiring systems together, constantly pumping out content more and more tuned to keep you (and everyone else) hooked, and we might be witnessing a colossal change in how content is produced.
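To make the loop those bullets describe concrete, here's a hypothetical sketch in Python. Everything in it is a stub (there are no real GPT-3 or Midjourney calls, and the "viewer" is simulated), but it shows the shape: generate candidate content, pick what the scorer expects will hold attention, then fold observed watch time back into the scorer so the system homes in on whatever keeps eyes on the screen.

```python
# Hypothetical content-engagement feedback loop: generate, show,
# measure dwell time, update. Names and numbers are invented.
import random

random.seed(1)

STYLES = ["cute animals", "space vistas", "fail compilations", "cooking"]

def generate_candidate(style):
    """Stub for a generative model: returns a (style, variant) item."""
    return (style, random.random())

def simulated_watch_seconds(item):
    """Stub for a real user: this fake viewer secretly loves space vistas."""
    style, variant = item
    base = 30.0 if style == "space vistas" else 5.0
    return base + 10.0 * variant

# Engagement scorer: running average of observed watch time per style.
score = {s: 10.0 for s in STYLES}
counts = {s: 0 for s in STYLES}

for step in range(500):
    # Mostly exploit the best-scoring style; occasionally explore.
    if random.random() < 0.1:
        style = random.choice(STYLES)
    else:
        style = max(STYLES, key=lambda s: score[s])
    item = generate_candidate(style)
    watched = simulated_watch_seconds(item)
    counts[style] += 1
    score[style] += (watched - score[style]) / counts[style]

best = max(STYLES, key=lambda s: score[s])
print(best, round(score[best], 1))
```

Within a few hundred simulated views the loop locks onto whatever the viewer lingers on longest. Swap the stubs for real generators and real dwell-time measurements and you have the machine the bullets describe, which is precisely the worry.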
One of the lessons I absorbed from a few decades of technology journalism is that conceiving what will happen when things scale up is really, really difficult. We can see a lone tree and grasp it; but imagining how a forest of them will change the ecosystem is incredibly hard. The iPhone and Android made it easy to get email out of the office! But they also prompted an explosion of apps. Which created a new economy of people making apps. Which encouraged apps that weren’t restricted just to doing things on the phone, but were useful in the physical world, such as Uber. Meanwhile, the connectedness meant that photos and videos could be uploaded and even streamed—for good, for bad.
The point being that all the disparate bits above might look like, well, disparate parts, but they're available now (and that's without mentioning deepfakes). The trees are here, and the forest might be starting to take shape. Here's an example: a 40-page comic book about monsters, free for download (PDF), by Steve Coulson, in which all the images are drawn by Midjourney. It's very, very impressive.
I suspect in the future there will be a premium on good, human-generated content and response, but that huge and growing amounts of the content that people watch and look at and read on content networks (“social networks” will become outdated) will be generated automatically, and the humans will be more and more happy about it.
In its way, it sounds like the society in Fahrenheit 451 (that’s 233ºC for Europeans) though without the book burning. There’s no need: why read a book when there’s something fascinating you can watch instead?
Quite what effect this has on social warming is unclear. Possibly it accelerates polarisation, but rather like the Facebook Blenderbot, people are just segmented into their own world, and not shown things that will disturb them. Or, perhaps, they’re shown just enough to annoy them and engage them again if their attention seems to be flagging. After all, if you can generate unlimited content, you can do what you want. And as we know, what the companies who do this want is your attention, all the time.
Remember Arthur C. Clarke’s comment that “any sufficiently advanced technology is indistinguishable from magic”. The magic is among us now, seeping into the everyday. The tide is rising. But the real wave is yet to come.
• You can buy Social Warming in paperback, hardback or ebook via One World Publications, or order it through your friendly local bookstore. Or listen to me read it on Audible.
You could also sign up for The Overspill, a daily list of links with short extracts and brief commentary on things I find interesting in tech, science, medicine, politics and any other topic that takes my fancy. Currently on a short break; back on Monday August 22.
via Stephen’s Web ~ OLDaily http://www.downes.ca/
September 12, 2022 at 07:10PM