Nick Bilton, writing for Vanity Fair, updates us this week on the progress of Artificial Intelligence in areas normally thought to be safe bastions of human creativity: journalism, literature and the arts:
Robots, machine learning, and other A.G.I.s (artificial general intelligence) will soon be writing daily news stories about stocks, sports scores, weather-related events and a variety of breaking news alerts. In the next decade or less, artificial intelligence will make huge strides in their capability to produce high-quality written news. This will bring an end to the era of the journalist as we know it. The difference between A.G.I.s and journalists is the ability to comprehend and absorb information, not just to report it. An A.G.I. will be able to understand the “what” of a story, as well as its “why,” without breaking a sweat over details, because it simply doesn’t care about them in terms of story-writing.
Interesting, even a little bit captivating in a dry sort of way, huh? Except Bilton didn’t write the above paragraph. It was written by an AI program called Shortly AI. Bilton simply tasked the program with writing about algorithms replacing journalists and artists, and this was what it came up with. Some of us have already seen the emergence of this type of “bot” writing in analyses of sports, stocks and financial information, where the basic facts are usually fairly cut-and-dried. Right now they’re fairly easy to spot.
But AI is learning how to be more creative. Bilton uses an example from Sudowrite, a creative writing app touted as being able to “bust writer’s block” by utilizing OpenAI’s platform. In that example, Bilton simply typed the first sentence of a proposed short story (in bold, below) and the program did the rest, using a process Bilton describes as “word-scraping” and natural language processing.
It was late when I first heard the loud clicking noise coming from outside. As I looked out of my bedroom window, the tall grass swayed in an unseen breeze. And then, a shadow passed over it—which is when I saw it. A creature standing just outside the window, staring right at me. Its eyes were deep red with a venomous glow; they seemed to burn with a fire that made me shudder. The creature’s body was black as coal, with a large, thin tail near its rear. It hissed at me and tapped its long, sharp claws impatiently on a tree trunk. I grabbed the nearest weapon I could find—a baseball bat—and headed outside. The creature’s breath reeked of sulfur, and its hushed breathing carried an echo of a crackling campfire.
Not exactly world-class literature, but probably sufficient for a bare-bones screenplay treatment of a Stranger Things episode, at least. (For his article, Bilton also tasked the Shortly AI program with finding him a quote from a reputable researcher on his article’s topic. The algorithm settled on a Columbia University AI scientist named Hod Lipson, combed through all his known work, including videos, and produced a quote that Bilton included in his final Vanity Fair piece.)
This is actually Bilton’s writing:
While no one can agree on exactly when the robots will take over, or how many jobs they will swallow up, the assumption has generally been that garbage collectors, bus drivers, and interstate truckers will be among the first to lose their livelihoods to A.I. Lately, however, it’s starting to look as if people like me—creatives—are even more imminently in danger. Over the past few months, new advancements in A.I. have made it clear that writers, illustrators, photographers, journalists, and novelists could soon be driven from the workforce and replaced by high-tech player pianos.
Artificial intelligence has also begun to encroach on the more traditionally subjective visual arts. A platform called DALL-E 2 is described as a “new AI system that can create realistic images and art from a description in natural language.” When you type in a suggestion (“Elon Musk riding a horse”), the algorithm essentially combs through billions of artistic renderings and images and produces a result to your specifications, even in a certain requested “style,” such as that of Monet or Andy Warhol.
As Bilton writes:
For example, if you ask it to draw “an astronaut riding a horse in a photorealistic style,” it will create several options to choose from. If you tell it to instead make a “pencil drawing,” it will render new images in that style. You can order up stained glass, spray paint, Play-Doh, cave drawings, or paintings in the style of Monet. You can replace the astronaut with a teddy bear. A dog. Elon Musk. Or have the horse riding a horse. The possibilities are endless, and the end results are terrifyingly impressive—so much so that one of the top questions associated with a Google search of the platform is “Is Dall-E fake?”
Bilton references a report by McKinsey & Co. projecting that AI will replace 45 million American jobs by 2030. The co-founder of Sudowrite, however, believes that creative-writing AI will not completely replace humans but simply “complement” them, much like an advanced form of Google autocomplete, which finds the words it “thinks” you intend and, sometimes unexpectedly, inserts them into your text. As Bilton notes, AI (currently) relies almost entirely on what humans have done in the past, so instances of racism and sexism typically pop up (“flight attendant” will automatically conjure up the image of a woman, and “CEO” will usually be a white male). This is one of the reasons some of the most advanced AI programs have not been made commercially available. Bilton cites Google’s “societal impact and limitations” policy statement about its state-of-the-art image-generation AI, Imagen:
The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups.
Bilton observes, however, that once such services are released to the public, “all bets are off.” He believes the ability to create convincing fake images and stories on a mass scale through AI could have a disastrous impact on our society and politics. (Bilton suggests we recall Russia’s efforts to distort social media during the 2016 election as a primitive example of how such technology can be used for social control, then imagine those efforts multiplied a hundredfold by AI in the hands of QAnon and 4chan users.) Bilton himself seems to doubt that the creators of these technologies, despite their apparent reticence to make them publicly available, have a good grasp of just how harmful their misuse could be.
Nevertheless, the technology marches on, whether we asked for it or not.