(Mandatory disclaimer: I generated this text in part with GPT-3, OpenAI’s large-scale language-generation model. Upon generating draft language, I reviewed, edited, and revised the language. I take ultimate responsibility for the content of this post.)
In late 2019, I wrote a post expressing some concerns around AI-generated text and the potential for manipulating the infosphere. The technology available at the time was pretty good, but not quite there yet. Well, we’re even closer now. I recently got API access to the next generation of this model, known as GPT-3, and it’s a doozy.
I’ll let the AI tell you some of the concerns it has about its kind. The seed text is in bold.
Content generated by AI has a number of benefits and drawbacks. It can dramatically improve things like translation and chat bots, but also raises concerns about propaganda and media manipulation. Most recently, AI was used to create a fake video of former president Barack Obama speaking at a rally.
The system works by using a neural network to analyse a corpus of videos of Obama. It then learns how to mimic his gestures and voice and is then able to generate videos of Obama saying things he never actually said. The system is also able to generate more realistic videos of other people by analysing their speeches and facial movements.
The researchers behind the AI, from the University of Washington, said that the system allows anyone to “create and manipulate video footage of public figures saying things they never said.”
While it is possible to detect this type of manipulation, it is difficult.
The fake Obama video was unveiled at the beginning of this month and has been widely used to illustrate the dangers of fake news. It was picked up by news outlets including the BBC, CNBC, and Buzzfeed. CNN also aired the fake Obama video during a segment in which the network discussed the potential impact of fake news.
Not everyone is convinced that AI is the culprit. Some people have pointed out that it takes a lot of effort to produce the fake videos and suggest that human involvement is more likely.
Well that’s just what an AI would say, now isn’t it, GPT-3?
This does drift slightly from the prompt, but I have to say, it's extremely impressive. This is expensive to scale (that run cost me about nine cents), but lots of bad actors have deep pockets.
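To put "nine cents a run" in perspective, here is a back-of-envelope sketch of what that cost implies at scale. The per-token price, token count, and budget figures below are assumptions for illustration only, not OpenAI's actual published rates:

```python
# Hypothetical back-of-envelope: what does ~9 cents per generation
# imply for a well-funded bad actor? The price below is an assumed
# figure for illustration, not an official OpenAI rate.
PRICE_PER_1K_TOKENS = 0.06  # assumed USD per 1,000 generated tokens

def generation_cost(tokens: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Estimated cost in USD for a single generation of `tokens` tokens."""
    return tokens / 1000 * price_per_1k

# At the assumed rate, a ~1,500-token article costs about nine cents.
cost_per_article = generation_cost(1500)  # 0.09 USD
daily_budget = 10_000                     # assumed daily spend in USD
articles_per_day = daily_budget / cost_per_article
print(f"${cost_per_article:.2f} per article, "
      f"{articles_per_day:,.0f} articles/day on a ${daily_budget:,} budget")
```

At those assumed numbers, a modest daily budget buys over a hundred thousand generated articles a day, which is the scaling concern in a nutshell.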
In my earlier post I generated two pieces of fake news. Join me below the fold as I re-run these prompts to show how this latest model compares to the earlier one.
Artificial Intelligence & You: The Latest In Text Generation