Saturday, February 16, 2019

Now We Have Deep Fakes for Text

The ability to fake what humans do or produce is beginning to get scary – scary enough that the creators of a new text-generation algorithm aren't going to release it.
That quality, however, has also led OpenAI to go against its remit of pushing AI forward and keep GPT2 behind closed doors for the immediate future while it assesses what malicious users might be able to do with it. “We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, the charity’s head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”
To show what that means, OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI’s unfiltered nature. As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.
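The core idea behind generating text like this is surprisingly old: learn statistics from a corpus, then sample the next word from those statistics. GPT2 does this with a huge neural network, but a toy word-level Markov chain (a sketch of my own, not OpenAI's method; the corpus and seed below are made up) shows the principle – and why "tweaking" the training data toward positive reviews biases the output:

```python
# Toy word-level Markov chain text generator -- a minimal sketch of
# statistical text generation. GPT2 is a far larger neural model; this
# only illustrates the basic idea of sampling the next word from
# statistics learned from a corpus. Corpus and seed are invented.
import random
from collections import defaultdict

def build_model(text, order=1):
    """Map each sequence of `order` words to the words that follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, seed, length=10, rng=None):
    """Extend `seed` by repeatedly sampling a likely next word."""
    rng = rng or random.Random(0)
    out = list(seed)
    order = len(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-order:]))
        if not choices:
            break  # dead end: this sequence never appeared in training
        out.append(rng.choice(choices))
    return " ".join(out)

# A corpus skewed toward reviews yields review-flavored output.
corpus = ("this product is great . this product is terrible . "
          "i love this product . i hate this product .")
model = build_model(corpus)
print(generate(model, ("this",), length=8))
```

Scale the corpus up to a large chunk of the internet and swap the lookup table for a neural network, and you get both GPT2's fluency and its unfiltered-training-data problem in one package.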
I could see this eventually being used to generate non-fiction, like user guides. (There go the technical writing jobs.) Give it a spec sheet and it should be able to write a press release, so marketing copywriters are out of work too. Ditto writers of trash romance and science fiction. Welcome to the unemployed future. You can join your auto worker friends on the dole.
