Artificially Speaking

July 9, 2019

Editor's note: This blog was first published on February 19, 2019.

I’m really trying my best not to obsess over the advances scientists and researchers are making in artificial intelligence (AI). But when I see headlines on websites like IFLScience such as “This AI Tool Is So Terrifying, Its Creators Don’t Want To Release It,” it’s difficult not to be concerned. I’ve blogged a few times about the benefits and possibilities AI can afford, and once about how it wrote a really bad Harry Potter chapter.

All seven of the Harry Potter novels had been fed into an AI-powered predictive text keyboard. The keyboard then suggested word combinations based on the text it had absorbed. The resulting chapter was titled, “Harry Potter and the Portrait of What Looked Like a Large Pile of Ash.”
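For the curious: the article doesn’t say how that keyboard worked under the hood, but the general flavor of predictive text can be sketched with a simple word-level Markov chain. This is a minimal illustration, not the actual tool, and the tiny corpus below is a hypothetical stand-in for the seven novels:

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` words to the words that follow it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def suggest(model, prompt, n=3):
    """Return up to `n` candidate next words for the prompt, most common first."""
    key = tuple(prompt.split()[-2:])
    candidates = model.get(key, [])
    ranked = sorted(set(candidates), key=candidates.count, reverse=True)
    return ranked[:n]

# Toy corpus standing in for the actual novels (hypothetical placeholder text).
corpus = "Harry looked at Ron . Ron looked at Hermione . Harry looked at the castle ."
model = build_model(corpus)
print(suggest(model, "Harry looked"))  # e.g. ['at']
```

A predictive keyboard surfaces a handful of ranked suggestions like this and lets a human pick each next word, which goes some way toward explaining how the chapter ended up so strange.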

Some passages were as follows:

“To Harry, Ron was a loud, slow, and soft bird. Harry did not like to think about birds.”

“The tall Death Eater was wearing a shirt that said ‘Hermione Has Forgotten How to Dance,’ so Hermione dipped his face in mud.”

“The castle grounds snarled with a wave of magically magnified winds. The sky outside was a great black ceiling, which was full of blood.”

“Leathery sheets of rain lashed at Harry’s ghost as he walked across the grounds towards the castle. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family.”

“Harry tore his eyes from his head and threw them into the forest.”

I know. It’s awful. The thing is, AI is getting smarter. Which brings me back to the “terrifying AI tool.”

According to IFLScience.com, researchers at OpenAI in San Francisco have developed a staggeringly advanced text-generating algorithm. Those researchers believe the AI is so capable that they can’t let it loose, because they’re afraid of what it could accomplish in propagating fake news.

The way I understand it, the AI technology, called GPT-2, can create complete articles after it has been given a short prompt by a person.
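For readers who want to see what prompt-driven generation looks like in practice, here is a minimal sketch. It assumes the small GPT-2 model that OpenAI did make public, loaded through the Hugging Face transformers library; that tooling is my choice for illustration, not something the article mentions:

```python
# A minimal sketch of prompt-driven text generation with the small public GPT-2.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output varied but coherent.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each run samples a different continuation; the model simply keeps predicting a plausible next word given everything before it, which is why the output can sound fluent even when it’s baloney.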

According to IFLScience.com:

“The result is a finished piece that sounds perfectly plausible – but is, in actual fact, complete baloney. The team described their research in an as yet non-peer reviewed paper online.”

Here’s an example piece; the human-written prompt is the first paragraph:

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.”   

It’s not the best writing. But it’s good enough that researchers are scared of what it could do in the service of online scams and fake news. Keeping the GPT-2 AI behind closed doors sounds like a great idea to me. And I think we’ll be safe while AI is still only capable of doing exactly what it’s been told/programmed to do. That is, until someone decides it’s a good idea to tell the AI to go ahead and do whatever it feels is the right thing to do. Who knows what will be unleashed when that switch is flipped?