
OpenAI: A Newly Developed AI Model Too Risky For Release

20 February 2019

An artificial intelligence algorithm has been shown to achieve near-human results in text generation. How safe is your job? The potential for misuse is already alarming.

Another day, another amazing artificial intelligence or machine learning breakthrough. The future is advancing at a faster pace than many of us anticipated.

 

Backed by Elon Musk and others, OpenAI is a non-profit artificial intelligence (AI) research company tasked with promoting ethical use of AI by the public. It recently built a new AI-driven language model, dubbed GPT-2. Given a few lines of seed text, the model can generate realistic text in different styles, such as web content and news articles. The model is unsupervised, meaning it can operate on any text without having been trained on that specific text.
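
To make the seed-text idea concrete, here is a minimal sketch of generating a continuation with the smaller GPT-2 checkpoint that OpenAI did release, using the Hugging Face transformers library. The library, the model size and the prompt are my own choices for illustration, not part of the original announcement.

```python
# A minimal sketch: feed GPT-2 a few words of seed text and let it continue.
# Assumes the publicly released small GPT-2 checkpoint, the Hugging Face
# "transformers" library and PyTorch are installed -- none of these is named in the post.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

seed = "Artificial intelligence will change the way we write because"  # hypothetical prompt
input_ids = tokenizer.encode(seed, return_tensors="pt")

# Sample a continuation: the model keeps predicting the next token until max_length.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Sampling from the top-k candidates, rather than always taking the single most likely word, is what keeps the output from repeating itself.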

 

The OpenAI research team demonstrated the current ability of machine learning in text analysis and automated generation. They fed a massive compute array with a large set of web pages from the internet (about 8 million pages) on various topics. Given a few lines of text, the model was tasked with guessing the next word in the sentence, then the next, and the next.
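
That "guess the next word, then the next" loop can be sketched explicitly. The snippet below is an illustration under the same assumptions as above (released small GPT-2 weights, Hugging Face transformers, PyTorch); it is not OpenAI's training code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Another day, another amazing machine learning"  # hypothetical seed text
input_ids = tokenizer.encode(text, return_tensors="pt")

# Repeat the model's core task: predict the next token, append it, predict again.
with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits        # shape: (1, sequence_length, vocab_size)
        next_id = logits[0, -1].argmax()        # most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```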

 

What was the result? Check the following example: a few lines for a start, followed by the computer-generated new text, which is surprisingly readable, original, creative and comprehensible. For more jaw-dropping examples, check the link at the bottom for the original OpenAI blog post.

While the text is not perfect—it requires a few attempts to generate good content and some real-world knowledge is still lacking—it’s only a matter of time before the technology significantly improves.

 

In fact, the results are so close to human quality that OpenAI has decided to act responsibly and release only some elements of the model, at least for the coming months, and to invite open discussion on the matter. The training dataset and the trained weights, the “secret sauce” of the recipe, remain with OpenAI for now.

 

A machine capable of creating endless text offers boundless potential: better speech-to-text engines, chatbots, and AI writing assistants.

Yet, the malicious potential is just as important to consider. For example:

 

  • A large-scale fake news generator. Imagine an army of bots tweeting on certain topics (“candidate X for role Y is a dangerous man because…”), each writing in a different style and using an original argument.

  • Fake user reviews and comments that appear real and trustworthy. In fact, the OpenAI researchers demonstrated how GPT-2 can write convincing Amazon reviews, per category and number of stars (one star: poor, five stars: great).

  • Automated production of spam, phishing and the like. Imagine the next stage, where the learning loop closes: GPT-2 or an equivalent algorithm is benchmarked by the success of the spam it generates, and it learns to produce the perfect spam, one that can bypass your email filters so that you are more likely to open and read it.

  • An SEO machine, producing text that can conquer the top search results.

 

Ironically, OpenAI was established to enhance public responsibility over AI, as opposed to the typical control of AI technology by commercial tech giants. And yet, OpenAI has rushed to declare that it will not release the trained model and will keep it private for now.

 

On the face of it, OpenAI is indeed acting responsibly. But isn’t it a double-edged sword? The large players, which have the greatest potential for malicious use of such technology, also have the resources to train the model for their own use. They are not waiting for the open discussion. No doubt, while you are reading this, thousands of computers all over the world are crunching the released GPT-2 model, in search of the right weights and formulas that can unleash its powers.

 

But all is not lost. There will always be a need for talented writers like Haruki Murakami or Yuval Noah Harari. Let me leave you with one exceptional excerpt, one that was not created by a machine:

“It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way”

 

An update (10 March 2019):

The Allen Institute for AI has released an online text generator, based on GPT-2.

The generator lets you write a few words, and the algorithm then offers the next word. Though the generator is a limited version of GPT-2, it produces surprisingly coherent text. If you don’t like the text, or the algorithm enters a loop, you can redirect the content as you go: simply select another word from the list, or add new words of your own, and within seconds you’ll have a perfect paragraph.
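
In principle, that “offer the next word” behaviour can be approximated with the released GPT-2 weights by listing the model’s top-ranked candidates for the next token. The sketch below assumes the Hugging Face transformers library and PyTorch; the Allen Institute demo does not necessarily work this way under the hood.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The future of writing is"  # hypothetical partial sentence typed by the user
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Score every token in the vocabulary as a possible next word
# and present the five most likely candidates, autocomplete-style.
with torch.no_grad():
    next_token_logits = model(input_ids).logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for token_id in top.indices:
    print(repr(tokenizer.decode([int(token_id)])))
```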

This technology goes far beyond your smartphone’s auto-complete function: it’s next-level innovation.

Try it for yourself!
