OpenAI GPT-2

On Thursday, a pair of computer science master's graduates rolled out an AI text generator based on GPT-2, the Elon Musk-backed OpenAI program that the company withheld from public release, citing concerns over its societal impact.

However, the two researchers, Aaron and Vanya, believe that the software does not pose any risk to society — not yet. According to Wired, the duo wanted to prove that anyone can develop such software, regardless of their financial resources.

To replicate GPT-2, the duo used $50,000 worth of free cloud computing from Google. They also fed the machine-learning software millions of web pages, gathered by following links shared on Reddit.

Just like OpenAI's GPT-2, the newly created software learns the statistical patterns of language and can be applied to many tasks — translation, chatbots, answering questions, and more. However, the most alarming concern among experts has been the generation of synthetic text and, consequently, fake news.

David Luan, vice president of engineering at OpenAI, once told Wired, "It could be that someone who has malicious intent would be able to generate high-quality fake news." Owing to this and other dangers, the OpenAI team decided to withhold the full model, though it did publish a research paper.

This is not the first re-creation of GPT-2. A few people have already released language models online based on the OpenAI software. Of course, these are not the original model trained on "8 million web pages"; they build on the smaller versions OpenAI has released. You can try them out yourself.

While they are fun to play around with, they don't reliably produce logical statements. Wired, which tested both the original GPT-2 and the new model, writes, "Machine learning software picks up the statistical patterns of language, not a true understanding of the world."
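That phrase — "statistical patterns of language" — can be made concrete with a toy example. The sketch below is not GPT-2 (which uses a large transformer neural network); it is a minimal bigram Markov model, the simplest kind of statistical text generator, included only to illustrate the idea that such software predicts likely next words from frequency patterns rather than from understanding. All function names here are illustrative, not part of any real library.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the corpus.

    Repeated successors stay in the list, so common continuations are
    proportionally more likely to be sampled — that's the "statistics".
    """
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=10, seed=0):
    """Walk the bigram table, picking a random observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break  # dead end: this word never had a successor in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = ("the model picks up patterns of language "
          "the model does not understand the world")
table = build_bigrams(corpus)
print(generate(table, "the"))
```

The output often looks superficially fluent yet means nothing coherent, which is exactly the failure mode Wired describes — just at a vastly smaller scale than GPT-2.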
