Ethics

Two grads recreate OpenAI’s text generator it deemed too dangerous to release


Two graduates have recreated and released a fake-text generator similar to the one that OpenAI, the startup co-founded by Elon Musk, deemed too dangerous to make public.

Unless you’ve been living under a rock, you’ll know the world already has a fake news problem. In the past, at least fake news had to be written by a real person to make it convincing.

OpenAI created an AI that could automatically generate fake stories. Combine fake news with Cambridge Analytica-like targeting, and the general viral nature of social networks, and it’s easy to understand why OpenAI decided not to make its work public.
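OpenAI’s model (GPT-2) is a large transformer-based language model; its details are beyond a news piece, but the basic idea behind statistical text generation — learn which words tend to follow which, then sample new words one at a time — can be illustrated with a toy Markov-chain sketch. This is a simplified stand-in, not OpenAI’s actual method, and all function names here are illustrative:

```python
import random

def build_model(text, order=2):
    """Map each pair of consecutive words to the words seen following it."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, length=20, seed=0):
    """Walk the chain from a random starting pair, sampling one word at a time."""
    rng = random.Random(seed)
    key = rng.choice(sorted(model.keys()))
    output = list(key)
    for _ in range(length):
        choices = model.get(tuple(output[-2:]))
        if not choices:
            break  # reached a pair with no observed continuation
        output.append(rng.choice(choices))
    return " ".join(output)

# Tiny illustrative corpus; a real system trains on billions of words.
corpus = ("the model reads large amounts of text and learns which words "
          "tend to follow which words and the model then samples new words "
          "one at a time to produce text that mimics the original style")
model = build_model(corpus)
print(generate(model))
```

Transformer models like GPT-2 replace the fixed two-word lookup with a learned neural network conditioned on the whole preceding context, which is what makes the output fluent enough to pass as human-written.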

On Thursday, two recent master’s degree graduates decided to release what they claim is a recreation of OpenAI’s software anyway.

Aaron Gokaslan, 23, and Vanya Cohen, 24, believe their work isn’t yet harmful to society. Many would disagree, but their desire to show the world what’s possible – without being a huge company with large amounts of funding and resources – is nonetheless admirable.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” Cohen told WIRED. “I’ve gotten scores of messages, and most of them have been like, ‘Way to go.’”

That’s not to say their work was easy or particularly cheap. Gokaslan and Cohen used around $50,000 worth of cloud computing from Google. However, cloud computing is becoming cheaper and more powerful each year.

OpenAI continues to maintain its stance that such work is better off not being in the public domain until more safeguards against fake news can be put in place.

Social networks have come under pressure from governments, particularly in the West, to do more to counter fake news and disinformation. Russia’s infamous “troll farms” are often cited as being used to create disinformation and influence global affairs.

Facebook is seeking to label potential fake news using fact-checking sites like Snopes, in addition to user reports.

Last Tuesday, OpenAI released a report saying it was aware of five other groups that had successfully replicated its software but had all decided not to release it.

Gokaslan and Cohen are in talks with OpenAI about their work and the potential societal implications.

