AI Is Getting Better at Writing Fake News: Report

OpenAI, the non-profit founded by Elon Musk and Sam Altman, is withholding its newly developed AI writer for fear that it could be weaponized by unscrupulous actors to mass-produce convincing fake news.

The organization created a machine learning model, GPT-2, that can produce natural-looking language often indistinguishable from that of a human writer while running largely “unsupervised” – it needs only a short text prompt to provide the subject and context for the task.

The team has made some strides toward this lofty goal, but has also somewhat inadvertently conceded that, once perfected, such a system could mass-produce fake news on an unprecedented scale. A fake-news superweapon for the information-warfare era, if you will.

“We have observed various failure modes,” the team wrote, “such as repetitive text, world modelling failures (e.g. the model sometimes writes about fires happening under water), and unnatural topic switching.”

With topics familiar to the system (those with a large online footprint and plenty of sources, e.g. news about Ariana Grande or Hillary Clinton), it can generate “reasonable samples” roughly 50 percent of the time.

“Overall, we find that it takes a few tries to get a good sample,” says David Luan, vice president of engineering at OpenAI.
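
That workflow – prompt the model, draw several continuations, keep the best one – is easy to picture in code. Purely as an illustrative sketch (the Hugging Face transformers library and the small public “gpt2” checkpoint used here are later, publicly available stand-ins, not OpenAI’s own tooling or the withheld 1.5-billion-parameter model):

# Illustrative sketch only: prompted sampling with "a few tries",
# using the small public GPT-2 checkpoint via Hugging Face transformers,
# a stand-in for the larger model OpenAI is withholding.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"  # the short prompt text
samples = generator(
    prompt,
    max_length=80,           # cap the length of each continuation
    num_return_sequences=5,  # "a few tries": draw five candidates
    do_sample=True,          # stochastic sampling, not greedy decoding
    top_k=40,                # sample from the 40 most likely next tokens
)

# A human then picks the most coherent candidate by hand.
for i, sample in enumerate(samples):
    print(f"--- candidate {i} ---")
    print(sample["generated_text"])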

GPT-2 boasts 1.5 billion parameters and was trained on a far larger dataset than its nearest competitors. To establish “quality” sources of content, the training set was built from some eight million web pages shared as links on the link-sharing site Reddit. For a link to qualify for inclusion, it needed a “karma” score of three or higher, meaning that at least three human users deemed the link worthy of viewing.

“This can be thought of as a heuristic indicator for whether other users found the link interesting, educational or just funny,” the team writes.
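
The cutoff itself amounts to a one-line test. As a hypothetical sketch – OpenAI’s actual scraping pipeline is not public, so the data shape and names here are assumed:

# Hypothetical sketch of the karma-based quality filter described above.
# OpenAI's real pipeline is not public; the data structure is assumed.
MIN_KARMA = 3  # at least three users must have upvoted the link

reddit_links = [
    {"url": "https://example.com/long-read", "karma": 12},
    {"url": "https://example.com/spam", "karma": 1},
]

# Keep only links that cleared the karma threshold.
quality_links = [link["url"] for link in reddit_links if link["karma"] >= MIN_KARMA]
print(quality_links)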

Quotes and attributions are entirely fabricated by GPT-2, but the story, constructed word by word, is coherent and based entirely on pre-existing content online while avoiding direct plagiarism. Critics have already highlighted that the paper published alongside OpenAI’s announcement has not been peer-reviewed.

Debate is already raging online about the moral and ethical implications of such technology and its potential impact on the online information ecosystem, as well as on the political process in the wider, physical world.

“[We] think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems,” OpenAI said.

Google’s parent company, Alphabet, has adopted a similar practice of not openly divulging its latest AI research for fear that it may be weaponized.
