Fake News 2.0

Is it about to get worse?

Generative Adversarial Nets
 

FUTURE PROOF – BLOG BY FUTURES PLATFORM


Could so-called “fake news” be about to become a lot more real? Last year, Kenny Jones and Derrick Bonafilia from Williams College wrote code that uses a type of machine-learning algorithm to create art that is practically indistinguishable from “real” art. The technique can also be applied to videos and other images—which makes us wonder: How will we be able to tell what's real from what's not in the future?

 

According to TechCrunch, the two students, under the guidance of professor Andrea Danyluk, taught themselves machine learning and read about 50 papers on Generative Adversarial Networks (GANs) before completing the project in under a year. Their success has landed them jobs as software engineers at Facebook.

Earlier, in February, YouTuber Mario Klingemann gave us an early sample of what GANs can do with video. In it, a young Françoise Hardy, the French singer who rose to fame in the 1960s, appears to defend Donald Trump by invoking “alternative facts.”

Of course, the video is obviously fake: it was created in a few days on a desktop computer. The voice belongs to Trump’s adviser Kellyanne Conway, and the scene it depicts never actually took place. But it’s a good example of this technology’s potential to distort reality, especially given how fast AI is progressing.

Traditionally, machine learning requires a lot of human effort, much of it spent labelling training data by hand. GANs sidestep much of that need by pitting two neural networks against each other, so the system can teach itself to generate new, realistic data from existing examples.

Think of it as a cop versus a counterfeiter. If images are dollar bills, the counterfeiter (the generator network) tries to create realistic-looking bills, while the cop (the discriminator network) tries to tell the real ones from the fakes. The game goes back and forth, and eventually the counterfeiter learns to produce bills that are indistinguishable from the real thing.
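To make the analogy concrete, here is a minimal, illustrative GAN written in Python with PyTorch. It learns a simple one-dimensional distribution rather than images, and every network size and hyperparameter below is an arbitrary choice for demonstration, not anything used in the projects mentioned above.

    # Minimal GAN sketch: the generator (counterfeiter) learns to mimic
    # a target distribution; the discriminator (cop) learns to tell
    # real samples from generated ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy "real" data: samples from a Gaussian with mean 4 and std 1.25.
    def real_samples(n):
        return torch.randn(n, 1) * 1.25 + 4.0

    generator = nn.Sequential(
        nn.Linear(8, 16), nn.ReLU(),
        nn.Linear(16, 1),
    )
    discriminator = nn.Sequential(
        nn.Linear(1, 16), nn.ReLU(),
        nn.Linear(16, 1), nn.Sigmoid(),  # probability the input is "real"
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    batch = 64
    for step in range(5000):
        # Train the cop: label real samples 1, fake samples 0.
        d_opt.zero_grad()
        real = real_samples(batch)
        fake = generator(torch.randn(batch, 8)).detach()  # freeze G here
        d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(batch, 1)))
        d_loss.backward()
        d_opt.step()

        # Train the counterfeiter: try to make the cop say "real".
        g_opt.zero_grad()
        fake = generator(torch.randn(batch, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
        g_loss.backward()
        g_opt.step()

    # After training, generated samples should cluster near the real mean (~4).
    print(generator(torch.randn(1000, 8)).mean().item())

For images, the generator and discriminator would be deep convolutional networks trained on large datasets, but the adversarial loop is exactly this.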

As a technological feat, this is wonderful progress and should be pursued. But why might it become a problem when it comes to news reporting?

As The Economist puts it, “images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.” And it’s true. Today, “fake news” is by and large limited to false reporting: erroneous testimony, data manipulation or misinterpretation, outright media bias, or honest mistakes. When confronted with these, we ask for more solid evidence, such as clear footage or audio. But what happens when even those can be easily faked?

One suggested countermeasure is cryptography: footage and images could be signed with unique keys that only the source devices or the publishing organization possess, so anyone can verify they come from a trusted source. This would likely let traditionally reliable news outlets keep providing “real news,” but they would still need processes in place to ensure the veracity of their content. The rest of us will simply need to become a little more sceptical of what we lay eyes upon.
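As a rough sketch of the signing idea (not any particular outlet’s actual system), here is how footage could be signed and verified in Python using the open-source cryptography package; the key handling and file contents are placeholder assumptions.

    # A camera or newsroom signs a video's hash with a private key;
    # anyone can verify the signature with the matching public key.
    # Requires: pip install cryptography
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # In practice the private key would live inside the camera or the
    # newsroom's signing service; here we generate one for demonstration.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    footage = b"...raw video bytes..."            # stand-in for a real file
    digest = hashlib.sha256(footage).digest()     # sign a hash, not the whole file
    signature = private_key.sign(digest)

    # A viewer or platform re-hashes the footage and checks the signature.
    try:
        public_key.verify(signature, digest)
        print("Signature valid: footage matches what the source published.")
    except InvalidSignature:
        print("Signature invalid: altered, or not from the claimed source.")

Signing a hash rather than the raw file keeps verification cheap even for large videos. The hard part, as noted above, is the surrounding process that guarantees the footage was genuine before it was signed.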

Video: Generative Adversarial Nets – Fresh Machine Learning #2, Siraj Raval


Make your foresight and strategy workshops engaging with Futures Platform, a collaborative foresight toolbox where you can find a library of 900+ trend analyses by futurists, collaborate with your colleagues, and visualise and document your process, all in one place.

 
