Fake video will soon be good enough to fool entire populations


The AchieVer

In the coming months, seeing won't be believing thanks to increasingly convincing computer-generated deepfake videos


[Illustration: David Doran]

Breaking news: a video “leaks” of a famous terrorist leader meeting in secret with an emissary from a country in the Middle East. News organisations air the video as an “unconfirmed report”. American officials can neither confirm nor deny the authenticity of the video – a typically circumspect answer on intelligence matters. 

The US president condemns the country that would dare hold secret meetings with this reviled terrorist mastermind. Congress discusses imposing sanctions. A diplomatic crisis ensues. Perhaps, seeing an opportunity to harness public outrage, the president orders a cruise missile strike on the last known location of the terrorist leader. 

 

All of this because of a few seconds of film – a masterful fabrication. 

 

In 2019, we will for the first time experience the geopolitical ramifications of a new technology: the ability to use machine learning to falsify video, imagery and audio that convincingly replicates real public figures. 

 

“Deepfakes” is the term becoming shorthand for a broad range of manipulated video and imagery, including face swaps (identity swapping), audio deepfakes (voice swapping), deepfake puppetry (mapping a target’s face on to an actor’s for facial reenactment), and deepfake lip-synching (synthetic mouth movements generated to match an audio file, using footage of the target’s face). The term was coined in December 2017 by a Reddit user of the same name, who used open-source artificial intelligence tools to paste celebrities’ faces on to pornographic video clips. A burgeoning community of online deepfake creators followed suit.

Deepfakes will continue to improve in ease and sophistication as developers create better AI and new techniques that make it easier to create falsified videos. The telltale signs of a faked video – subjects not blinking, flickering of the facial outline, over-centralised facial features – will become less obvious and, eventually, imperceptible. Ultimately, maybe in a matter of a few years, it will be possible to synthetically generate footage of people without relying on any existing footage. (Current deepfakes need stock footage to provide the background for swapped faces.)
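One of the telltale signs above, subjects that rarely blink, illustrates how early algorithmic detection worked in practice. The sketch below is a minimal, hypothetical example, not a production detector: it assumes a per-frame "eye aspect ratio" (EAR) series has already been extracted by a facial-landmark pipeline (in real systems, a library such as dlib or OpenCV), and the thresholds are illustrative guesses, not validated values.

```python
def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks: runs of at least `min_frames` consecutive frames
    in which the eye aspect ratio drops below `threshold`."""
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # count a blink still in progress at clip end
        blinks += 1
    return blinks


def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=6):
    """Flag clips whose blink rate is implausibly low. Humans blink
    roughly 15-20 times per minute at rest; the cutoff of 6 here is an
    assumed, illustrative threshold."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    rate = count_blinks(ear_series) / minutes
    return rate < min_blinks_per_minute


# Toy usage: 60 seconds of footage (30 fps) with only one brief blink.
frames = [0.3] * 1800
frames[900:903] = [0.15, 0.12, 0.15]
print(count_blinks(frames))      # -> 1
print(looks_suspicious(frames))  # -> True
```

A heuristic this simple is exactly why the article's warning holds: once forgers train models on footage that includes blinking, the signal disappears, and detection becomes an arms race rather than a checklist.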

 

Perpetrators of co-ordinated online disinformation operations will gladly incorporate new, AI-powered digital impersonations to advance political goals, such as bolstering support for a military campaign or to sway an electorate. Such videos may also be used simply to undermine public trust in media.

 

Aside from geopolitical meddling or disinformation campaigns, it’s easy to see how this technology could have criminal, commercial applications, such as manipulating stock prices. Imagine a rogue state creating a deepfake that depicts a CEO and CFO furtively discussing missing expectations in the following week’s quarterly earnings call. Before releasing the video to a few journalists, they would short the stock – betting on the stock price plummeting when the market overreacts to this “news”. By the time the video is debunked and the stock market corrects, the perpetrator has already made away with a healthy profit.

 

Perhaps the most chilling realisation about the rise of deepfakes is that they don’t need to be perfect to be effective. They need to be just good enough that the target audience is duped for just long enough. That’s why human-led debunking and the time it requires will not be enough. To protect people from the initial deception, we will need to develop algorithmic detection capabilities that can work in real time, and we need to conduct psychological and sociological research to understand how online platforms can best inform people that what they’re watching is fabricated. 

In 2019, the world will confront a new generation of falsified video deployed to deceive entire populations. And we may not realise the video is fake until we’ve already reacted – maybe overreacted. Indeed, it may take such an overreaction for us all to consider how we relate to fast-moving information online, not only from a technological and platform point of view, but from the perspective of everyday citizens and the mistaken assumption that “seeing is believing”.

 

 

Source

6 minutes ago, Jogs said:

Seeing is Not Believing

With today’s technological advances, it’s difficult to differentiate between the real and the manufactured.



Archived

This topic is now archived and is closed to further replies.
