
Researchers Troll Google Video AI with Images of Audi Cars and Spaghetti



Google's recently launched video classification API is not as smart as people expected, according to new research published by a three-man team from the University of Washington.

 

In a paper published last Friday, researchers presented a method that successfully fools Google's new Cloud Video Intelligence API, a machine learning system the company launched exactly a month ago.

 

This new API, currently in beta testing, uses powerful deep-learning models, built using frameworks like TensorFlow, to analyze videos and classify them based on their content.

 

[Image: Trolling-Google-Original-Labels.jpg]

 

The trick, according to the researchers, was to insert an unrelated image into the video every two seconds.

 

These photos were enough to fool Google's new API, which treated the inserted images as dominant among the video's frames and used them to classify the video under the wrong categories.
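In concept, the attack is straightforward. The rough sketch below is illustrative only, not the researchers' actual code: it assumes the video is held as an in-memory list of frames, and the function name and parameters are invented for the example.

```python
def insert_image_periodically(frames, image, fps, period_s=2.0):
    """Return a new frame sequence with `image` inserted once every
    `period_s` seconds of video, as described in the article.

    This is a conceptual sketch: `frames` is any list of frame objects,
    `image` is the adversarial still image, and `fps` is the video's
    frame rate.
    """
    # Number of original frames spanned by one insertion interval.
    step = max(1, int(round(fps * period_s)))
    out = []
    for i, frame in enumerate(frames):
        if i % step == 0:
            out.append(image)  # slip in the unrelated image
        out.append(frame)
    return out
```

For a 10-second clip at 25 fps (250 frames), this would add the adversarial image five times, roughly matching the every-two-seconds cadence the researchers describe. A real attack would re-encode the modified frame sequence back into a video file, e.g. with a tool such as FFmpeg.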

 

Researchers used the following images during their tests.

 

[Image: Trolling-Google-Sample-Images.jpg]

 

The results speak for themselves: the Google video classification AI tagged the video based almost entirely on the fake images secretly inserted into the video feed.

 

[Image: Trolling-Google-Faked-Labels.jpg]

 

Even though it is still in beta, this new AI-based video classification system is already being tested by companies such as Disney (entertainment), Airbus (aerospace), and Ocado (supermarket chain).

 

Flaws have real world impact if left unfixed


The researchers say they carried out this experiment because the flaw, if left in the Google API, would allow an adversary to bypass the video classification system entirely.

 

For example, the flaw could be used to mask ISIS propaganda videos uploaded to YouTube. Misclassified videos would reach a wider audience when presented to users as related video suggestions.

 

"Note that we could deceive the Google’s Cloud Video Intelligence API, without having any knowledge about the learning algorithms, video annotation algorithms or the cloud computing architecture used by the API," researchers said. "The success of the image insertion attack shows the importance of designing the system to work equally well in adversarial environments."

 

Bleeping Computer readers interested in the researchers' work can read their paper in full here. The paper is entitled "Deceiving Google's Cloud Video Intelligence API Built for Summarizing Videos," and is authored by Hossein Hosseini, Baicen Xiao, and Radha Poovendran.

 

Source
