
FBI agent explores how social engineering attacks get a boost from social media


steven36


 

In the late 1990s, Kevin Mitnick introduced his version of human hacking to the digital world. Mitnick was then, and still is, an expert at social engineering, which is the "... psychological manipulation of people into performing actions or divulging confidential information."

 



 

 

Over the years, Mitnick and his counterparts polished their skills, and very few doors—digital and otherwise—went unopened. However, the return on investment for cybercriminals was less than satisfactory; even though scamming a victim via social engineering is simpler than using a technical hack, the process is labor intensive.

 

Enter social media

Fast forward a few years to when social media came into the picture. "Social engineering, when coupled with the new and widespread use of social media, becomes more effective by exploiting the wealth of information found on social-networking sites," writes John J. Lenkart in his Naval Postgraduate School thesis (PDF). "This information allows for more selective targeting of individuals with access to critical information."

In his research, Lenkart, a unit chief in the Federal Bureau of Investigation, determined that all social-engineering attacks have one thing in common: They rely on acquiring pertinent information about the target organization or individual. Figure A depicts the steps involved in a social-engineering attack.

 

Figure A


 

Lenkart next mentions that employing social-media outlets increases the effectiveness of social engineering by expanding the attack surface of the intended victim organization and its members. The way social media comes into play is shown in Figure B and is circled in red.

 

Figure B

 


 

Lenkart then writes that data mining social-media outlets clearly enhances social-engineering techniques by making it possible to identify the sphere of influence, or inner trust circle, of a targeted individual or organization.
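The kind of data mining Lenkart describes can be surprisingly simple. As a minimal sketch (all names and interaction data below are hypothetical, not drawn from the thesis), an attacker who has scraped public mention data from a target's feed could approximate the inner trust circle just by counting who the target interacts with most:

```python
from collections import Counter

# Hypothetical interaction log scraped from a public social feed:
# (actor, mentioned_user) pairs. All names are invented for illustration.
interactions = [
    ("target", "alice"), ("target", "alice"), ("target", "bob"),
    ("target", "alice"), ("target", "carol"), ("target", "bob"),
    ("target", "dave"),
]

def inner_circle(interactions, target, k=2):
    """Rank the accounts a target interacts with most often and
    return the top k as a rough proxy for the inner trust circle."""
    counts = Counter(who for actor, who in interactions if actor == target)
    return [user for user, _ in counts.most_common(k)]

print(inner_circle(interactions, "target"))  # most-contacted accounts first
```

Real attacks would fold in friend lists, photo tags, and employer data, but even this frequency count shows why public interaction data is valuable for selective targeting.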

Collective-attention threats

James Caverlee, associate professor at Texas A&M University, along with Kyumin Lee, assistant professor at Utah State University, and former Texas A&M student Krishna Y. Kamath, now at Twitter, are also interested in social-media-enhanced social engineering, in particular threats involving spam.

 

In their coauthored paper Combating Threats to Collective Attention in Social Media: An Evaluation (PDF), they write, "Breaking news, viral videos, and popular memes are all examples of the collective attention of huge numbers of users focusing on large-scale social systems. But this self-organization, leading to user attention quickly coalescing and then collectively focusing on the phenomenon, opens these systems to new threats like collective-attention spam."

Caverlee, Lee, and Kamath first point out that large-scale social systems such as web-based social networks, online social-media sites, and web-scale crowdsourcing systems have several positive features:

  • large-scale growth in the size and content in the community;
  • bottom-up discovery of "citizen-experts";
  • discovery of new resources beyond the scope of the system's designers; and
  • new social-based information search and retrieval algorithms.

However, the three add, "The relative openness and reliance on users coupled with the widespread interest and growth of these social systems carries risks and raises growing concerns over the quality of information in these systems."

In this TechRepublic article, fake news is discussed as a way for adversaries to get targeted victims to do something they otherwise would not. Collective-attention threats have the same effect and appear to be more successful because the content is considered trustworthy.

Next, the researchers identify three types of collective-attention threats targeting social-media outlets:

  • content pollution by social spammers;
  • coordinated campaigns for strategic manipulation; and
  • threats to collective attention.

 

A possible solution

This Texas A&M University press release notes that Caverlee and his team are building a threat-awareness application that will serve as an early warning system for users. The countermeasure should mitigate the effects of collective-attention threats.

The early warning system consists of a framework for detecting and filtering social spammers and content polluters in social systems. The framework is built around what they call a social honeypot (Figure C) that will:

  • harvest spam profiles from social-networking communities;
  • develop robust statistical user models for distinguishing between social spammers and legitimate users; and
  • filter out unknown (including zero-day) spammers based on these user models.
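The paper does not publish the team's models, but the "robust statistical user models" step can be illustrated with a deliberately simple stand-in. The sketch below, using invented feature values, labels each harvested profile with two toy features (links per post, follower-to-following ratio), computes a per-class centroid, and classifies a previously unseen account by whichever centroid it falls closer to:

```python
import math

# Toy profile features harvested from a hypothetical social honeypot:
# (links_per_post, followers_to_following_ratio). Values are illustrative.
spam_profiles = [(0.90, 0.05), (0.80, 0.10), (0.95, 0.02)]
legit_profiles = [(0.10, 1.20), (0.20, 0.90), (0.05, 1.50)]

def centroid(profiles):
    """Mean feature vector of a set of profiles."""
    n = len(profiles)
    return tuple(sum(p[i] for p in profiles) / n for i in range(2))

SPAM_C = centroid(spam_profiles)
LEGIT_C = centroid(legit_profiles)

def is_spammer(profile):
    """Label an unseen (zero-day) profile by nearest class centroid --
    a stand-in for the statistical user models in the framework."""
    return math.dist(profile, SPAM_C) < math.dist(profile, LEGIT_C)

# An account posting mostly links with almost no followers of its own
# lands near the spam centroid.
print(is_spammer((0.85, 0.08)))
```

A production system would use far richer features (posting cadence, content similarity, network structure) and a trained classifier, but the harvest-model-filter pipeline is the same shape.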

The framework will also include a set of methods and algorithms that detect coordinated campaigns in large-scale social systems by linking free-text posts that share common "talking points" and extracting the resulting campaigns.
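One simple way to link posts by shared talking points, offered here only as an illustrative sketch (the paper's actual algorithms are more sophisticated), is to treat each post as a set of tokens and greedily cluster posts whose token sets overlap heavily; clusters that grow past a size threshold are flagged as possible coordinated campaigns:

```python
def tokens(text):
    """Crude tokenizer: lowercase, whitespace-split, as a set."""
    return set(text.lower().split())

def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b)

def find_campaigns(posts, sim=0.5, min_size=3):
    """Greedily cluster posts whose token sets overlap (a crude proxy
    for shared 'talking points'); clusters of at least min_size posts
    are flagged as possible coordinated campaigns."""
    clusters = []
    for post in posts:
        t = tokens(post)
        for cluster in clusters:
            if jaccard(t, tokens(cluster[0])) >= sim:
                cluster.append(post)
                break
        else:
            clusters.append([post])
    return [c for c in clusters if len(c) >= min_size]

posts = [
    "buy cheap meds at example pills site now",
    "buy cheap meds at example pills site today",
    "cheap meds buy now at example pills site",
    "great weather for a picnic this weekend",
]
print(find_campaigns(posts))  # the three near-duplicate posts cluster together
```

Exact-overlap matching like this is easy for spammers to evade with paraphrasing, which is why campaign-detection research typically adds shingling, embeddings, or temporal correlation on top of it.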

 

Figure C


 

 

Not to be taken lightly

Of concern to the Texas A&M researchers is the understanding that it only takes a few spammers using collective-attention threat methodology to disrupt the quality of information.

FBI Unit Chief Lenkart, in his conclusion, warns, "The pervasiveness of social-networking media cannot be ignored when developing a security program to limit its impact on an organization's vulnerability to the insider threat."

 

By Michael Kassner

http://www.techrepublic.com/article/fbi-agent-explores-how-social-engineering-attacks-get-a-boost-from-social-media/

 

 
