
Dream on if you think spies will reveal their exploits


steven36


Governments will hide and exploit vulnerabilities as long as they can, even if that risks criminals getting their hands on them too

 


 

Some people are angry at the U.S. government for secretly stockpiling exploits for security flaws so they can use them to spy on people. The latest outcry came late last week after the revelation that spies -- in this case, believed to work for the United Arab Emirates -- tried to hack an activist's iPhone using three separate flaws not revealed to Apple by the government.

 

Sorry, but the U.S. government -- indeed, all governments -- is going to do this, and we'd better get used to that fact.

 

Vendors, allies, businesses, citizens, and researchers should never assume that the National Security Agency, GCHQ, Mossad, GRU, or any of the others will promptly disclose detected vulnerabilities.

 

Like it or not, you should assume instead that they will exploit them either until someone else finds the vulnerability -- a vendor, researcher, or ethical hacker -- or until it's in the government's interest to disclose it, such as when it knows a competing power has found it as well. Even the United States, whose official policy is to quickly release details of discovered exploits to affected vendors, has acknowledged that it doesn't release exploits it thinks it can use.

This reality imposes a high price on everyone: we have to spend the effort to find the vulnerabilities on our own, knowing they can be used against us until we do. Vendors and researchers need to redouble their efforts as a result.

A government targeted a human rights activist through an iOS flaw

The most recent case of a government agency taking advantage of undisclosed vulnerabilities shows why it's critical that vendors, researchers, and ethical hackers seek out vulnerabilities. Once Apple learned of the flaws, it acted quickly to close them, releasing iOS 9.3.5 late last week. Spy agencies knew Apple would act quickly, which is why they didn't tell Apple in the first place. Instead, they used the flaws for as long as they could.

The vulnerabilities came to light when suspicious SMS messages sent to Ahmed Mansoor, a human rights activist in the United Arab Emirates, were found to contain a link that, if clicked, would have loaded a page in Safari and triggered the Trident exploit. Trident chains a memory corruption flaw in WebKit, the browser engine that underpins Safari, with two kernel flaws in iOS to remotely jailbreak an iPhone.

 

Luckily for Mansoor, he didn't click the link. Had he done so, the government would have been able to remotely jailbreak his iPhone to record his WhatsApp and Viber communications, track his movements, and use the phone's camera and microphone to monitor what he was doing. But surely others have been ensnared by similar attacks.

Trident was identified by researchers from Citizen Lab, based at the University of Toronto's Munk School of Global Affairs, and the mobile security company Lookout Security. The researchers linked Trident to NSO Group, the company behind Pegasus, "lawful intercept" spyware sold exclusively to governments. NSO specializes in giving governments new ways to spy on citizens, businesses, and other governments.

 

The good news in this most recent case is that iOS users quickly adopt updates, so the majority of users will be on the latest operating system within a few weeks -- and immune to Trident. If the exploit had targeted Android, Windows, or most other platforms, a patch could have taken weeks, months, or even longer to become available.

 

But the pattern of response for the Trident exploit shows how the security world works -- and will continue to work: Spies exploit a vulnerability. The flaw is eventually found and reported to the vendor. The vendor patches the vulnerable software, and users apply the update. Spies then move on to the next vulnerability in their arsenal. Wash, rinse, repeat.

Government spying is distasteful, but criminal use is even worse

Well, not always. Sometimes those vulnerabilities aren't found by ethical organizations, so vendors can't quickly fix them and users can't quickly patch. Sometimes they aren't even found by a government for a "national security" purpose. Instead, they get traded on the black market for other nefarious actors -- criminal and otherwise -- to exploit more widely and with greater potential for harm.

 

For example, a group calling itself the Shadow Brokers dumped several exploits and attack tools, ostensibly stolen from the Equation Group (widely believed to be tied to the NSA), onto GitHub earlier this month. The attack tools were originally developed for older Cisco and Juniper firewalls, but hardware procurement cycles being what they are, many organizations still have those devices in place -- and became vulnerable to attack. The tools even work on some newer networking products, expanding the pool of potential victims.

 

In the case of the Equation Group exploits, a New York University professor set up a honeypot and saw attacks using them within 24 hours of their release on GitHub. Criminals jump on exploit dumps and vulnerability disclosures to take advantage of them before vendors and users can respond.

 

Even though GitHub took the Shadow Brokers account down, the exploits were already out. Cisco and Juniper are working on patches, but in the meantime businesses around the globe remain vulnerable. It almost makes you wish a government had found the weaknesses in those firewalls, so their use would have been limited to government spying rather than a range of criminal activities.

 

We saw a similar occurrence a year ago when someone publicly dumped files from Hacking Team that revealed yet more vulnerabilities in Adobe Flash. Very quickly, major crimeware kits were targeting those Flash vulnerabilities.

Worst-case scenario: Government exploits reach the black market

An even worse scenario is that secret exploits held by a government make it onto the black market. There's no guarantee a government agency can keep an exploit secret -- especially because governments often get exploits from private companies like NSO or Zerodium, which means the government doesn't actually control that exploit knowledge.

 

Remember the big fight Apple had earlier this year with the FBI over getting access to a mass murderer's iPhone? Apple refused to help, for fear of undermining all its users' privacy. So the FBI ultimately turned to a private company or hacker group -- no one knows which -- and paid about $1 million to use an exploit Apple was apparently unaware of. We don't know whether the exploit worked or whether Apple ever learned what it was, but the FBI dropped the case, suggesting it got what it wanted. Maybe others have the exploit as well. We may never know.

 

In an ideal world -- if you accept the legitimacy of government spying at all -- governments would have only a set period during which they could keep an exploit secret, after which they would have to disclose it to the affected vendor. That would reduce the chances of exploits known to governments falling into criminal hands while still unknown to users and vendors.

 

But we don't live in an ideal world. Any government will want to use exploits for as long as it can, and none will trust the others to play nice even if it does so itself.

 

So vendors, researchers, and ethical hackers need to keep looking hard for vulnerabilities and disclosing them to affected parties as quickly as possible. They're doing so not just to protect us from criminals, but also from governments good and bad. Governments -- especially their spy agencies -- aren't as clueless as many people like to think. They're at least as wily as cybercriminals, and we need to factor that reality into our own self-defense work.

 

Source: http://www.infoworld.com/article/3113074/hacking/dream-on-if-you-think-spies-will-reveal-their-exploits.html

 


People don't want to know what can be exploited, and I can prove that. We found an exploit in Windows 10 when it was released that dates back to Windows 3.1, and not one person has asked me about it -- but I wouldn't tell anyway. And last week I posted a vulnerability that has existed in Windows since Windows 95 and allows all versions to be hacked, and so many people complained that the admins took it down. I will never post another. If you want to be ostriches and stick your heads in the sand, be my guest, but don't complain about anyone who collects exploits for later use and doesn't reveal them.



