Showing results for tags 'security'.



Found 431 results

  1. Alex42

    ExpressVPN Giveaway [3 Prizes]

    ExpressVPN Giveaway [3 Prizes] Hackology is pleased to bring you a giveaway in which 3 winners will each win a 12-month subscription to ExpressVPN. We are giving away $465 worth of subscriptions in this giveaway. Join the giveaway and share it with your friends; more participants mean more giveaways in the future. Do read the giveaway post on the Hackology blog, where I have shared how ExpressVPN and HP have teamed up to launch their first laptop to ship with a free trial of ExpressVPN. What will winners get? At the end of the competition we will select 3 winners. Each winner will receive an ExpressVPN subscription valid for 12 months. They will not be asked for any credit card details or anything else - just an email address so their account details can be emailed. https://hackology.co/giveaway/3-ExpressVPN-2019.html
  2. Mine is extremely light, but undoubtedly powerful. Here is my setup: DefenseWall, Shadow Defender, KeyScrambler, Sandboxie (custom rules). (A2, SAS, MBAM used rarely, on demand.)
  3. (Reuters) - A federal judge said up to 29 million Facebook Inc users whose personal information was stolen in a September 2018 data breach cannot sue as a group for damages, but can seek better security at the social media company after a series of privacy lapses. In a decision late Tuesday night, U.S. District Judge William Alsup in San Francisco said neither credit monitoring costs nor the reduced value of stolen personal information was a “cognizable injury” that supported a class action for damages. Alsup also said damages for time users spent to mitigate harm required individualized determinations rather than a single classwide assessment. Users were allowed to sue as a group to require Facebook to employ automated security monitoring, improve employee training, and educate people better about hacking threats. Alsup rejected Facebook’s claim that these were unnecessary because it had fixed the bug that caused the breach. “Facebook’s repetitive losses of users’ privacy supplies a long-term need for supervision,” at least at this stage of the litigation, Alsup wrote. Allowing a damages class action could have exposed Facebook to a higher total payout. Lawyers for the Facebook users did not immediately respond to requests for comment. Facebook did not immediately respond to similar requests. On Sept. 28, 2018, Facebook said that hackers had exploited software flaws to access 50 million users’ accounts, at the time considered the largest breach in the California-based company’s 14-year history. It scaled back the size two weeks later, saying 30 million users had their access tokens stolen, while 29 million had personal information such as gender, religion, email addresses, phone numbers and search histories taken. Facebook has faced many lawsuits over privacy, including for allowing British political consulting firm Cambridge Analytica to access data for an estimated 87 million users. In September, U.S.
District Judge Vince Chhabria in San Francisco said Facebook must face most of a damages lawsuit over access by third parties such as Cambridge, calling Facebook’s views about users’ privacy expectations “so wrong.” Facebook Chief Executive Mark Zuckerberg outlined his “privacy-focused vision” for social media in a March 6 blog post. “Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks,” he wrote. The case is Adkins v Facebook Inc, U.S. District Court, Northern District of California, No. 18-05982. Source
  4. The Royal Malaysia Police (PDRM) “are allowed to inspect mobile phones to ensure there are no obscene, offensive, or communication threatening the security of the people and nation,” the Dewan Rakyat was told yesterday. According to a media report from MalaysiaKini, PDRM also have the right to conduct surveillance, including “phone bugging” or “tapping”, to ensure investigations can be carried out in cases involving security. The article quoted Deputy Home Minister Mohd Azis Jamman, who was responding to questions from YB Chan Ming Kai (PH-Alor Star). The Deputy Home Minister also said that “the public should be aware of their rights during a random check, including requesting the identity of the police officer conducting the search for record purposes, in case there is a breach of the standard operating procedures (SOP)”. However, details of the “Police SOP” were not revealed. In 2014, the then Minister in the Prime Minister’s Department Nancy Shukri said that law enforcers (such as PDRM) in the country are empowered under five different laws to tap (wiretap) any communications by suspects of criminal investigations. This would include intercepting, confiscating and opening any package sent via post; intercepting any messages sent or received through any means of telecommunication (voice/SMS/Internet); and intercepting, listening to and recording any (phone) conversations over telecommunications networks. The provisions are found under Section 116C of the Criminal Procedure Code, and also under Section 27A of the Dangerous Drugs Act 1952, Section 11 of the Kidnapping Act 1961, Section 43 of the Malaysian Anti Corruption Commission Act 2009, and Section 6 of the Security Offences (Special Measures) Act 2012 (SOSMA).
According to Malaysia-Today in a 2013 article, Section 6 (SOSMA) gives the Public Prosecutor the power to authorise any police officer to intercept any postal article, as well as any message or conversation being transmitted by any means at all, if he or she deems it to contain information relating to a “security offence”. It also gives the Public Prosecutor the power to similarly require a communications service provider like telecommunications companies (including Maxis, Celcom Axiata, Telekom Malaysia, Digi, U Mobile, Yes4G and others) to intercept and retain a specified communication, if he or she considers that it is likely to contain any information related to “the communication of a security offence.” Additionally, it vests the Public Prosecutor with the power to authorise a police officer (PDRM) to enter any premises to install any device “for the interception and retention of a specified communication or communications.” The Malaysia-Today article said the scope of what the government can do in terms of intercepting people’s messages is troubling – at least to those who understand its implications. In particular, there are those who are anxious that it can be used to tap detractors and political opponents. “Due to the vagueness and broadness of the ground for executing interception, this provision is surely open to abuse especially against political dissent,” said Bukit Mertajam MP Steven Sim at the time. Stressing that the act does not provide any guidelines on the “interception”, he added: “The government can legally ‘bug’ any private communication using any method, including through trespassing to implement the bugging device and there is not stipulated time frame such invasion of privacy is allowed”.
“If that is not enough, service providers such as telcos and internet service providers are compelled by Section 6(2)(a),” At the moment, the Malaysian Government has not revealed the number of people/communications it has tapped/intercepted in the past decade. [Update, 20 November 2019]: Deputy Home Minister Datuk Mohd Azis Jamman released a statement saying that the Royal Malaysia Police (PDRM) can confiscate the cell phones of suspects and those involved in any ongoing investigation, but may not conduct random checks on the public. Source: Malaysia Police (PDRM) can Intercept your Voice Calls/SMS, check your Handphone (via Malaysian Wireless) p/s: The Deputy Home Minister later clarified that phone checking can only be done if individuals are suspected of committing wrongdoings under the following acts (a warrant will be required as part of the SOP): Penal Code (Act 574); Section 233 under the Communications and Multimedia Act (Act 588); Sedition Act 1948 (Act 15); Security Offences (Special Measures) 2012 Act (747); Anti-Trafficking in Persons and Anti-Smuggling of Migrants 2007 (Act 670); Prevention of Terrorism Act 2015 (Act 769). The public can report to the Standard Compliance Department (Jabatan Integriti dan Pematuhan Standard or JIPS) if enforcement officers randomly check phones without a proper warrant and/or SOP. Source: Home Ministry: PDRM can only check phones belonging to suspects and individuals involved in ongoing investigations (via The Star Online) p/s 2: The original title of the news has been amended with "(update: Home Minister says cannot)" to reflect that, although the earlier report stated that police can check (and intercept) the public's devices randomly, a new clarification from the Home Ministry (stated in the p/s above) makes clear that police cannot check (or intercept) devices without a proper warrant covering one (or more) of the 6 acts listed.
  5. A Facebook VP says the company is looking into it Facebook might have another security problem on its hands, as some people have reported on Twitter that Facebook’s iOS app appears to be activating the camera in the background of the app without their knowledge. Facebook says it’s looking into what’s happening. There are a couple of ways that this has been found to happen. One person found that the camera UI for Facebook Stories briefly appeared behind a video when they flipped their phone from portrait to landscape. Then, when they flipped it back, the app opened directly to the Stories camera. You can see it in action here (via CNET): It’s also been reported that when you view a photo on the app and just barely drag it down, it’s possible to see an active camera viewfinder on the left side of the screen, as shown in a tweet by web designer Joshua Maddux: Maddux says he could reproduce the issue across five different iPhones, which were all apparently running iOS 13.2.2, but he reportedly couldn’t reproduce it on iPhones running iOS 12. Others reported they were able to replicate the issue in replies to Maddux’s tweet. CNET and The Next Web said they were able to see the partial camera viewfinder as well, and The Next Web noted that it was only possible if you’ve explicitly given the Facebook app access to the camera. In my own attempts, I couldn’t reproduce the issue on my iPhone 11 Pro running iOS 13.2.2. Guy Rosen, Facebook’s VP of integrity, replied to Maddux this morning to say that the issue he identified “sounds like a bug” and that the company is looking into it. With the second method, the way the camera viewfinder is just peeking out from the left side of the screen suggests that the issue could be a buggy activation of the feature in the app that lets you swipe from your home feed to get to the camera. (Though I can’t get this to work, either.)
I don’t know what might be going on with the first method — and with either, it doesn’t appear that the camera is taking any photos or actively recording anything, based on the footage I’ve seen. But regardless of what’s going on, unexpectedly seeing a camera viewfinder in an app is never a good thing. People already worry about the myth that Facebook is listening in to our conversations. A hidden camera viewfinder in its app, even if it’s purely accidental, might stoke fears that the company is secretly recording everything we do. Hopefully Facebook fixes the issues soon. And you might want to revoke the Facebook app’s camera access in the meantime, just to be safe. Source: Facebook’s iOS app might be opening the camera in the background without your knowledge (via The Verge) p/s: The news was posted under Security & Privacy News instead of Mobile News, as it concerns a privacy issue in Facebook's iOS app with regard to the camera bug.
  6. Apple may have known for months Apple stakes a lot of its reputation on how it protects the privacy of its users, as it wants to be the only tech company you trust. But if you send encrypted emails from Apple Mail, there’s currently a way to read some of the text of those emails as if they were unencrypted — and allegedly, Apple’s known about this vulnerability for months without offering a fix. Before we go any further, you should know this likely only affects a small number of people. You need to be using macOS, Apple Mail, be sending encrypted emails from Apple Mail, not be using FileVault to encrypt your entire system already, and know exactly where in Apple’s system files to be looking for this information. If you were a hacker, you’d need access to those system files, too. Apple tells The Verge it’s aware of the issue and says it will address it in a future software update. The company also says that only portions of emails are stored. But the fact that Apple is still somehow leaving parts of encrypted emails out in the open, when they’re explicitly supposed to be encrypted, obviously isn’t good. The vulnerability was shared by Bob Gendler, an Apple-focused IT specialist, in a Medium blog published on Wednesday. Gendler says that while trying to figure out how macOS and Siri suggest information to users, he found macOS database files that store information from Mail and other apps which are then used by Siri to better suggest information to users. That isn’t too shocking in and of itself — it makes sense that Apple needs to reference and learn from some of your information to provide you better Siri suggestions. But Gendler discovered that one of those files, snippets.db, was storing the unencrypted text of emails that were supposed to be encrypted. 
Here’s an image he shared that’s helpful to explain what’s going on: The circle on the left is around an encrypted email, which Gendler’s computer is not able to read, because Gendler says he removed the private key which would typically allow him to do so. But in the circle on the right, you can make out the text of that encrypted email in snippets.db. Gendler says he tested the four most recent macOS releases — Catalina, Mojave, High Sierra, and Sierra — and could read encrypted email text from snippets.db on all of them. I was able to confirm the existence of snippets.db, and found that it stored portions of some of my emails from Apple Mail. I couldn’t find a way to get snippets.db to store encrypted emails I sent to myself, though. Gendler first reported the issue to Apple on July 29th, and he says the company didn’t even offer him a temporary solve until November 5th — 99 days later — despite repeated conversations with Apple about the issue. Even though Apple has updated each of the four versions of macOS where Gendler spotted the vulnerability in the months since he reported it, none of those updates contained a true fix. If you want to stop emails from being collected in snippets.db right now, Apple tells us you can do so by going to System Preferences > Siri > Siri Suggestions & Privacy > Mail and toggling off “Learn from this App.” Apple also provided this solution to Gendler — but he says this temporary solution will only stop new emails from being added to snippets.db. If you want to make sure older emails that may be stored in snippets.db can no longer be scanned, you may need to delete that file, too. If you want to avoid these unencrypted snippets potentially being read by other apps, you can avoid giving apps full disk access in macOS Catalina, according to Apple — and you probably have very few apps with full disk access. Apple also says that turning on FileVault will encrypt everything on your Mac, if you want to be extra safe. 
Again, this vulnerability probably won’t affect that many people. But if you do rely on Apple Mail and believed your Apple Mail emails were 100 percent encrypted, it seems that they’re not. As Gendler says, “It brings up the question of what else is tracked and potentially improperly stored without you realizing it.” Source: Apple is fixing encrypted email on macOS because it’s not quite as encrypted as we thought (via The Verge)
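As a rough illustration of the kind of inspection Gendler describes, here is a minimal, schema-agnostic Python sketch that opens a SQLite file read-only and dumps a few rows from every table. The `snippets.db` location under `~/Library/Suggestions` follows Gendler's write-up, but treat the path, the function name, and the whole approach as assumptions for illustration — the database's schema is undocumented and varies across macOS releases, so no table or column names are hardcoded.

```python
import sqlite3
from pathlib import Path


def dump_text_columns(db_path):
    """Return {table_name: first few rows} for every table in a SQLite file.

    Deliberately schema-agnostic: the layout of snippets.db is
    undocumented, so we enumerate tables instead of assuming names.
    """
    results = {}
    # Open read-only so we never modify the file we are inspecting.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        for table in tables:
            results[table] = conn.execute(
                f'SELECT * FROM "{table}" LIMIT 5').fetchall()
    finally:
        conn.close()
    return results


if __name__ == "__main__":
    # Assumed location from Gendler's write-up; ideally inspect a copy.
    db = Path.home() / "Library" / "Suggestions" / "snippets.db"
    if db.exists():
        for table, rows in dump_text_columns(db).items():
            print(table, rows[:2])
```

If email text shows up in plain form in any of these rows while the corresponding message is S/MIME-encrypted in Mail, you are seeing the behavior the article describes; toggling off "Learn from this App" only stops new rows from being added, which is why Gendler also suggests deleting the file.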
  7. It’s calling the partnership the ‘App Defense Alliance’ Google announced today that it’s teaming up with three security companies to help identify malicious apps before they’re published on the Play Store and can potentially do harm to Android users. The company is calling this partnership the App Defense Alliance. Android is on over 2.5 billion devices, according to Google, and the company says that makes the platform “an attractive target” for abuse. That abuse can take the form of hidden malware or secret code designed to spy and siphon away sensitive user data. This seems to be particularly true of the Play Store — over the past year or so, Google has had to take action against multiple developers for releasing apps on the Play Store using scammy ad practices. By forming the App Defense Alliance, Google is enlisting security companies ESET, Lookout, and Zimperium to help scan for bad apps before they hit the Play Store in the first place. Google already builds Google Play Protect, its malware protection service, right into Android. The company says it also uses Play Protect to scan “billions” of apps every day on the Play Store. So it seems like Google should already be catching these bad apps — but apparently, the problem is big enough that the company felt the need to bring in some reinforcements. In theory, with more companies helping scan Play Store apps, there’s a better chance you won’t accidentally download one of the bad offenders on your Android device. Source: Google teams up with security companies to catch bad apps before they hit the Play Store (via The Verge)
  8. In a move to fight spam and improve the health of the web, Mozilla will hide notification popups -- a feature nobody asked for. In a move to fight spam and improve the health of the web, Firefox will hide those annoying notification popups by default starting next year, with the release of Firefox 72, in January 2020, ZDNet has learned from a Mozilla engineer. The move comes after Mozilla ran an experiment back in April this year to see how users interacted with notifications, and also looked at different ways of blocking notifications from being too intrusive. Usage stats showed that the vast majority (97%) of Firefox users dismissed notifications, or chose to block a website from showing notifications at all. As a result, Mozilla engineers have decided to hide the notification popup that drops down from Firefox's URL bar, starting with Firefox 72. If a website shows a notification, the popup will be hidden by default, and an icon added to the URL bar instead. Firefox will then animate the icon using a wiggle effect to let the user know there's a notification subscription popup available, but the popup won't be displayed until the user clicks the icon. We've recorded a GIF of this new routine, here. Firefox Nightly versions already come with this notification popup blocker active, but the stable Firefox branch is scheduled to get it next January.

The Notification API, and how it all went south

Notification popups were added to modern browsers in Chrome 22 (September 2012) and Firefox 22 (June 2013), with the addition of the Notifications API. Their initial purpose was to allow websites to display notifications, and alert users of new content, after users closed a website's tab. For example, you subscribed to Slack notifications, have a conversation, and close the Slack browser tab. The Notifications API allowed websites to show a popup when you received a new message, or there was new content available in your (now-closed) Slack tab.
News sites, such as ZDNet, also use notifications to alert users when new articles are out. Social networks and instant messaging clients use it to show alerts for trending topics or new messages. The feature has its use cases, and can be extremely useful, but only when used by legitimate organizations.

Fraudsters and spammers love notifications

But over the past few years, unscrupulous groups have realized that the Notifications API provides an ideal method of pushing spam to users, even after users left the malicious site. Cybercrime groups have been luring users on random sites, and showing notification popups. If users accidentally clicked on the wrong button and subscribed to one of these shady sites, then they'd be pestered with all sorts of nasty popups. Malicious threat actors have been seen using notifications (also known as subscription spam) to push links to shady products, links to malware downloads, or run-of-the-mill pill or Viagra spam [1, 2]. "Notification spam is quite common, especially via certain types of publishers and malvertising in general," Jérôme Segura, malware analyst at Malwarebytes, told ZDNet in an interview today. "Since most browsers can block ad popups or popunders, push notifications have been greatly abused," Segura added. "In fact, I even question the merits of such a 'feature' in the first place or at least some serious oversight in how it could be implemented. "Years ago, people would come to you about annoying ad notifications popping up on their machine, and that was usually due to adware programs [installed locally]. But these days, I would say this has been largely replaced by notification spam, which is very easy to fall for with some basic social engineering," he added. "In comparison to cleaning up an infected machine, it's actually much easier to remove already allowed notifications, but most people just don't know how," Segura said.
And browser makers, too, have realized that the feature can be quite annoying, and downright dangerous. In recent years, most browsers have added settings to block websites from showing notifications. However, Mozilla is the first browser vendor to block notification popups by default. "I think Mozilla's decision is good for the health of the web," Segura told ZDNet. You can unsubscribe from receiving notifications from sites via any browser's settings section. Most browsers support a search feature in the settings section. Users can use it to search for the "notifications" options and block or unsubscribe from the shady sites. Source
  9. Has collective amnesia about stance on end-to-end encryption The British government wants your bright ideas for improving the nation's cybersecurity because it wants to "understand the apparent lack of strong commercial rationale for investment" in locking down your shizz. As part of its fond hope of making the UK a bit more secure than the rest of the world, the Department for Digital, Culture, Media and Sport (DCMS) wants you to tell it what it could be doing better. The Cyber Security Incentives and Regulation Review is intended to tell UK.gov which of its security-enhancing initiatives do and don't work. Many of those are routed to the great unwashed via the National Cyber Security Centre (NCSC). In its detailed consultation document, accessible here, DCMS claimed that "only around 60 per cent of organisations took actions to identify cyber security risks", citing a survey it carried out earlier this year. Back in April, NCSC tech director Ian Levy said: "I think we're still seeing very common things happen that were happening 15 years ago. We've got to find some way of changing it. It's obvious the way we've been trying to get people to change this hasn't been working." Perhaps perceptively, the department opined that part of the problem with getting smaller businesses to take cybersecurity seriously was the problem that security is "viewed as an IT-specific issue and an objective in itself, rather than an enabler of business continuity and operational resilience". Digital minister Matt Warman, the one-time technology editor of the Daily Telegraph, pleaded: "I hope this review will encourage the industry to think about what government could do to help and what incentives might encourage firms and businesses to manage their cyber risk." DCMS also published a postal feedback address, presumably for the use of people who write in green ink and think all of the internet is hopelessly insecure. 
Separately, defence ministers published their latest response to Parliament's Joint Committee on the National Security Strategy, in which the word "cyber" was mentioned just six times across 17 pages. The Ministry of Defence is spending £40m on its "cyber security operations capability", bunging £12m on the Defence Cyber School for training uniformed infosec bods, and a total of £265m on securing existing military hardware against cyber threats. Source
  10. All Android 8 (Oreo) or later devices are impacted. Google released a patch last month, in October 2019. Last month, Google patched an Android bug that can let hackers spread malware to a nearby phone via a little-known Android OS feature called NFC beaming. NFC beaming works via an internal Android OS service known as Android Beam. This service allows an Android device to send data such as images, files, videos, or even apps, to another nearby device using NFC (Near-Field Communication) radio waves, as an alternative to WiFi or Bluetooth. Typically, apps (APK files) sent via NFC beaming are stored on disk and a notification is shown on screen. The notification asks the device owner whether they want to allow the NFC service to install an app from an unknown source. But, in January this year, a security researcher named Y. Shafranovich discovered that apps sent via NFC beaming on Android 8 (Oreo) or later versions would not show this prompt. Instead, the notification would allow the user to install the app with one tap, without any security warning. While the lack of a prompt sounds unimportant, this is a major issue in Android's security model. Android devices aren't allowed to install apps from "unknown sources" -- as anything installed from outside the official Play Store is considered untrusted and unverified. If users want to install an app from outside the Play Store, they have to visit the "Install apps from unknown sources" section of their Android OS and enable the feature. Until Android 8, this "Install from unknown sources" option was a system-wide setting, the same for all apps. But, starting with Android 8, Google redesigned this mechanism into an app-based setting. In modern Android versions, users can visit the "Install unknown apps" section in Android's security settings, and allow specific apps to install other apps.
For example, in the image below, the Chrome and Dropbox Android apps are allowed to install apps, similar to the Play Store app, without being blocked. The CVE-2019-2114 bug resided in the fact that the Android Beam app was also whitelisted, receiving the same level of trust as the official Play Store app. Google said this wasn't meant to happen, as the Android Beam service was never meant as a way to install applications, but merely as a way to transfer data from device to device. The October 2019 Android patches removed the Android Beam service from the OS whitelist of trusted sources. However, many millions of users remain at risk. If users have the NFC service and the Android Beam service enabled, a nearby attacker could plant malware (malicious apps) on their phones. Since there's no prompt for an install from an unknown source, tapping the notification starts the malicious app's installation. There's a danger that many users might misinterpret the message as coming from the Play Store, and install the app, thinking it's an update.

HOW TO PROTECT YOURSELF

There is good news and bad news. The bad news is that the NFC feature is enabled by default on almost all newly sold devices. Many Android smartphone owners may not even be aware that NFC is enabled right now. The good news is that NFC connections are initiated only when two devices are put near each other at a distance of 4 cm (1.5 inches) or less. This means an attacker needs to get his phone really close to a victim's, something that may not always be possible. To stay safe, any user can disable both the NFC feature and the Android Beam service. If they use their Android phones as access cards, or as a contactless payment solution, they can leave NFC enabled, but disable the Android Beam service -- see image below. This blocks NFC file beaming, but still allows other NFC operations. So, there's no need to panic.
Just disable Android Beam and NFC if you don't need them, or update your phone to receive the October 2019 security updates and continue using both NFC and Beam as usual. A technical report on CVE-2019-2114 is available here. Source: Android bug lets hackers plant malware via NFC beaming (via ZDNet)
  11. Allowing facial recognition technology to spread without understanding its impact could have serious consequences. In the last few years, facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful: thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it's becoming a standard part of the journey through the airport. And facial recognition is still getting smarter. Increasingly it's not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won't be too hard for a camera to not only recognise who you are, but also to make a pretty good guess at how you are feeling. But one of the biggest potential applications of facial recognition on the near horizon is, of course, for law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries police have been testing facial recognition in a number of situations, with varying results. There's a bigger issue here, as the UK's Information Commissioner Elizabeth Denham notes: "How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?" She warns that when it comes to live facial recognition "never before have we seen technologies with the potential for such widespread invasiveness," and has called for police, government and tech companies to work together to eliminate bias in the algorithms used; particularly that associated with ethnicity. She is not the only one to be raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so. There is always a trade-off between privacy and security. Deciding where to draw the line between the two is key.
But we also have to make the decision clearly and explicitly. At the moment there is a great risk that as the use of facial recognition technology by government and business spreads, the decision will be taken away from us. In the UK we've already built up plenty of the infrastructure that you'd need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK; a number that is rising rapidly as we add our own smart doorbells or other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams. I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back. For sure, these technologies can have many benefits, from making it quicker to unlock your phone to recognising criminals in the street. But allowing these technologies to become pervasive without rigorous debate about the need for them, the effectiveness of them and their broader impact on society is deeply unwise and could leave us facing much bigger problems ahead. Source: We must stop smiling our way towards a surveillance state (via ZDNet)
  12. Google's decade-old feature used in Chrome to reduce browsing history is now available for location services in Android. After announcing it twice in the past year, Google is keeping its promise and rolling out Incognito mode for Maps to its Android users, the company confirmed in a blog post. Modeled after the tool that has let Chrome users visit web pages since 2008 without any browsing history being recorded within the platform, the new feature will prevent users' activity in Maps from being saved to their Google account. This means that, when it is switched on, you can search and view locations without having any information added to your Google account history – making, for instance, Google's personalized recommendations a lot more neutral, since those are based on your personal data. Maps will also stop sending you notifications, updating your location history and sharing your location. Google first announced that Incognito would be released for Maps a few months ago, and more recently reiterated that the feature was coming soon. Eric Miglia, director of privacy and data protection office at Google, said: "managing your data should be just as easy as making a restaurant reservation or using Maps to find the fastest way back home". While Incognito is indeed easy to switch on – users simply have to tap their profile picture in Maps and select the option – there is a caveat. "Turning on Incognito mode in Maps does not affect how your activity is used or saved by internet providers, other apps, voice search, and other Google services", reads the announcement. In other words, turning the feature on minimizes the information stored in users' personal Google accounts, but it doesn't do much to stop third parties from accessing that data. It is therefore useful for those who wish to get rid of personalized recommendations prompted by Google, but it should not be seen as an entirely reliable privacy tool. 
When it is switched on, Incognito also stops some key features from running, including Google Assistant's microphone in navigation, so it might not be a tool that commuters will be using at all times. As well as Incognito for Maps, Google teased two other services to enhance privacy protection in its services last month. YouTube will have a history auto-delete option, and Google Assistant will be getting voice commands that let users manage the Assistant's own privacy settings. The company's attempts to strengthen privacy controls for its users come at the same time as loopholes emerged in Chrome's Incognito mode. Websites were found to be able to detect Incognito visitors based on whether or not Chrome's FileSystem API was available, which let them enforce free article limits in the case of news websites, for instance. Although Google modified the FileSystem API in Chrome 76 to prevent this, website developers have again been crafting methods to bypass the new system. Incognito for Maps is expected to hit iOS soon, but no precise date was confirmed by Google. Source: Google Maps on Android user? Now you can switch to incognito mode (via ZDNet) p/s: While this news covers a new feature of the Google Maps app on Android and iOS, and I initially intended to post it under Mobile Software News, it is better suited to the Security & Privacy News section, as it places greater emphasis on the security and privacy features of the Google Maps app, including Incognito mode.
  13. In the past, if you wanted to send money via Google Pay you'd be prompted to use an old school PIN or pattern to authenticate the transfer. Starting with version 2.100, Google is finally adding support for fingerprint and face authentication for money transfers thanks to Android 10's biometric API. The feature is currently only available on Android 10 devices and is found in the Sending money settings section of the app. Previously you were limited to using your Google Account PIN, but now you can switch to biometric authentication instead. Google's own Pixel 4 lineup relies on facial unlock for authentication, so this new addition will certainly be useful for owners of those devices. Source: Google Pay finally adds biometric authentication for money transfers (via GSMArena) p/s: While this news is about a mobile app, the article is suited to be placed under Security & Privacy News, as the post does talk about a new security feature (biometric authentication on top of the Google Account PIN) in Google Pay when it is installed on Android 10.
  14. The team behind the Tails operating system have announced the availability of Tails 4.0, the first major release to be based on Debian 10. With the re-basing of the operating system comes new software – two important packages that were updated are the Linux kernel, which adds support for new hardware, and the Tor Browser, which was bumped to version 9.0 and now stops websites identifying you based on the size of the browser window, using a technique called letterboxing. Other software packages that were updated include KeePassXC, which has replaced KeePassX; OnionShare, which has been upgraded to 1.3.2, bringing usability improvements; the metadata cleaner tool MAT, which has been upgraded to 0.8.0, loses its graphical interface and now appears in the right-click menu instead; Electrum, which has been upgraded to 3.3.8 and works in Tails again; Enigmail, which was updated to mitigate OpenPGP certificate flooding; and Audacity, GIMP, Inkscape, LibreOffice, git, and Tor, which all received upgrades too. Another major change is to the Tails Greeter. With this update, languages which have too few translations to be useful have been removed, the list of keyboard layouts has been simplified, the options chosen in the Formats settings are now actually applied (previously they weren't), and finally, it's now possible to open help pages in languages other than English, when available. The final thing worth mentioning about this update pertains to performance and usability improvements. Tails 4.0 starts 20% faster, requires 250MB less RAM, has a smaller download footprint, adds support for Thunderbolt devices, makes the on-screen keyboard easier to use, and now supports USB tethering from iPhone. Unfortunately, users on previous versions will have to perform a manual update to Tails 4.0, but it shouldn't take too long to do; you can find out more information in the Tails install guide. Source: Tails OS is now based on Debian 10 and ships major Tor Browser update (via Neowin)
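The letterboxing defence mentioned above is easy to picture with a short sketch: instead of letting pages see your exact window size, the browser renders content into an area rounded down to coarse steps, so many users report the same size. The step values below (200x100 pixels) are illustrative assumptions for this sketch, not necessarily the exact figures Tor Browser uses.

```python
# Illustrative sketch of the letterboxing idea in Tor Browser 9.0:
# round the window's inner size down to coarse steps so window
# dimensions lose most of their fingerprinting value.

def letterbox(width, height, w_step=200, h_step=100):
    """Round a window's inner size down to the nearest step multiple."""
    content_w = max(w_step, (width // w_step) * w_step)
    content_h = max(h_step, (height // h_step) * h_step)
    return content_w, content_h

# Two users with slightly different windows report the same content size.
print(letterbox(1366, 768))   # -> (1200, 700)
print(letterbox(1280, 720))   # -> (1200, 700)
```

The unused margin is simply painted as grey bars around the page, which is why the effect is called letterboxing.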
  15. Sky-high rates of return were promised to participants. Executives of a US company are being accused of raising at least $11 million through a cryptocurrency-based Ponzi scheme. This week, the US Commodity Futures Trading Commission (CFTC) said that a civil enforcement action has been filed against David Gilbert Saffron and Circle Society, Corp., a Nevada-based firm. According to prosecutors, Saffron allegedly operated a Ponzi scheme with the assistance of other defendants at the company, soliciting and accepting a minimum of $11 million in both Bitcoin (BTC) and US dollars. These funds were taken from investors on the promise that their 'investment' would be traded and exchanged for binary options on foreign currencies as well as various cryptocurrencies. Participants were reportedly promised a guaranteed return of 300 percent. As is usually the case with a lure of huge returns for no effort, and guaranteed to boot, the promise was empty. The CFTC says that from December 2017 to the current date, a Ponzi pool was operated by Circle Society, backed by fraudulent claims concerning Saffron's trading experience. However, rather than using the cash offered by 14 investors to trade in binary options, the funds were used to pay off other participants, perpetuating the scheme further. Saffron and Circle Society are charged with fraudulent solicitation, misappropriation, and registration violations. An order has also been issued and extended by a US court to freeze their assets. "Fraudulent schemes, like that alleged in this case, not only cheat innocent people out of their hard-earned money, but they threaten to undermine the responsible development of these new and innovative markets," said CFTC Chairman Heath Tarbert. The CFTC hopes the action will result in the full compensation of victims, trading bans, and penalties -- but cautions that unless the case is proved and money can be recovered, there may not be any restitution possible. 
A hearing is scheduled for October 29, 2019. Ponzi schemes, fraudulent Initial Coin Offerings (ICOs), and exit scams are rife in the cryptocurrency space and are a headache for regulators to manage. In May, the operators of OneCoin, Konstantin Ignatov and self-confessed 'crypto queen' Ruja Ignatova, became central to a class-action lawsuit that claims the pair ran a multimillion-dollar cryptocurrency Ponzi scheme. The complainants say that Ignatov and Ignatova's ICO "purported cryptocurrency that never really existed, on a blockchain that never really existed, born from mining farms that never really existed, yet fraudulently sold to investors throughout the world through a densely-packed multi-level-marketing system." OneCoin is estimated to have generated $4 billion in revenue. A whistleblower who has spoken out against the project has reportedly received death threats. The Bulgaria-based company is still trading. Source
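The mechanics the CFTC describes – paying earlier participants out of later deposits – can be captured in a toy model. All the numbers below (deposit size, growth rates, the payout delay) are invented for illustration; the point is simply that a "guaranteed" multiple on every deposit is only payable while new money grows faster than the promises fall due, which is why every such pool eventually collapses.

```python
# Toy Ponzi-pool model (all numbers invented). Each round a new deposit
# arrives, growing by `growth` per round; `delay` rounds later the pool
# must pay that depositor `promised_multiple` times their money.

def rounds_until_collapse(growth, promised_multiple=3.0, delay=3):
    """Return the round at which the pool first misses a payout,
    or None if it survives the whole simulation."""
    pool = 0.0
    deposits = []
    for t in range(200):
        d = 100.0 * growth ** t      # this round's new deposit
        deposits.append(d)
        pool += d
        if t >= delay:
            due = promised_multiple * deposits[t - delay]
            if pool < due:
                return t             # promise broken: scheme collapses
            pool -= due
    return None

# Modest recruitment growth: the 300% promise fails within a few rounds.
print(rounds_until_collapse(growth=1.2))   # -> 5
# Only explosive recruitment (growth**delay > multiple) keeps it afloat.
print(rounds_until_collapse(growth=1.5))   # -> None
```

The sustainability condition `growth ** delay > promised_multiple` is why such schemes depend entirely on ever-faster recruitment.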
  16. Planting Tiny Spy Chips in Hardware Can Cost as Little as $200 A new proof-of-concept hardware implant shows how easy it may be to hide malicious chips inside IT equipment. Illustration: Casey Chin; Getty Images More than a year has passed since Bloomberg Businessweek grabbed the lapels of the cybersecurity world with a bombshell claim: that Supermicro motherboards in servers used by major tech firms, including Apple and Amazon, had been stealthily implanted with a chip the size of a rice grain that allowed Chinese hackers to spy deep into those networks. Apple, Amazon, and Supermicro all vehemently denied the report. The NSA dismissed it as a false alarm. The Defcon hacker conference awarded it two Pwnie Awards, for "most overhyped bug" and "most epic fail." And no follow-up reporting has yet affirmed its central premise. But even as the facts of that story remain unconfirmed, the security community has warned that the possibility of the supply chain attacks it describes is all too real. The NSA, after all, has been doing something like it for years, according to the leaks of whistle-blower Edward Snowden. Now researchers have gone further, showing just how easily and cheaply a tiny, tough-to-detect spy chip could be planted in a company's hardware supply chain. And one of them has demonstrated that it doesn't even require a state-sponsored spy agency to pull it off—just a motivated hardware hacker with the right access and as little as $200 worth of equipment. "It’s not magical. It’s not impossible. I could do this in my basement." Monta Elkins, FoxGuard At the CS3sthlm security conference later this month, security researcher Monta Elkins will show how he created a proof-of-concept version of that hardware hack in his basement. He intends to demonstrate just how easily spies, criminals, or saboteurs with even minimal skills, working on a shoestring budget, can plant a chip in enterprise IT equipment to offer themselves stealthy backdoor access. 
(Full disclosure: I'll be speaking at the same conference, which paid for my travel and is providing copies of my forthcoming book to attendees.) With only a $150 hot-air soldering tool, a $40 microscope, and some $2 chips ordered online, Elkins was able to alter a Cisco firewall in a way that he says most IT admins likely wouldn't notice, yet would give a remote attacker deep control. "We think this stuff is so magical, but it’s not really that hard," says Elkins, who works as "hacker in chief" for the industrial-control-system security firm FoxGuard. "By showing people the hardware, I wanted to make it much more real. It’s not magical. It’s not impossible. I could do this in my basement. And there are lots of people smarter than me, and they can do it for almost nothing." A Fingernail in the Firewall Elkins used an ATtiny85 chip, about 5 millimeters square, that he found on a $2 Digispark Arduino board; not quite the size of a grain of rice, but smaller than a pinky fingernail. After writing his code to that chip, Elkins desoldered it from the Digispark board and soldered it to the motherboard of a Cisco ASA 5505 firewall. He used an inconspicuous spot that required no extra wiring and would give the chip access to the firewall's serial port. The image below gives a sense of how tough spotting the chip would be amidst the complexity of a firewall's board—even with the relatively small, 6- by 7-inch dimensions of an ASA 5505. Elkins suggests he could have used an even smaller chip but chose the ATtiny85 because it was easier to program. He says he also could have hidden his malicious chip even more subtly, inside one of several radio-frequency shielding "cans" on the board, but he wanted to be able to show the chip's placement at the CS3sthlm conference. 
The bottom side of a Cisco ASA 5505 firewall motherboard, with the red oval marking the 5-millimeter-square chip that Elkins added. Photograph: Monta Elkins Elkins programmed his tiny stowaway chip to carry out an attack as soon as the firewall boots up in a target's data center. It impersonates a security administrator accessing the configurations of the firewall by connecting their computer directly to that port. Then the chip triggers the firewall's password recovery feature, creating a new admin account and gaining access to the firewall's settings. Elkins says he used Cisco's ASA 5505 firewall in his experiment because it was the cheapest one he found on eBay, but he says that any Cisco firewall that offers that sort of recovery in the case of a lost password should work. "We are committed to transparency and are investigating the researcher’s findings," Cisco said in a statement. "If new information is found that our customers need to be aware of, we will communicate it via our normal channels." Once the malicious chip has access to those settings, Elkins says, his attack can change the firewall's settings to offer the hacker remote access to the device, disable its security features, and give the hacker access to the device's log of all the connections it sees, none of which would alert an administrator. "I can basically change the firewall's configuration to make it do whatever I want it to do," Elkins says. Elkins says with a bit more reverse engineering, it would also be possible to reprogram the firmware of the firewall to make it into a more full-featured foothold for spying on the victim's network, though he didn't go that far in his proof of concept. A Speck of Dust Elkins' work follows an earlier attempt to reproduce far more precisely the sort of hardware hack Bloomberg described in its supply chain hijacking scenario. 
As part of his research presented at the Chaos Computer Conference last December, independent security researcher Trammell Hudson built a proof of concept for a Supermicro board that attempted to mimic the techniques of the Chinese hackers described in the Bloomberg story. That meant planting a chip on the part of a Supermicro motherboard with access to its baseboard management controller, or BMC, the component that allows it to be remotely administered, offering a hacker deep control of the target server. Hudson, who worked in the past for Sandia National Labs and now runs his own security consultancy, found a spot on the Supermicro board where he could replace a tiny resistor with his own chip to alter the data coming in and out of the BMC in real time, exactly the sort of attack that Bloomberg described. He then used a so-called field-programmable gate array—a reprogrammable chip sometimes used for prototyping custom chip designs—to act as that malicious interception component. "For an adversary who wants to spend any money on it, this would not have been a difficult task." Security researcher Trammell Hudson Hudson's FPGA, at less than 2.5 millimeters square, was only slightly larger than the 1.2-millimeter-square resistor it replaced on the Supermicro board. But in true proof-of-concept style, he says he didn't actually make any attempts to hide that chip, instead connecting it to the board with a mess of wiring and alligator clips. Hudson argues, however, that a real attacker with the resources to fabricate custom chips—a process that would likely cost tens of thousands of dollars—could have carried out a much more stealthy version of the attack, fabricating a chip that carried out the same BMC-tampering functions and fit into a much smaller footprint than the resistor. The result could even be as small as a hundredth of a square millimeter, Hudson says, vastly smaller than Bloomberg's grain of rice. 
"For an adversary who wants to spend any money on it, this would not have been a difficult task," Hudson says. "There’s no need for further comment about false reports from more than a year ago," Supermicro said in a statement. But Elkins points out that his firewall-based attack, while far less sophisticated, doesn't require that custom chip at all—only his $2 one. "Don’t discount this attack because you think someone needs a chip fab to do it," Elkins says. "Basically anyone who’s an electronic hobbyist can do a version of this at home." Elkins and Hudson both emphasize that their work isn't meant to validate Bloomberg's tale of widespread hardware supply chain attacks with tiny chips planted in devices. They don't even argue that it's likely to be a common attack in the wild; both researchers point out that traditional software attacks can often give hackers just as much access, albeit not necessarily with the same stealth. But both Elkins and Hudson argue that hardware-based espionage via supply-chain hijacking is nonetheless a technical reality, and one that may be easier to accomplish than many of the world's security administrators realize. "What I want people to recognize is that chipping implants are not imaginary. They’re relatively straightforward," says Elkins. "If I can do this, someone with hundreds of millions in their budget has been doing this for a while." Source: Planting Tiny Spy Chips in Hardware Can Cost as Little as $200
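The console-side automation described in the article – the implant waits for the firewall to boot, then drives its serial port as if an administrator were typing – boils down to a very small state machine. The sketch below is purely illustrative: the boot banner and the command strings are hypothetical placeholders (not real Cisco ASA syntax), and the serial port is replaced by a fake transport so the logic can run anywhere.

```python
# Illustrative state machine for a console-driving implant, per the
# attack flow described in the article. All banner/command strings are
# hypothetical placeholders; FakeConsole stands in for a serial port.

class FakeConsole:
    """Scripted device output in, recorded keystrokes out."""
    def __init__(self, lines):
        self.lines = list(lines)
        self.sent = []

    def read_line(self):
        return self.lines.pop(0) if self.lines else ""

    def write_line(self, text):
        self.sent.append(text)

def run_implant(console, boot_banner="DEVICE BOOTED",
                recovery_cmds=("ENTER-RECOVERY", "ADD-ADMIN backdoor")):
    """Wait (bounded) for the boot banner, then replay a command script."""
    for _ in range(100):                 # bounded wait for the device to boot
        if boot_banner in console.read_line():
            break
    for cmd in recovery_cmds:            # impersonate an admin at the console
        console.write_line(cmd)
    return console.sent

console = FakeConsole(["...", "DEVICE BOOTED", "prompt>"])
print(run_implant(console))   # -> ['ENTER-RECOVERY', 'ADD-ADMIN backdoor']
```

The point of the sketch is how little logic an ATtiny85-class chip needs: watch one serial line, match one string, replay a fixed script.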
  17. Rogerio Luar

    Encryption Programs

    Can someone recommend good encryption programs? I have a laptop with confidential information; if it gets lost or stolen, malicious people could remove the hard drive and access this information.
  18. UPDATE 1 UPDATE 2 ------------------------------------------ 1) - Spycar What is Spycar? Spycar is a suite of tools designed to mimic spyware-like behavior, but in a benign form. Intelguardians created Spycar so anyone could test the behavior-based defenses of an anti-spyware tool. Spycar runs only on Windows, the same platform most targeted by spyware developers. What does Spycar do? The following links are Spycar. Clicking on each of the links will make Spycar try to take some benign action on your system. When you first run it, Spycar will ask you to name a test profile, a small file where we'll store state information about a given series of Spycar tests you perform. Then, when you click on each link, Spycar works by pushing a Windows executable to your browser. Currently, Spycar runs only on Windows, and its browser-centric alterations focus on IE, although it can be triggered by any Windows browser (Firefox-altering Spycar modules will be released soon!). Spycar does not include any exploits, so you must click "OK" in the message that appears in your browser to run the given Spycar function. If, after you click "OK", your anti-spyware tool blocks the given Spycar action, good for you! If not, this benign alteration will occur. Then, when you have clicked each of these links, you can click on the Results/Clean-Up link to have the Spycar tool called TowTruck automatically measure how your anti-spyware tool did, and to restore your machine to the pre-Spycar settings. Note that we designed Spycar as a series of different links and associated executables. We did not make it a monolithic one-click-to-conduct-all-actions program, because an anti-spyware tool may shut down a given program early on in its cycle, without letting Spycar accurately test later modules. That's why you have to click on each link, giving your anti-spyware tool a fair shot at stopping each individual action. 
Spycar Tests Spycar Homepage 2) - Shields UP Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer's data to the entire world at this very moment! GRC Shields UP Test 3) - DNS Nameserver Spoofability Test Can you trust your Domain Name Servers? You and your web browser would believe you were at your banking site. You entered the URL correctly, or used a reliable link or shortcut. Everything would look right. But you would be logged onto a malicious foreign web site which was ready and able to capture your private banking information. DNS Spoofability Test 4) - Symantec Security Check Symantec Security Test 5) - PC Security Test PC Security Test is a free program for Windows that checks computer security against viruses, spyware and hackers. With a few mouse clicks, users can easily control the efficiency of their protection software (anti-virus programs, spyware scanners and firewalls). PC Security Test simulates virus, spyware and hacking attacks and monitors the responses of your protection software. Don't worry, no real viruses are involved! After the tests are complete, PC Security Test computes a security index and provides tips on improving PC security. Download PC Security Test Homepage 6) - PC Flanks Battery of Tests PC Flanks Tests 7) - Security Scan from Audit My PC – scans done: Firewall Scanner, Privacy Scanner, Exploit Scanner. Audit My PC 8 ) - Test My PC Security Battery of Tests. Test My PC Security has a wide range of downloadable firewall leak and HIPS tests so you can find out just how good your security software is. Firewall Leak Tests – Firewall leak tests are written to test how effective the firewall component of your security software is at detecting and blocking outgoing connection attempts. 
If a program is able to connect to the internet without your knowledge then it is capable of transmitting any private data you may have on your machine. The techniques used by these programs are sophisticated but are representative of real world threats – so your firewall needs to block them. HIPS Tests – Tests designed to check how well your security software protects your internal system from attack by malicious executables such as viruses. A good HIPS system will restrict access to your critical operating system files, registry keys, COM interfaces and running processes. It should block untrusted processes from modifying the memory space of other programs and stop malware whenever it tries to install itself. Firewall Leak and HIPS Tests – These tests are designed to test both of the above at the same time (both the Firewall and Host Intrusion Prevention components of your software). Download Complete Set of Tests (Zip) Individual Tests Home Page 9) - Belarc Advisor - Free Personal PC Audit The Belarc Advisor builds a detailed profile of your installed software and hardware, missing Microsoft hotfixes, anti-virus status, CIS (Center for Internet Security) benchmarks, and displays the results in your Web browser. All of your PC profile information is kept private on your PC and is not sent to any web server. Download Belarc Advisor Belarc Home Page 10) - Qualys BrowserCheck Performs a security analysis of your browser, its installed and missing plug-ins, and any missing security patches or other security issues. Qualys BrowserCheck 11) - Browser Spy BrowserSpy.dk is a collection of online tests that shows you just how much personal information can be collected from your browser just by visiting a page. BrowserSpy.dk can tell you all kinds of detailed information about you and your browser. 
Information ranging from simple stuff like the name and version of your browser to more detailed stuff like what kind of fonts you have installed and what hardware you're running on. You name it, BrowserSpy.dk shows it! When you surf around the internet your browser leaves behind a trail of digital footprints. Websites can use these footprints to check your system. BrowserSpy.dk is a service where you can check just what information it's possible to gather from your system, just by visiting a website. Privacy put to the ultimate test! Browser Spy 12) - Eicar Test File The EICAR test file – your antivirus should alert you to both of the files when you click on them. If it doesn't, let them download, then extract or run them, or scan your PC with your AV scanner. If it is working, your AV scanner should alert you to this (fake) threat... Eicar2com test Zip eicar.com 13) - Firewall Leak Tester Download Firewall Leak Test Leak Test Home Page 14) - Zemana Logging Tests. These test programs simulate the activities of different loggers. If your security software is protecting you proactively, then the simulation should trigger a warning message. No warning means no proactive protection... and probably no protection at all! If the simulation does not trigger a warning, then your current security software does not protect you. http://zemana.com/SecurityTests.aspx 15) - Spy Shelter Security Test Tool Download Spy Shelter Test Spy Shelter Home Page 16) - BufferZone Security Test Tool In the following demo, we will simulate what will happen when you receive a malicious file. It could come in through any number of ways: browsing, as an email attachment, from a USB storage device, just to name a few. We will attempt to prove that none of your security system's defense layers will identify or alert you to our intrusion attempt. Note: This is only a demo and no actual damage will be caused to your PC. 
Download Test File BufferZone Test Homepage 17) - Matousec Security Software Testing Suite Security Software Testing Suite (SSTS) is a set of tools used for testing Windows security software that implements application-based security – i.e. most of the Internet security suites, HIPS, personal firewalls, behavior blockers etc. SSTS is based on the idea of independent programs that attempt to bypass various features of the security software. Each test of SSTS is directed against a single feature or against a few closely connected features of the security software. Download SSTS. Matousec SSTS Homepage 18) - RUBotted - Test if your PC is acting like a bot. RUBotted monitors your computer for potential infection and suspicious activities associated with bots. Bots are malicious files that enable cybercriminals to secretly take control of your computer. As more bots secretly take control of computers and use these infected machines in malicious activities, bot networks are becoming more resilient. The emergence of new bot families and the continued proliferation of some of the threat landscape's most notorious botnets only reinforce the need for a reliable solution against botnets. 
It is capable of detecting known and unknown variants of known botnet families, including some of the most notorious botnets today: ZBOT/ZeuS – bank information stealer; KOOBFACE – most successful Web 2.0 botnet; WALEDAC – infamous spamming bot. Download RUBotted RUBotted Homepage 19) Comodo Tests (Thanks to Alienforce1) Comodo Parent Injection Leak Test Suite (contains 3 tests) The CPIL suite contains three separate tests especially developed by Comodo engineers to test a firewall's protection against parent injection leak attacks. Download CPIL -------------------- Comodo HIPS and Firewall Leak Test Suite (contains 5 tests) Comodo's latest suite of tests covers a wider range of exploits and will quickly inform you if your computer is vulnerable to rootkits, Background Intelligent Transfer attacks and process injection attacks. Download HIPS and Firewall Test 20) Phish Test Verify the authenticity of a URL with this online live tool. Suspect a link to be phishy? Test it here and see if it has been reported as a web forgery or not. Another way to use the tool is to check your system for phishing safety: copy a link from a website which has already been reported to be a web forgery, open it in your browser and see if you get any alerts. PhishTank PS – please read all the instructions on a test's web site thoroughly and completely before running or performing a test. The poster cannot be held responsible for any loss of data, loss of system stability, system crashes, BSODs, system failures or, for that matter, anything that may arise while or after performing a test! Nothing serious, just a random precautionary statement – all tests are safe. Go ahead, try them and test your system...
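The EICAR file mentioned in item 12 above is worth spelling out: it is a fixed 68-byte ASCII string that antivirus vendors agree to flag as if it were malware, so you can check that a scanner's real-time protection is active without handling anything dangerous. A minimal sketch (the output filename is arbitrary):

```python
# The standard EICAR anti-virus test string: 68 ASCII bytes that every
# compliant AV scanner treats as a detection, despite being harmless.
EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def write_eicar(path="eicar_test.txt"):
    """Write the test string to disk; a working on-access scanner
    should quarantine or delete the file almost immediately."""
    with open(path, "w", newline="") as f:
        f.write(EICAR)
    return path

print(len(EICAR))   # -> 68
```

If the file survives on disk and a manual scan raises no alert, your AV is not inspecting files at all.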
  19. Brian12

    Malware Removal Guide

    "This guide will help you remove malicious software from your computer. If you think your computer might be infected with a virus or trojan, you may want to use this guide. It provides step-by-step instructions on how to remove malware from Windows operating system. It highlights free malware removal tools and resources that are necessary to clean your computer. You will quickly learn how to remove a virus, a rootkit, spyware, and other malware." Guide: http://www.selectrealsecurity.com/malware-removal-guide I'll be posting updates. :)
  20. LAS VEGAS (Reuters) - Apple Inc (AAPL.O) is offering cyber security researchers up to $1 million to detect flaws in iPhones, the largest reward offered by a company to defend against hackers, at a time of rising concern about governments breaking into the mobile devices of dissidents, journalists and human rights advocates. Unlike other technology providers, Apple previously offered rewards only to invited researchers who tried to find flaws in its phones and cloud backups. At the annual Black Hat security conference in Las Vegas on Thursday, the company said it would open the process to all researchers, add Mac software and other targets, and offer a range of rewards, called "bounties," for the most significant findings. The $1 million prize would apply only to remote access to the iPhone kernel without any action from the phone's user. Apple's previous highest bounty was $200,000 for friendly reports of bugs that can then be fixed with software updates and not leave them exposed to criminals or spies. Government contractors and brokers have paid as much as $2 million for the most effective hacking techniques to obtain information from devices. Apple's new bounties, however, are in the same range as some published prices from contractors. Apple is taking other steps to make research easier, including offering a modified phone that has some security measures disabled. A number of private companies, such as Israel's NSO Group, sell hacking capabilities to governments to target their critics. One such attack was made against a friend of Washington Post columnist Jamal Khashoggi, a critic of the Saudi Arabian government, who was murdered inside the Saudi consulate in Istanbul in October 2018. A principal component of such breaches is programs that take advantage of otherwise unknown flaws in the phones, their software or installed applications. 
“NSO Group develops technology that is licensed to intelligence and law enforcement agencies for the sole purpose of preventing and investigating terror and crime,” NSO said in a statement. “It is not a tool to target journalists for doing their job or to silence critics.” Source: Reuters
  21. Last week we learned about DataSpii, a report by independent researcher Sam Jadali about the “catastrophic data leak” wrought by a collection of browser extensions that surreptitiously extracted their users’ browsing history (and in some cases portions of visited web pages). Over four million users may have had sensitive information leaked to data brokers, including tax returns, travel itineraries, medical records, and corporate secrets.

While DataSpii included extensions in both the Chrome and Firefox extension marketplaces, the majority of those affected used Chrome. Naturally, this led reporters to ask Google for comment. In response to questions about DataSpii from Ars Technica, Google officials pointed out that they have “announced technical changes to how extensions work that will mitigate or prevent this behavior.” Here, Google is referring to its controversial set of proposed changes to curtail extension capabilities, known as Manifest V3.

As both security experts and the developers of extensions that will be greatly harmed by Manifest V3, we’re here to tell you: Google’s statement just isn’t true. Manifest V3 is a blunt instrument that will do little to improve security while severely limiting future innovation. To understand why, we have to dive into the technical details of what Manifest V3 will and won’t do, and what Google should do instead.

The Truth About Manifest V3

To start with, the Manifest V3 proposal won't do much about evil extensions extracting people’s browsing histories and sending them off to questionable data aggregators. That’s because Manifest V3 doesn’t change the observational APIs available to extensions. (For extension developers, that means Manifest V3 isn’t changing the observational parts of chrome.webRequest.) In other words, Manifest V3 will still allow extensions to observe the same data as before, including what URLs users visit and the contents of pages users visit.
(Privacy Badger and other extensions rely on these observational APIs.) Additionally, Manifest V3 won’t change anything about how “content scripts” work. Content scripts are pieces of Javascript that allow extensions to interact with the contents of web pages: an important capability that lets extensions deliver useful functionality, but also yet another way to extract user browsing data.

One change in Manifest V3 that may or may not help security is how extensions get permission to interact with websites. Under Manifest V3, users will be able to choose, when they’re visiting a website, whether or not they want to give the extension access to the data on that website. Of course it’s not practical to have to allow an ad- or tracker-blocker or accessibility-focused extension every time you visit a new site, so Chrome will still allow users to give extensions permission to run on all sites. As a result, extensions that are designed to run on every website—like several of those involved in DataSpii—will still be able to access and leak data.

The only part of Manifest V3 that goes directly to the heart of stopping DataSpii-like abuses is banning remotely hosted code. You can’t ensure extensions are what they appear to be if you give them the ability to download new instructions after they’re installed. But you don't need the rest of Google’s proposed API changes to stop this narrow form of bad extension behavior.

Manifest V3 Crushes Innovation

What Manifest V3 does do is stifle innovation. Google keeps claiming that the proposed changes are not meant to “[prevent] the development of ad blockers.” Perhaps not, but what they will do in their present form is effectively destroy powerful privacy and security tools such as uMatrix and NoScript.
That’s because a central part of Manifest V3 is the removal of a set of powerful capabilities that uMatrix, NoScript, and other extensions rely on to protect users (for developers, we’re talking about request modification using chrome.webRequest). Currently, an extension with the right permissions can review each request before it goes out, examine and modify the request however it wants, and then decide to complete the request or block it altogether. This enables a whole range of creative, innovative, and highly customizable extensions that give users nearly complete control over the requests that their browser makes.

Manifest V3 replaces these capabilities with a narrowly defined API (declarativeNetRequest) that will limit developers to a preset number of ways of modifying web requests. Extensions won’t be able to modify most headers or make decisions about whether to block or redirect based on contextual data. This new API appears to be based on a simplified version of Adblock Plus. If your extension doesn’t work just like Adblock Plus, you will find yourself trying to fit a square peg into a round hole.

If you think of a cool feature in the future that doesn’t fit into the Adblock Plus model, you won’t be able to make an extension using your idea unless you can get Google to implement it first. Good luck! Google doesn’t have an encouraging track record of implementing functionality that extension developers want, and doing so is not at the top of Google’s own priority list. Legitimate use cases will never get a chance in Chrome for any number of reasons. Whether due to lack of resources or plain apathy, the end result will be the same: removing these capabilities means less security and privacy protection for Chrome’s users.

For developers of ad- and tracker-blocking extensions, flexible APIs aren’t just nice to have, they are a requirement. When particular privacy protections gain popularity, ads and trackers evolve to evade them.
As a result, the blocking extensions need to evolve too, or risk becoming irrelevant. We’ve already seen trackers adapt in response to privacy features like Apple’s Intelligent Tracking Prevention and Firefox’s built-in content blocking; in turn, pro-privacy browsers and extensions have had to develop innovative new countermeasures. If Google decides that privacy extensions can only work in one specific way, it will be permanently tipping the scales in favor of ads and trackers.

The Real Solution? Enforce Existing Policies

In order to truly protect users, Google needs to start properly enforcing existing Chrome Web Store policies. Not only did it take an independent researcher to identify this particular set of abusive extensions, but the abusive nature of some of the extensions in the report has been publicly known for years. For example, HoverZoom was called out at least six years ago on Reddit.

Unfortunately, the collection of extensions uncovered by DataSpii is just the latest example of an ongoing pattern of abuse in the Chrome Web Store. Extensions are bought out (or sometimes outright hijacked), and then updated to steal users’ browsing histories and/or commit advertising fraud. Users complain, but nothing seems to happen. Often the extension is still available months later. The “Report Abuse” link doesn't seem to produce results, obfuscated code doesn't seem to trigger red flags, and no one responds to user reviews. “SHINE for reddit” stayed up for several years while widely known to be an advertising referrals hijacker that fetched and executed remote code. A study from 2015 demonstrated various real-world obfuscation and remote code execution techniques. A study from 2017 analyzed the volume of outgoing traffic to detect history leakage.

The common thread here is that the Chrome Web Store does not appear to have the oversight to reject suspicious extensions. The extensions swept up by DataSpii are not obscure by any measure.
According to the DataSpii report, some of the extensions had anywhere from 800,000 to 1.4+ million users. Is it too much to ask a company that makes billions in profit every year to prioritize reviewing all popular extensions? Had Google systematically started reviewing extensions when the scope of Chrome Web Store abuse first became clear years ago, it would have been in a position to catch malicious extensions before they ever went live.

Ultimately, users need the autonomy to install the extensions of their choice to shape their browsing experience, and the ability to make informed decisions about the risks of using a particular extension. Better review of extensions in the Chrome Web Store would promote informed choice far better than limiting the capabilities of powerful, legitimate extensions.

Google could have banned remote code execution a long time ago. It could have started responding promptly to extension abuse reports. It could have invested in automated and manual extension review. Instead, after years of missed opportunities, Google has given us Manifest V3: a nineteen-page document with just one paragraph regarding remote code execution—the actual extension capabilities oversight that continues to allow malicious extensions to exfiltrate your browsing history.

The next time Google claims that Manifest V3 will be better for user privacy and security, don’t believe the hype. Manifest V3 will do little to prevent the sort of data leaks involved in DataSpii. But it will curtail innovation and hurt the privacy and security of Chrome users.

Source: EFF
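The API shift at the center of the EFF's argument, imperative chrome.webRequest handlers versus declarative declarativeNetRequest-style rule lists, can be illustrated with a small sketch. This is plain Python, not Chrome's actual extension API; the request shape, rule format, and function names are all hypothetical stand-ins for the two models.

```python
from urllib.parse import urlparse

# Illustrative sketch (not Chrome's real API): the imperative webRequest
# model lets extension code make a contextual decision for every request,
# while a declarative model reduces the extension to a fixed pattern list.

def imperative_listener(request):
    """webRequest-style: arbitrary logic runs per request."""
    req_host = urlparse(request["url"]).hostname
    page_host = urlparse(request["initiator"]).hostname
    # Contextual rule: block anything third-party to the current page.
    return "block" if req_host != page_host else "allow"

# declarativeNetRequest-style: the extension supplies data, not code.
# A substring pattern cannot express "third-party relative to whatever
# page the user happens to be on".
RULES = [{"url_substring": "tracker", "action": "block"}]

def declarative_match(request, rules):
    for rule in rules:
        if rule["url_substring"] in request["url"]:
            return rule["action"]
    return "allow"

req = {"url": "https://cdn.adnetwork.example/pixel.gif",
       "initiator": "https://news.example/story"}
print(imperative_listener(req))       # block (cross-origin request)
print(declarative_match(req, RULES))  # allow (no pattern matches)
```

The point of the sketch: the imperative listener expresses a contextual policy ("block anything third-party to this page") that no fixed pattern list can, which is why uMatrix-style extensions depend on the webRequest model.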
  22. Kaspersky fingers pro-G filters for letting cyber-muck through

Spammers are abusing the preferential treatment Google affords its own apps to score free passes through Gmail's spam filters, it was claimed this week. The ad giant greases the wheels so that incoming messages involving Google Calendar and other Big-G apps slide through the filters and appear in Gmail inboxes, to ensure stuff generated and shared via its applications isn't silenced by its own webmail product. This situation, according to Kaspersky bods this week, is being exploited by scam artists to lob spam, phishing pages, and links to malware-flinging websites at netizens, in some cases without triggering Gmail's defenses.

"The spammer’s main task is to bypass the spam filter and deliver email to your inbox," Kaspersky analyst Maria Vergelis helpfully reminded us. "As it happens, Google services often send email notifications to Gmail inboxes — and Google’s antispam module avoids flagging notifications from its own services as spam."

Because Google usually allows these kinds of notifications through, scammers have found they can schedule a load of events in Google Calendar, inviting Gmail users en masse, and, when the set time draws near, generate a wave of reminders that include spam, phishing links, and so on, at least some of which slips through Gmail's filters. For example, a scammer could send a block of Gmail users a Calendar invite whose description is a link to a fake banking site. Rather than catch and filter out the e-nasty, Gmail would let the notification through and, when the person clicked the link, they would be taken to the phony bank page. If the recipient has Calendar set to automatically accept invites, they would even get a pop-up notification of the spam message.

It is not only Calendar that is being gamed by scam artists. Vergelis noted that Google Photos is also a popular method for evading filters.
In that case, the spam would either be placed within the image file or its description – for example, the image could be a picture of a check and the description would be instructions on how to claim it, which would typically involve handing over personal information for nothing in return. Again, thanks to Google's overly slick sharing features, the recipient would get a notification in their Gmail inbox that they had a shared photo waiting for them, and the spam itself would be delivered without being troubled by a filter.

Additionally, Kaspersky's team said Google Forms is being used to serve up fake surveys that harvest personal information, and Google Drive is being abused to host phishing pages, malware, and ad pages. Even Google Analytics is being turned into a tool for criminals: Vergelis said her team reported seeing businesses targeted with visitor-statistics PDF files containing the spammer's links or information. In short, pretty much any Google service that integrates with Gmail can and will be abused to get as much spam into your inbox as possible, and the same goes for other services like Facebook and Twitter that allow users to send each other event notifications.

"The main problem is that messages sent through a legal service are assigned its standard headers, so spam filters often view them as harmless," Vergelis explained. "And spam subjects vary widely, so interception requires a high threshold level in the spam filter, which can lead to excessive false positives. Spammers take advantage of this to exploit public services for their own purposes."

In response to Kaspersky's findings, a Google spokesperson provided the Russian antivirus biz the following statement, which was shared with El Reg: "Google’s Terms of Service and product policies prohibit the spreading of malicious content on our services, and we work diligently to prevent and proactively address abuse.
"Combating spam is a never-ending battle, and while we've made great progress, sometimes spam gets through. We remain deeply committed to protecting all of our users from spam: we scan content on Photos for spam and provide users the ability to report spam in Calendar, Forms, Google Drive, and Google Photos, as well as block spammers from contacting them on Hangouts. "In addition, we offer security protections for users by warning them of known malicious URLs via Google Chrome's Safe Browsing filters." Source
  23. Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco

Simple hack turns them into super secret spying tool

A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers. The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere, including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices. It has a built-in SIM card that is used to pinpoint the location of the user, as well as provide hands-free communications through a speaker and mic. As such, it is most commonly used by elderly people in case of a fall, and on children whose parents want to be able to know where they are and contact them if necessary.

But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone. The flaw also enables a third party to turn on and off all the key features of the products, such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message.

The flaw was introduced in an update to the product: originally, the portable fob communicated with a base station that was plugged into a phone line, an approach that provided no clear attack route. But in order to expand its range and usefulness, the SIM card was added so the device was not reliant on a base station and would work over the mobile network. The problem arises from the fact that the Chinese manufacturer built a PIN into the device so it would be locked to the telephone number programmed into the device.
Which is fine, except that the PIN was disabled by default and is not needed to reboot or reset the device. And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn't need the PIN to make changes to the other functions. Which all amounts to remote access.

Random access memory

But how would you find out the device's number? Well, the researchers got hold of one such device and its number, and then ran a script that sent messages to thousands of similar numbers to see if they hit anything. They did. "Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent)," they wrote. "So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!"

The good news is that it is easy to fix – in new devices. You would simply add a unique code to each device and require that it be used to reset the device. And you could limit the device to only receive calls or texts from a list of approved contacts. But in the devices already on the market, the fix is not so easy: even after using the default PIN to lock a device down, it can still be reset, because the reset does not require the PIN to be entered.

The researchers say they have contacted the companies that use the device "to help them understand the risks posed by our findings," and that those companies are "looking into and are actively recalling devices." But they also note that some have not responded. In short, poor design and the lack of a decent security audit prior to putting the updated product on the market have turned what is supposed to provide peace of mind into a potential stalking and listening nightmare.

Source
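The fix Fidus suggests for new devices, a unique per-device reset code plus an approved-contacts list, can be sketched in a few lines. The message format, phone numbers, and code below are hypothetical; this illustrates the design, not any vendor's firmware.

```python
# Hypothetical sketch of the proposed fix: a reset command must carry the
# device's unique code, and commands from unknown numbers are dropped.
# All numbers, codes, and message formats here are invented for illustration.

APPROVED_CONTACTS = {"+447700900001"}   # numbers allowed to command the device
DEVICE_RESET_CODE = "X7Q2-94KD"         # unique per device, set at manufacture

def handle_sms(sender, text):
    if sender not in APPROVED_CONTACTS:
        return "ignored"                 # unknown numbers get no response
    if text.startswith("RESET"):
        code = text.removeprefix("RESET").strip()
        return "reset" if code == DEVICE_RESET_CODE else "rejected"
    return "ignored"

print(handle_sms("+15550000000", "RESET"))             # ignored
print(handle_sms("+447700900001", "RESET"))            # rejected
print(handle_sms("+447700900001", "RESET X7Q2-94KD"))  # reset
```

Note how this also defeats the researchers' mass-SMS scan: an unapproved number gets no response at all, so vulnerable devices can no longer be enumerated cheaply.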
  24. PortExpert - Cybersecurity at your finger tips

PortExpert gives you a detailed view of your personal computer's cybersecurity. It automatically monitors all applications connected to the Internet and gives you the information you might need to identify potential threats to your system.

- Monitors applications using TCP/UDP communications
- User-friendly interface
- Identifies remote servers (WhoIs service)
- Opens the containing folder of any application
- Easy online search for more information
- Automatic identification of related services: FTP, HTTP, HTTPS, ...
- Capability to show/hide system-level processes
- Capability to show/hide loopbacks
- Time freeze function

Web page: https://www.kcsoftwares.com/?portexpert
Download: https://kcsoftwares.com/files/portexpert_lite.exe
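As a rough illustration of the port-to-service identification a tool like PortExpert performs (this is a minimal stand-in sketch, not PortExpert's actual implementation), a port number can be mapped to a well-known service name using a short table, falling back to the operating system's services database:

```python
import socket

# Minimal sketch of port-to-service identification: a short table of
# well-known ports, with the OS services database as a fallback.
WELL_KNOWN = {21: "ftp", 80: "http", 443: "https"}

def identify_service(port, proto="tcp"):
    if port in WELL_KNOWN:
        return WELL_KNOWN[port]
    try:
        # Consults the system services database (e.g. /etc/services).
        return socket.getservbyport(port, proto)
    except OSError:
        return "unknown"

for port in (21, 80, 443):
    print(port, identify_service(port))  # 21 ftp / 80 http / 443 https
```

A real monitor would pair this lookup with the machine's live connection table to label each remote endpoint, which is the behavior the feature list above describes.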
  25. Department of Justice report highlights several problems with the FBI's automated breach notifications.

The Federal Bureau of Investigation does a poor job of notifying victims of cyber-attacks, a US government report released earlier this week concluded. FBI notifications arrive either too late or contain insufficient information for victims to take action, a report from the Department of Justice's Office of the Inspector General (DOJ-OIG) found. The report analyzed Cyber Guardian, an FBI application for storing information about tips and ongoing investigations. The system also allows agents to enter details about suspected victims, whom Cyber Guardian can later notify via automated messages. But the DOJ-OIG report said FBI agents are not using the system as it is intended.

FBI agents not using the system as designed

For example, interviews with 31 agents revealed that 29 entered victim information in a lead category called "Action," rather than the standard "Victim Notification." Action-labeled leads are treated as active investigations and don't necessarily trigger immediate breach notification emails, as standard entries in the Victim Notification category would. By the time agents finish an Action-labeled investigation, victims have lost crucial time during which they could have learned of the breach and taken protective action. Furthermore, the DOJ-OIG audit found that FBI agents often made mistakes when filling in victim information. Investigators found typos, incorrect dates, and errors in classifying incident severity.

Breach notifications varied in quality

The report also revealed that victim notifications varied in quality, which investigators attributed to the FBI agent entering the data.
Some agents were very descriptive about the incidents they logged in Cyber Guardian, leading to victims receiving useful notifications containing IP addresses linked to the malicious activity, date ranges, and instructions for dealing with the attack's aftermath. On the other hand, some agents provided very few details. According to the DOJ-OIG report, many of these incomplete notifications were created by the same agents, an aspect that investigators said could be corrected through better training.

Auditors also found that the overall breach notification process could be improved if the FBI cooperated with other agencies and allowed them to enter data in Cyber Guardian as well, which would help enrich the quality of some notifications. As a last observation, the DOJ-OIG found that the FBI failed to notify victims of their rights under the Attorney General Guidelines for Victim and Witness Assistance, a document about the rights and legal recourse victims are entitled to.

"The FBI is developing a new system called CyNERGY to replace Cyber Guardian and, although we were unable to test the system, we believe that if CyNERGY operates as intended, it could provide improvements to the current system," the DOJ-OIG said.

Source
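The lead-category problem the report describes comes down to a simple timing rule: "Victim Notification" entries trigger an immediate automated email, while "Action" entries sit until the investigation closes. The function below is a hypothetical illustration of that workflow, not Cyber Guardian's actual code.

```python
# Hypothetical sketch of the Cyber Guardian behavior the DOJ-OIG report
# describes: how long a victim waits for notification depends entirely on
# which lead category the agent chose when filing the entry.

def notification_delay_days(lead_category, investigation_days):
    if lead_category == "Victim Notification":
        return 0                       # automated email goes out immediately
    if lead_category == "Action":
        return investigation_days      # victim waits for the case to close
    raise ValueError("unknown lead category")

# A 90-day investigation filed the wrong way costs the victim 90 days.
print(notification_delay_days("Victim Notification", 90))  # 0
print(notification_delay_days("Action", 90))               # 90
```

With 29 of 31 interviewed agents filing under "Action," the sketch shows why victims routinely lost the window in which they could have responded to the breach.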