Showing results for tags 'data collection'.

Found 12 results

  1. I take no credit for this; it was compiled from the MDL forum and from posts by members of this forum. Manual tools:
     • Microsoft Telemetry Tools Bundle v1.77
     • Windows 10 Lite v9
     • Private WinTen v0.75b
     • Blackbird v6 v1.0.79.3 [works with Win 7/8/8.1/10]
     • O&O ShutUp10 v1.8.1410
     • WPD - Windows Privacy Dashboard v1.3.1532
     • WindowsSpyBlocker v4.29.0
     • Spybot Anti-Beacon v3.5 [works with Win 7/8/8.1/10]
     • W10Privacy v3.4.0.1
     • SharpApp v0.44.20
     • Debotnet v0.7.8
     • Disable Windows 10 Tracking v3.2.3
     • Destroy Windows Spying v1.0.1.0 [works with Win 7/8/8.1/10] [not recommended; no longer updated]
  2. An analysis of 11,430 Play Store apps found that 14.2% used a privacy policy with contradictory statements about user data collection practices.

A large number of Android mobile apps listed on the official Google Play Store contain self-contradictory language in their privacy policies regarding data collection practices. In an academic study published last year, researchers created a tool named PolicyLint that analyzed the language used in the privacy policies of 11,430 Play Store apps. They found that 14.2% (1,618 apps) contained a privacy policy with logically contradictory statements about data collection. Examples include privacy policies that state in one section that they do not collect personal data, only to contradict themselves in subsequent sections by stating that they collect emails or customer names, which are clearly personally identifiable information.

In some cases, templates are to blame

While the research team could not determine app makers' intent in using contradictory statements in their privacy policies, the researchers believe the primary purpose was to mislead users if they ever took the time to read the policies. However, they also discovered evidence to the contrary. For example, the team found 59 apps that used online services to auto-generate a privacy policy. A deeper look at those services revealed that the self-contradictory statements were part of the template itself, not the app maker's addition. "I think we found four-five different templates," said Benjamin Andow of IBM Research, one of the study's authors. However, the vast majority of privacy policies were unique to each app and did not appear to be the result of an accident. In these cases, the research team says the app makers are susceptible to fines from EU and US privacy watchdogs.

"Self-contradictions can lead to the identification of deceptive statements, which are enforceable by the FTC and the DPAs (data protection authorities) of the EU," Andow said, suggesting that their research could be used to track down GDPR abusers.

Notifying vendors

As part of verifying the accuracy of the PolicyLint tool, the research team took a sample of 510 privacy policies with contradictory statements and manually verified their correctness. Since this process involved a careful analysis of each app's entire policy, the team also notified the app makers about their inaccurate privacy policies. From the 510 apps, the team found contact emails for 260 developers, whom they notified via email. Of the 260, 244 received the email; 16 of the public contact addresses turned out to be invalid or unreachable. Of the 244 emails they sent, the researchers received only 11 replies, and only three developers subsequently corrected their policies.

More details are available in the team's white paper, "PolicyLint: Investigating Internal Privacy Policy Contradictions on Google Play," available for download in PDF format from here or here. The team includes researchers and academics from North Carolina State University, the University of Illinois at Urbana-Champaign, and IBM Research. The paper's findings are broadly consistent with another 2019 study, "On The Ridiculousness of Notice and Consent: Contradictions in App Privacy Policies." That study analyzed a bigger sample of Play Store apps for inconsistencies between data collection practices and what was explicitly disclosed in privacy policies. Its research team found that 10.5% of the 68,051 apps analyzed shared personal data with third-party services without declaring it in their privacy policies. Further, only 22.2% of the 68,051 apps explicitly named third-party partners or affiliates in their privacy policies, with the vast majority of apps hiding where collected user data ends up. Source
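The subsumption check at the heart of this kind of analysis can be sketched in a few lines. This is a toy illustration, not the actual PolicyLint tool: the tiny ontology, the (action, data_type) tuple representation of policy sentences, and the function names are all simplified assumptions; the real tool extracts such statements from policy text with NLP and uses a far larger data ontology.

```python
# Toy sketch of PolicyLint-style contradiction detection (hypothetical
# simplification; real PolicyLint parses policy sentences with NLP).

# Toy subsumption ontology: specific data type -> broader category.
ONTOLOGY = {
    "email address": "personal information",
    "customer name": "personal information",
    "device id": "device information",
}

def is_subsumed(specific, general):
    """True if `specific` is the same as, or falls under, `general`."""
    return specific == general or ONTOLOGY.get(specific) == general

def find_contradictions(statements):
    """statements: list of ("collect" | "not_collect", data_type) tuples.

    Flags the pattern described in the article: a broad negative claim
    ("we do not collect personal data") paired with a narrower positive
    one ("we collect your email address").
    """
    contradictions = []
    for act_a, data_a in statements:
        for act_b, data_b in statements:
            if act_a == "not_collect" and act_b == "collect":
                if is_subsumed(data_b, data_a):
                    contradictions.append((data_a, data_b))
    return contradictions

policy = [
    ("not_collect", "personal information"),  # "We do not collect personal data"
    ("collect", "email address"),             # later section: "we collect emails"
]
print(find_contradictions(policy))  # [('personal information', 'email address')]
```

With a fuller ontology and real sentence extraction, the same pairwise check is what surfaces the "we do not collect personal data ... we collect your email" contradictions the study reports.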
  3. Over the weekend, privacy concerns were raised about how Microsoft Edge uploads URLs to SmartScreen without hashing them first. After further testing by BleepingComputer, we learned that Windows 10 also transmits a great deal of potentially sensitive information about your applications to SmartScreen when you attempt to run them.

Over the weekend, security researcher Matt Weeks spotted Microsoft Edge sending the URL of a site being visited to SmartScreen. The URL was not obfuscated or hashed in any way, which raised concerns that Microsoft could track what sites you visit. When communicating with SmartScreen, Edge sends a JSON-encoded POST request to https://nav.smartscreen.microsoft.com/windows/browser/edge/service/navigate/4/sync that includes information about the URL being checked. BleepingComputer was able to confirm this behavior using Fiddler, which showed the JSON being sent to Microsoft over a secure connection.

In addition to sending the URL in unhashed form, Microsoft Edge for some reason also sent the logged-in user's SID, or Security Identifier, to Microsoft. A SID is a unique identifier created by Windows when a new account is added to the operating system. Many users in the Twitter thread expressed concern that sending the URL unhashed is a privacy risk, as it could allow Microsoft to see a user's browsing history. Also sending the user's SID only added to those concerns.

SmartScreen for applications exposes even more data

While Weeks' research focused on how SmartScreen operates when browsing the web, tests by BleepingComputer show that SmartScreen also exposes a great deal of private information when launching an executable. By default, Windows 10 enables a feature called "Check apps and files" that uses Windows Defender SmartScreen to warn you if a file is malicious before you execute it. After downloading a file and attempting to open it, Windows 10 connects to https://checkappexec.microsoft.com/windows/shell/service/beforeExecute/2 and sends a variety of information about the file. In our tests, the information transmitted by Windows 10 included the full path to the file on your computer and the URL you downloaded the file from. None of this information is hashed in any way.

For example, I uploaded a small utility called md5sum.exe to WeTransfer.com, then downloaded that file on another Windows 10 PC and tried to execute it. Windows transmitted to the SmartScreen service the URL the file was downloaded from and the full path to the file's location on my test computer. This information could expose a tremendous amount of sensitive and private data to Microsoft, including private download URLs for sensitive files and the folder structure of internal Windows systems and networks. While we do not recommend it, the only way to prevent this information from being shared is to disable the feature.

Microsoft has always disclosed that URLs and file info are shared

After reading Weeks' tweet, many users immediately cried foul at Microsoft, but the reality is that Microsoft is not doing anything it hasn't said it was doing. As shown by Microsoft Edge developer Eric Lawrence, Microsoft has clearly stated, from as early as 2005 and in more recent documentation, that URL and file information are sent to Microsoft over a secure connection when using SmartScreen. While Microsoft is not doing anything sneaky, it could modify how URLs are sent so that they are hashed, similar to the way Chrome Safe Browsing does it. In a world where people are finally waking up to how little control they have over their data and how it is being used, this tradeoff may be worth it to put customers at ease.

Chromium-based Microsoft Edge no longer sends the SID

The sending of the SID was odd and does not appear to be referenced anywhere in Microsoft's SmartScreen documentation. The good news is that the new Chromium-based Microsoft Edge no longer sends the SID during a SmartScreen request. It does, however, continue to send an unhashed URL. That practice will only end if and when Microsoft decides to start hashing the URLs, which would probably require significant code changes across many of its products. Source
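The hashing approach mentioned above can be illustrated with a short sketch. This is a hypothetical simplification, not Microsoft's or Google's actual protocol: real Safe Browsing canonicalizes the URL into several host/path expressions before hashing, whereas this version hashes the raw URL string for brevity.

```python
# Illustrative sketch of hash-prefix URL checking, in the spirit of
# Chrome Safe Browsing. NOT the real SmartScreen or Safe Browsing
# protocol: canonicalization and expression generation are omitted.
import hashlib

def url_hash_prefix(url: str, prefix_bytes: int = 4) -> str:
    """Return the first `prefix_bytes` bytes of SHA-256(url), hex-encoded.

    Only this short prefix would be sent to the server, so the server
    cannot recover which exact URL the client visited; many URLs share
    any given prefix.
    """
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return digest[:prefix_bytes].hex()

# The server would return all full blocklist hashes sharing the prefix,
# and the client would compare them against its own full hash locally.
print(url_hash_prefix("https://example.com/secret-page"))
```

The privacy win is that the lookup leaks only a few bytes of a one-way hash rather than the full browsing history, at the cost of an extra round trip when a prefix matches.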
  4. Faces for cookware: data collection industry flourishes as China pursues AI ambitions

PINGDINGSHAN, China (Reuters) - In a village in central China's Henan province, amid barking dogs and wandering chickens, villagers gather along a dirt road to trade images of their faces for kettles, pots and tea cups. At the front of the line, a woman stands in front of a camera zip-tied to a tripod. She holds a photograph of her head with the eyes and the nose cut out in front of her face and slowly rotates side to side. Villagers waiting their turn take a numbered ticket. Some of them say it's the third or fourth time they've come to do this sort of work.

The project, run out of a sleepy courtyard village house adorned with posters of former Chinese leader Mao Zedong, is collecting material that could train AI software to distinguish between real facial features and still images. "The largest projects have tens of thousands of people, all of whom live in this area," said Liu Yangfeng, CEO of Qianji Data Co Ltd, which collects and labels data for several of China's largest tech firms and is based in the nearby city of Pingdingshan. "We are creating more data sets to serve more AI algorithm companies, so they can serve the development of artificial intelligence in China," said Liu, declining to disclose his clients.

The boom in demand for data to train AI algorithms is feeding a new global industry that gathers information such as photos and videos, which are then labeled to tell the machines what they are seeing. Companies involved in data labeling, or data annotation as it is also called, include crowdsourcing platforms such as Amazon.com's (AMZN.O) Mechanical Turk, which offer users small amounts of money in return for simple tasks; outsourcing firms such as India's Wipro Ltd (WIPR.NS); and professional labellers like Qianji. Cognilytica, a U.S. research firm specializing in AI, estimates the global market for machine-learning-related data annotation grew 66% to $500 million in 2018 and is set to more than double by 2023. Some industry insiders say, however, that much of the work done is not disclosed, making accurate estimates difficult.

[File photo: employees label items for data collection at Qianji Data Co in Jia county, Henan province, China, March 20, 2019. REUTERS/Irene Wang]

WEAK PRIVACY LAWS, CHEAP LABOR

China has emerged as a key hub for data collection and labeling thanks to insatiable demand from a burgeoning artificial intelligence sector backed by the ruling Communist Party, which sees AI as an engine of economic growth and a tool for social control. A plethora of firms have invested heavily in an area of AI known as machine learning, which is at the core of facial recognition technology and other systems based on finding patterns in data. These include tech giants Alibaba Group Holding Ltd (BABA.N), Tencent Holdings Ltd (0700.HK) and Baidu Inc (BIDU.O), as well as younger companies such as AI specialist SenseTime Group Ltd and speech recognition firm Iflytek Co Ltd (002230.SZ). The result has been a proliferation of AI products and services in China, from facial recognition-based payment systems to automated surveillance and even AI-animated state media news anchors. Chinese consumers mostly see these technologies as novel and futuristic, despite concerns raised by some over more invasive applications. Weak data privacy laws and cheap labor have also been a competitive advantage for China as it races to become a global leader in AI. The Henan villagers were happy to trade several sessions in front of a camera for a tea cup, or several hours for a stove-top pot.

OVERSEAS CUSTOMERS

Beijing-based BasicFinder, a leading data labeling firm with locations across Hebei, Shandong and Shanxi provinces, boasts a robust mix of domestic and overseas clients. At a recent visit to its Beijing offices, some staff were labeling images of sleepy people to be used by an autonomous driving project to identify drivers who might be falling asleep at the wheel. Others were labeling British documents from the 1800s for a Western online ancestry service, marking fields for dates, names and genders on birth and death certificates. According to BasicFinder Chief Executive Du Lin, hiring trained labellers in China is cheaper than using Western crowdsourcing marketplaces. A Princeton University project related to autonomous driving initially put a task on Amazon's Mechanical Turk, but as the task became more complicated, people began making mistakes and BasicFinder was brought in to help correct the results, said Du. In that project, one trained BasicFinder labeler was able to do the work of three crowdsourced labellers, he added. "Gradually they saw they were paying less for labeling from us, so they hired us to label all the works from the very beginning," said Du. Princeton declined to comment.

For labeling employees, the reasons for joining China's data industry are straightforward. The work, though sometimes tedious, is an upgrade on other jobs available to young workers who want to return home to small Chinese cities and villages. Labellers at Qianji make roughly 100 yuan ($14.50) a day marking data points on photographs of people, surveillance footage and street images. The work is usually simple, according to the employees, though some overseas content poses a challenge. "One time we thought we were classifying European-style cooker machines that have a washer attached," said Jia Yahui, a labeler at Qianji. "Later we were told it's actually two separate things, a stove and a dishwasher."

The labeling work brings some of the employment benefits of the tech sector to rural areas, but those benefits may prove short-lived if AI improves enough to perform many of the tasks labellers do. "We think this industry will still exist in three to five years. It may not be a long-term career - we can only think of the five-year plan for now," said Qianji CEO Liu.

Source: Faces for cookware: data collection industry flourishes as China pursues AI ambitions
  5. Two House lawmakers are pushing an amendment that would effectively defund a massive data collection program run by the National Security Agency unless the government promises not to intentionally collect data on Americans. The bipartisan amendment, just 15 lines in length, would compel the government not to knowingly collect communications, such as emails, messages and browsing data, on Americans without a warrant. Reps. Justin Amash (R-MI, 3rd) and Zoe Lofgren (D-CA, 19th) have already garnered support from some of the largest civil liberties and rights groups, including the ACLU, the EFF, FreedomWorks, New America and the Sunlight Foundation.

The Amash-Lofgren amendment

Under the current statute, the NSA can use its Section 702 powers to collect and store the communications of foreign targets located outside the U.S. by tapping into the fiber cables owned and run by U.S. telecom giants. But this massive data collection effort also inadvertently vacuums up the data of Americans, who are typically protected from unwarranted searches under the Fourth Amendment. The government has consistently declined to release the number of Americans caught up in the NSA's data collection. For the 2018 calendar year, the government said it made more than 9,600 warrantless searches of Americans' communications, up 28% year over year.

In a letter to lawmakers, the groups said the amendment, if passed into law, would "significantly advance the privacy rights of people within the United States." A coalition of tech giants, including Apple, Facebook, Google and Microsoft, also rallied behind the amendment. "RGS believes this amendment is a step in the right direction for U.S. foreign intelligence surveillance policy," said the Reform Government Surveillance group in a statement. (Verizon Media, which owns TechCrunch, is also a coalition member.)

Last year, Section 702 was reauthorized with almost no changes, despite a rash of complaints and concerns raised by lawmakers following the Edward Snowden disclosures about mass surveillance. The EFF said in a blog post Tuesday that lawmakers "must vote yes in order to make this important corrective." Updated with a statement from the tech coalition. Source
  6. A year on from launch, Click looks at the impact of GDPR, and how getting access to your data may still not be as easy as you think.
  7. Apple pitches itself as the most privacy-minded of the big tech companies, and indeed it goes to great lengths to collect less data than its rivals. Nonetheless, the iPhone maker will still know plenty about you if you use many of its services: in particular, Apple knows your billing information and all the digital and physical goods you have bought from it, including music, movie and app purchases.

A different approach: Even for heavy users, Apple uses a number of techniques to either minimize how much data it has or encrypt it so that Apple doesn't have access to iMessages and similar personal communications.

Between the lines: Apple is able to do this, in part, because it makes its money from selling hardware, and increasingly from selling services, rather than through advertising. (It does have some advertising business, and it also gets billions of dollars per year from Google in exchange for making Google the default search provider.) But Apple maintains that its commitment to privacy is based not just on its business model but on core values.

How it works: To collect less data, Apple tries to do as much work on its devices as possible, even if that sometimes means algorithms aren't as well tuned, processing is slower, or the same work gets done on multiple devices. Photos are a case in point. Even if you store your images in Apple's iCloud, Apple does the work of facial identification, grouping, labeling and tagging images on the Mac or iOS device, rather than on the service's own computers. Some of the most sensitive data your device collects, including your fingerprint or Face ID, stays on the device.

Maps: While Apple does need to do some processing in the cloud, it takes a number of steps beyond its competitors to protect privacy. First, the identification and management of significant locations like your home and work is done on the device. And the location information that does get sent to the cloud is tied to a unique identifier code rather than a specific individual's identity, and that identifier changes over time.

Location information: Beyond Apple's Maps program, other applications, including some from Apple, can make use of location data with user permission. Apple is adding new options with iOS 13, due this fall, including the ability for users to share their location with an app just once, rather than giving ongoing access. For apps making routine background use of location, Apple is also letting users review a map of the locations those apps are seeing, so they can decide whether that is information they really want to be sharing.

Email: If your mail is provided by Apple (via icloud.com, mac.com, etc.), the company will store your email and scan it for spam, viruses and child pornography, as is common in the industry. Email will also be made available to law enforcement when Apple is presented with a lawful warrant.

iCloud: This is where Apple potentially stores the most personal information, although it doesn't use it for advertising or other business purposes. iCloud backups can include messages, photos and Apple email, though Apple stresses it won't look at the information and will only hand it over to others if forced by a court to do so.

Messages: Apple messages, the ones with the blue bubble, are encrypted end to end, so that only the sender and recipient, not Apple, a carrier or any other intermediary, can see them. However, if you back up your messages to iCloud, a copy is kept on Apple's servers, so if you lose your device and need to replace it, Apple can restore them. Users can make an encrypted backup using iTunes on a Mac or PC, or keep no backup at all.

Safari: If you use Apple's Safari browser, Apple stores your bookmarks tied to your Apple ID; they're encrypted, but Apple holds a key. Beginning in iOS 13 and Catalina, the next macOS, Safari browsing history will be fully encrypted and Apple will have no access. There's also data that goes to Apple's search partners. Google is the default, but you can also choose Yahoo, Bing or DuckDuckGo. You can also choose whether to send each keystroke as you type in the search bar, enabling autocomplete, or only to send the data when you hit "enter."

Siri: Many Apple devices have a chip that listens for the "Hey Siri" wake word, and only at that point does Apple start recording audio. Some commands, like what's next on your schedule, can be processed locally, while others are sent to Apple's servers. Apple doesn't tie this data directly to a person's Apple ID, but uses a unique identifier. A user can reset that identifier, but then Siri will lose the personalization it has gained. Per Apple, "User voice recordings are saved for a six-month period so that the recognition system can utilize them to better understand the user's voice. After six months, another copy is saved, without its identifier, for use by Apple in improving and developing Siri for up to two years."

Apple Pay: Apple doesn't store your payment information or purchase record as part of Apple Pay (it does have history and payment information for your Apple purchases). Apple Pay merchants get a token, not your actual credit card information.

TV and Music: Apple knows the music, shows and apps you purchase. In addition, to deliver the TV app feature that lets users pick up where they left off across multiple shows, apps and devices, and to make personalized recommendations, Apple does capture and store viewing history. But it says it notifies users, stores as little data as possible for as little time as possible, and allows users to opt out (although opting out prevents some features from fully working).

What you can do: Users have a number of choices to further minimize what Apple knows, though there are often downsides. You can choose to download an encrypted iCloud backup only to your Mac or PC rather than keep it on Apple's servers, but if you lose that device or forget the password for the backup file, Apple won't be able to help recover the lost data. You can also download the information Apple has on you at privacy.apple.com. You can delete data stored on your device, such as email, messages, photos, and Safari data like history and bookmarks. You can delete your data stored in iCloud. You can reset your Siri identifier by turning Siri and Dictation off and back on, which effectively restarts your relationship with Siri and Dictation. Source
  8. Don’t worry. They want it to be safe. 🤣

Justin Paine sits in a pub in Oakland, California, searching the internet for your most sensitive data. It doesn't take him long to find a promising lead. On his laptop, he opens Shodan, a searchable index of cloud servers and other internet-connected devices. Then he types the keyword "Kibana," which reveals more than 15,000 databases stored online. Paine starts digging through the results, a plate of chicken tenders and fries growing cold next to him. "This one's from Russia. This one's from China," Paine said. "This one is just wide open." From there, Paine can sift through each database and check its contents. One database appears to have information about hotel room service. If he keeps looking deeper, he might find credit card or passport numbers. That isn't far-fetched. In the past, he's found databases containing patient information from drug addiction treatment centers, as well as library borrowing records and online gambling transactions.

Paine is part of an informal army of web researchers who indulge an obscure passion: scouring the internet for unsecured databases. The databases, unencrypted and in plain sight, can contain all sorts of sensitive information, including names, addresses, telephone numbers, bank details, Social Security numbers and medical diagnoses. In the wrong hands, the data could be exploited for fraud, identity theft or blackmail. The data-hunting community is both eclectic and global. Some of its members are professional security experts, others are hobbyists. Some are advanced programmers, others can't write a line of code. They're in Ukraine, Israel, Australia, the US and just about any country you name. They share a common purpose: spurring database owners to lock down your info.

The pursuit of unsecured data is a sign of the times. Any organization, whether a private company, a nonprofit or a government agency, can store data on the cloud easily and cheaply. But many software tools that help put databases on the cloud leave the data exposed by default. Even when the tools do make data private from the start, not every organization has the expertise to know it should leave those protections in place. Often, the data just sits there in plain text waiting to be read. That means there'll always be something for people like Paine to find. In April, researchers in Israel found demographic details on more than 80 million US households, including addresses, ages and income levels. No one knows how big the problem is, says Troy Hunt, a cybersecurity expert who has chronicled the issue of exposed databases on his blog. There are far more unsecured databases than those publicized by researchers, he says, but you can only count the ones you can see. What's more, new databases are constantly added to the cloud. "It's one of those tip-of-the-iceberg situations," Hunt said.

To search out databases, you have to have a high tolerance for boredom and a higher one for disappointment. Paine said it would take hours to find out whether the hotel room service database was actually a cache of exposed sensitive data. Poring over databases can be mind-numbing and tends to be full of false leads. It isn't like searching for a needle in a haystack; it's like searching fields of haystacks hoping one might contain a needle. What's more, there's no guarantee the hunters will be able to prompt the owners of an exposed database to fix the problem. Sometimes, the owner will threaten legal action instead.

Database jackpot

The payoff, however, can be a thrill. Bob Diachenko, who hunts databases from his office in Ukraine, used to work in public relations for a company called Kromtech, which learned from a security researcher that it had suffered a data breach. The experience intrigued Diachenko, and with no technical experience he dove into hunting databases. In July, he found records on thousands of US voters in an unsecured database, simply by using the keyword "voter." "If me, a guy with no technical background, can find this data," Diachenko said, "then anybody in the world can find this data." In January, Diachenko found 24 million financial documents related to US mortgages and banking in an exposed database. The publicity generated by that find, as well as others, helps Diachenko promote SecurityDiscovery.com, a cybersecurity consulting business he set up after leaving his previous job.

Publicizing a problem

Chris Vickery, director of cyberrisk research at UpGuard, says big finds raise awareness and help drum up business from companies anxious to make sure their names aren't associated with sloppy practices. Even if the companies don't choose UpGuard, he said, the public nature of discoveries helps his field grow. Earlier this year, Vickery looked for something big by searching on "data lake," a term for large compilations of data stored in multiple file formats. The search helped his team make one of the biggest finds to date: a cache of 540 million Facebook records that included users' names, Facebook ID numbers and about 22,000 unencrypted passwords stored in the cloud. The data had been stored by third-party companies, not Facebook itself. "I was swinging for the fences," Vickery said, describing the process.

Getting it secured

Facebook said it acted swiftly to get the data removed. But not all companies are responsive. When database hunters can't get a company to react, they sometimes turn to a security writer who uses the pen name Dissent. She used to hunt unsecured databases herself but now spends her time prompting companies to respond to data exposures that other researchers find. "An optimal response is, 'Thank you for letting us know. We're securing it and we're notifying patients or customers and the relevant regulators,'" said Dissent, who asked to be identified by her pen name to protect her privacy. Not every company understands what it means for data to be exposed, something Dissent has documented on her website Databreaches.net. In 2017, Diachenko sought her help in reporting exposed health records from a financial software vendor to a New York City hospital. The hospital described the exposure as a hack, even though Diachenko had simply found the data online and hadn't broken any passwords or encryption to see it. Dissent wrote a blog post explaining that a hospital contractor had left the data unsecured. The hospital hired an external IT company to investigate.

Tools for good or bad

The search tools that database hunters use are powerful. Sitting in the pub, Paine shows me one of his techniques, which has let him find exposed data in Amazon Web Services databases and which he said was "hacked together with various different tools." The makeshift approach is necessary because data stored on Amazon's cloud service isn't indexed on Shodan. First, he opens a tool called Bucket Stream, which searches through public logs of the security certificates that websites need to enable encryption. The logs let Paine find the names of new "buckets," or containers for data, stored by Amazon, and check whether they're publicly viewable. Then he uses a separate tool to build a searchable database of his findings.

For someone who searches for caches of personal data between the couch cushions of the internet, Paine displays neither glee nor dismay as he examines the results. This is just the reality of the internet. It's filled with databases that should be locked behind a password and encrypted but aren't. Ideally, companies would hire experts to do the work he does, he says. Companies, he says, should "make sure your data isn't leaking." If that happened more often, Paine would have to find a new hobby. But that might be hard for him. "It's a little bit like a drug," he said, before finally digging into his fries and chicken. Source
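The probe-a-bucket step of the workflow described above can be sketched roughly as follows. This is a hedged illustration under simplified assumptions: `bucket_url` and `bucket_is_public` are hypothetical helper names, the candidate name is made up, and a real tool (Bucket Stream feeding a checker) would also harvest names from Certificate Transparency logs and handle redirects, regional endpoints and rate limits.

```python
# Hypothetical sketch: check whether a candidate S3 bucket name is
# publicly listable by an anonymous client. Only use against buckets
# you are authorized to test.
import urllib.request
import urllib.error

def bucket_url(bucket_name: str) -> str:
    """Anonymous listing URL for a candidate S3 bucket name."""
    return f"https://{bucket_name}.s3.amazonaws.com/?list-type=2"

def bucket_is_public(bucket_name: str, timeout: float = 5.0) -> bool:
    """True if an unauthenticated GET of the bucket listing returns 200.

    S3 typically answers 403 for private buckets and 404 for names that
    don't exist; both raise HTTPError and are treated as not public.
    """
    try:
        with urllib.request.urlopen(bucket_url(bucket_name), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False

# Candidate names would come from Certificate Transparency logs, as the
# Bucket Stream tool does; here a made-up name would be checked directly:
# bucket_is_public("some-candidate-bucket-name")
```

The design mirrors the article's point that exposure is often a default-configuration problem: a single unauthenticated GET distinguishes a world-readable bucket from a locked-down one.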
  9. Senator Marco Rubio (R-Fla.) introduced a bill Wednesday aimed at creating federal standards of privacy protection for major internet companies like Facebook, Amazon, and Google. The bill, titled the American Data Dissemination Act, requires the Federal Trade Commission to make suggestions for regulation based on the Privacy Act of 1974. Congress would then have to pass legislation within two years, or the FTC will gain the power to write the rules itself (under current law, the FTC can only enforce existing rules).

While Rubio's bill is intended to rein in the data collection and dissemination of companies like Facebook, Amazon, Apple, Google, and Netflix, it also requires any final legislation to protect small businesses from being stifled by new rules. "While we may have disagreements on the best path forward, no one believes a privacy law that only bolsters the largest companies with the resources to comply and stifles our start-up marketplace is the right approach," Rubio wrote in an op-ed for The Hill announcing his bill.

The caveat comes when one considers states' rights to create their own privacy laws. Under Rubio's legislation, any national regulations would preempt state laws – even if the state laws are stricter. According to Rubio, "a state-by-state patchwork of laws is simply not an effective means of dealing with an issue of this magnitude." This sentiment is echoed by the major internet companies, which argue that navigating a single federal regulation is simpler than potentially managing dozens of different laws. Democrats have said they would only support federal regulation if it can hold a candle to state laws like those expected to go into effect in California in 2020.

According to Axios, other privacy proposals are expected, including one from a bipartisan group of senators. Rubio's bill reportedly has no co-sponsors at this time.

Source
  10. Microsoft's Obscure 'Self Service for Mobile' Office Activation

Microsoft requires product activation after installation. Users of Microsoft Office are currently running into trouble during telephone activation. While dealing with that issue, I came across another obscure behavior: Microsoft's 'Self Service for Mobile' solution for activating Microsoft Office from mobile devices.

Microsoft describes how to activate Microsoft Office 2013, 2016 and Office 365 in this document. There are several ways to activate an installed product, for instance via the Internet or by telephone. Activation by phone is required if the maximum Internet activation threshold has been reached.

But Office activation by phone fails

In my blog post "Office Telephone activation is no longer supported error" I addressed the basic issue: if a user re-installs Office, the phone activation fails. The activation dialog box shows the message "Telephone activation is no longer supported for your product". Microsoft has confirmed this issue for Office 2016 users with a non-subscription installation, but users of Microsoft Office 2010 and Office 2013 are also affected.

A blog reader posted a tip: use mobile device activation

Back in January 2017 I published an article, Office 2010: Telefonaktivierung eingestellt? – Merkwürdigkeit II, about the Office 2010 telephone activation issue on my German blog. A reader then pointed me, in a comment, to a "Self Service for Mobile" website. The link http: // bit.ly/2cQPMCb, shortened by bit.ly, points to a website https: // microsoft.gointeract.io/mobileweb/… that offers a way to activate Microsoft Office (see screenshot below). After selecting a 6- or 7-digit entry, an activation window with numerical buttons for entering the installation id is shown (see screenshots below). The user enters the installation id and receives the activation id – plain and simple.
Some users commented on my German blog that this feature works like a charm.

Obscurity, conspiracy, oh my God, what have they done?

I didn't inspect the posted link until writing last Friday's blog post "Office Telephone activation is no longer supported error". My plan was to mention the "Self Service for Mobile" page in the new article, and I managed to alter the link so it points to the English-language "Self Service for Mobile" site. Then I noticed that both the German and the English "Self Service for Mobile" sites use https but are flagged as "not secure" in Google Chrome (see the screenshot below, showing the German edition of the page). The popup shown for the "Self Service for Mobile" site says there is mixed content (images) on the page, so it is not secure.

That caught my attention, and I started to investigate the details. Below are the details for the German version of the site as shown in Google Chrome (the English site has the same issues). First of all, I noticed that the "Self Service for Mobile" site doesn't belong to a microsoft.com domain – in my view a must for a Microsoft activation page. Inspecting further, I found that the site contains mixed content (an image embedded in the site was delivered via http) and that the content was delivered by Cloudflare (something I had never seen on a Microsoft website before). The image flagged in the mixed-content warning was the Microsoft logo in the site's header, transferred via http.

The certificate was issued by Go Daddy (a US company) and expires in March 2017. I have never heard that Go Daddy belongs to Microsoft. I had come across Go Daddy while analyzing a phishing campaign months earlier: a compromised server used as a relay by that campaign was hosted (according to Whois records) by Go Daddy, and my takedown notice sent to Go Daddy was never answered.
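The "mixed content" flag Chrome raised is mechanical: an https page referencing a sub-resource over plain http. A minimal, regex-based sketch of such a check (the sample markup is invented; a real audit would use a proper HTML parser rather than a regex):

```python
import re

# Mixed content: an https page pulling a sub-resource (image, script,
# iframe) over plain http -- the situation Chrome flagged for the
# Microsoft logo on the activation page. The sample markup is invented.
MIXED_RE = re.compile(r'''src\s*=\s*["'](http://[^"']+)["']''', re.IGNORECASE)

def find_mixed_content(html: str) -> list:
    """Return http:// URLs referenced via src= attributes."""
    return MIXED_RE.findall(html)

sample = (
    '<html><head><script src="https://cdn.example/app.js"></script></head>'
    '<body><img src="http://img.example/ms-logo.png"></body></html>'
)
print(find_mixed_content(sample))  # only the http:// image is flagged
```

Browsers apply exactly this distinction: the https script is fine, while the http image downgrades the whole page to "not secure".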
That set all the alarm bells ringing in my head, because this is typical behavior for phishing sites. My further findings didn't calm them either. The subdomain "microsoft" used above doesn't belong to a Microsoft domain; it belongs to the domain gointeract.io. Trying to obtain details about the owner of gointeract.io via WhoIs returned the following record:

Domain : gointeract.io
Status : Live
Expiry : 2021-03-14
NS 1 : ns-887.awsdns-46.net
NS 2 : ns-1211.awsdns-23.org
NS 3 : ns-127.awsdns-15.com
NS 4 : ns-1980.awsdns-55.co.uk
Owner OrgName : Jacada
Check for 'gointeract.sh' --- http://www.nic.sh/go/whois/gointeract.sh
Check for 'gointeract.ac' --- http://www.nic.ac/go/whois/gointeract.ac

Pretty short, isn't it? No Admin-C, no contact person, and Microsoft isn't mentioned at all, but the domain is registered until 2021. The owner OrgName, Jacada, was unknown to me, and searching the web didn't give me more insight at first. Overall, the whole site looked obscure to me. The tiny text shown in the browser's lower-left corner was a hyperlink: the German edition of the "Self Service for Mobile" site opens a French Microsoft site, while the English site opens an English Microsoft site.

My first conclusion was: hell, I was tricked by a phishing comment – somebody set up this site to grab installation ids from Office users. So I deactivated the link in the comment, posted a warning in my German blog post not to use this "Self Service for Mobile" site, and tried to contact the user who had posted the comment via e-mail.

… but "Microsoft" provides these links …

User JaDz responded immediately in a further comment, writing that the bit.ly-shortened link had been sent to him by Microsoft via SMS – after he tried the telephone activation and selected the option to activate via a mobile device. I hadn't noticed that before, so my conclusion became: hell, this obscure "Self Service for Mobile" site is indeed related to Microsoft.
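The WHOIS record quoted above uses a plain "Key : Value" layout, which makes it easy to dissect programmatically. A minimal sketch, using field names taken from that record (the parser itself is illustrative):

```python
# Parse simple "Key : Value" WHOIS output, such as the gointeract.io
# record quoted above, into a dict. Repeated keys (e.g. NS 1..NS 4)
# accumulate in a list per key.
def parse_whois(text: str) -> dict:
    record = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if not sep or not value.strip():
            continue  # skip blank or malformed lines
        record.setdefault(key.strip(), []).append(value.strip())
    return record

raw = """Domain : gointeract.io
Status : Live
Expiry : 2021-03-14
NS 1 : ns-887.awsdns-46.net
Owner OrgName : Jacada"""

info = parse_whois(raw)
print(info["Owner OrgName"])  # ['Jacada']
```

What such a parser surfaces immediately is also what made the record suspicious: registrar and name-server fields are present, but no registrant contact fields at all.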
I then started another web search, this time with the keywords Jacada and Microsoft. Google showed several hits pointing to the site jacada.com (see screenshot below). Jacada appears to be a service provider for a number of customers. I wasn't able to find Microsoft among the customer references, but I know that Microsoft uses external service providers for some activities. My assumption now is that somebody at Jacada set up the "Self Service for Mobile" activation site. The Ajax code used is evidently able to communicate with Microsoft's activation servers and obtain an activation id, and Microsoft's activation mechanism provides the option to send the bit.ly link via SMS.

Closing words: security by obscurity?

At this point I was left really puzzled. We are not talking about a startup located in a garage. We are dealing with Microsoft, a multi-billion-dollar company that claims to run highly secure, trustworthy cloud infrastructures worldwide. But what is left after we wipe off the marketing?

The Office activation via telephone is broken (Microsoft confirmed this after customers reported it!). A customer who needs to activate a legally owned but re-installed copy of Microsoft Office faces a nasty situation: telephone activation is refused, with the customer (wrongly) notified that this option is no longer supported, while Internet activation is refused due to "too many online activations". Well done.

But we are not finished yet. Microsoft set up a "Self Service for Mobile" activation site in a way that is typical of phishing sites, sending links to it via SMS and asking users to enter sensitive data such as installation ids – on a site that serves mixed content over https and then displays an activation id. In my eyes, a security nightmare. But maybe I have overlooked or misinterpreted something. If you have more insights or an idea, or if my assumptions are wrong, feel free to drop a comment.
I will try to reach out and ask Microsoft for a comment about this issue. Article in German Source Alternate Source reading - AskWoody: Born: Office activation site controlled by a non-Microsoft company
  11. Google has far more data about us than Facebook. Yet unlike Mark Zuckerberg's social networking empire, which has been under fire for improperly leaking user data, Google has sidestepped controversy. You may wonder: Why is that? After all, we turn to Google for not only our internet searches but also for our emails, calendaring, maps, photo uploads, video streaming, mobile phones and web browsers. That's far more pervasive than the baby photos and comments that we post on Facebook.

To help get an answer, I downloaded a copy of all of the information that Google has on me. Then I compared the trove to all the data that I already knew Facebook had obtained on me. What I found was that my Google data archive was much larger than my Facebook file — about 12 times larger, in fact — but it was also packed with fewer unpleasant surprises. Most of what I saw in my Google file was information I already knew I had put in there, like my photos, documents and Google emails, while my Facebook data contained a list of 500 advertisers with my contact information and a permanent record of friends I thought I had "deleted" years ago, among other shockers.

Whenever I was perturbed by parts of my Google data, like a record of the Android apps I had opened over the past several years, I was relieved to find out I could delete the data. In contrast, when I downloaded my Facebook data, I found that a lot of what I saw could not be purged.

Aaron Stein, a Google spokesman, said the company had spent many years developing tools for people to download their information. "It should be easy for people to understand and control their Google data," he said. "We encourage everyone to use these tools so they can make the privacy choices that are right for them."

That's not to say that we should be complacent. Tech companies like Google and Facebook have an incredible amount of power over us that only increases the more they know.
So downloading and analyzing your Google data, and determining what information you want to keep around or delete, is an exercise I highly recommend. Here’s how I did it — and what I learned. If interested, please read the entire article < here >.
  12. NSA general counsel Rajesh De contradicts months of angry denials from big companies like Yahoo and Google

De said communications content and associated metadata harvested by the NSA occurred with the knowledge of the companies.

The senior lawyer for the National Security Agency stated unequivocally on Wednesday that US technology companies were fully aware of the surveillance agency's widespread collection of data, contradicting months of angry denials from the firms.

Rajesh De, the NSA general counsel, said all communications content and associated metadata harvested by the NSA under a 2008 surveillance law occurred with the knowledge of the companies – both for the internet collection program known as Prism and for the so-called "upstream" collection of communications moving across the internet.

Asked during a Wednesday hearing of the US government's institutional privacy watchdog whether collection under the law, known as Section 702 or the Fisa Amendments Act, occurred with the "full knowledge and assistance of any company from which information is obtained," De replied: "Yes."

When the Guardian and the Washington Post broke the Prism story in June, thanks to documents leaked by whistleblower Edward Snowden, nearly all the companies listed as participating in the program – Yahoo, Apple, Google, Microsoft, Facebook and AOL – claimed they did not know about a surveillance practice described as giving the NSA vast access to their customers' data. Some, like Apple, said they had "never heard" the term Prism.

De explained: "Prism was an internal government term that as the result of leaks became the public term."
"Collection under this program was a compulsory legal process, that any recipient company would receive."

After the hearing, De said that the same knowledge, and associated legal processes, also apply when the NSA harvests communications data not from companies directly but in transit across the internet, under Section 702 authority.

The disclosure of Prism resulted in a cataclysm in technology circles, with tech giants launching extensive PR campaigns to reassure their customers of data security and successfully pressing the Obama administration to allow them greater leeway to disclose the volume and type of data requests served to them by the government. Last week, Facebook founder Mark Zuckerberg said he had called US president Barack Obama to voice concern about "the damage the government is creating for all our future." There was no immediate response from the tech companies to De's comments on Wednesday.

It is unclear what sort of legal process the government serves on a company to compel communications content and metadata access under Prism or through upstream collection. Documents leaked by Snowden indicate that the NSA possesses unmediated access to the company data.

The secret Fisa court overseeing US surveillance for the purposes of producing foreign intelligence issues annual authorisations blessing NSA's targeting and associated procedures under Section 702. Passed in 2008, Section 702 retroactively gave cover of law to a post-9/11 effort permitting the NSA to collect phone, email, internet and other communications content when one party to the communication is reasonably believed to be a non-American outside the United States. The NSA stores Prism data for five years and communications taken directly from the internet for two years.
While Section 702 forbids the intentional targeting of Americans or people inside the United States – a practice known as "reverse targeting" – significant amounts of Americans' phone calls and emails are swept up in the process of collection. In 2011, according to a now-declassified Fisa court ruling, the NSA was found to have collected tens of thousands of emails between Americans, which a judge on the court considered a violation of the US constitution and which the NSA says it is technologically incapable of fixing.

Renewed in December 2012 over the objections of senate intelligence committee members Ron Wyden and Mark Udall, Section 702 also permits NSA analysts to search through the collected communications for identifying information about Americans, an amendment to so-called "minimisation" rules revealed by the Guardian in August and termed the "backdoor search loophole" by Wyden.

De and his administration colleagues, testifying before the Privacy and Civil Liberties Oversight Board, strongly rejected suggestions by the panel that a court authorise searches for Americans' information inside the 702 databases. "If you have to go back to court every time you look at the information in your custody, you can imagine that would be quite burdensome," deputy assistant attorney general Brad Wiegmann told the board.

De argued that once the Fisa court permits the collection annually, analysts ought to be free to comb through it, and stated that there were sufficient privacy safeguards for Americans after collection and querying had occurred. "That information is at the government's disposal to review in the first instance," De said.

De also stated that the NSA is not permitted to search for Americans' data from communications taken directly off the internet, citing greater risks to privacy. Neither De nor any other US official discussed data taken from the internet under different legal authorities.
Other documents Snowden disclosed, published by the Washington Post, indicated that the NSA takes data as it transits between Yahoo and Google data centers, an activity reportedly conducted not under Section 702 but under a seminal executive order known as 12333.

The NSA's Wednesday comments contradicting the tech companies about the firms' knowledge of Prism risk entrenching tensions with the firms the NSA relies on for an effort that Robert Litt, general counsel for the director of national intelligence, told the board was "one of the most valuable collection tools that we have."

"All 702 collection is pursuant to court directives, so they have to know," De reiterated to the Guardian.

Source