Showing results for tags 'privacy'.



Found 323 results

1. Apple is crashing CES officially this year. What you need to know

Apple is attending CES for the first time in decades. The company's Senior Director of Privacy, Jane Horvath, will attend a privacy roundtable. The event will focus on consumer privacy, how to build it at scale, and how regulation will affect it.

After decades without any attendance, Apple is making an official return to the Las Vegas CES technology conference in 2020. As reported by Bloomberg, the company is attending not to pitch a new product but instead to talk about consumer privacy. Jane Horvath, Apple's Senior Director of Privacy, will be speaking at a "Chief Privacy Officer Roundtable: What Do Consumers Want?" event at the conference, set to happen on January 7, according to the CES schedule. The roundtable will also include representatives from Facebook and the Federal Trade Commission. The conference describes the event as a discussion between the invitees to answer a number of questions concerning consumer privacy:

Privacy is now a strategic imperative for all consumer businesses. "The future is private" (Facebook); "Privacy is a human right" (Apple); and "a more private web" (Google). How do companies build privacy at scale? Will regulation be a fragmented patchwork? Most importantly, what do consumers want?

It will be moderated by Rajeev Chand, Partner and Head of Research at Wing Venture Capital. The rest of the panel will be made up of representatives from Apple, Facebook, Procter & Gamble, and the FTC. Below is a list of who will be attending the roundtable, their role, and the company they are representing:

  • Rajeev Chand, Partner and Head of Research, Wing Venture Capital
  • Erin Egan, VP, Public Policy and Chief Privacy Officer for Policy, Facebook
  • Jane Horvath, Senior Director, Global Privacy, Apple
  • Susan Shook, Global Privacy Officer, The Procter & Gamble Company
  • Rebecca Slaughter, Commissioner, Federal Trade Commission

Apple unofficially showed up at CES last year when it hung enormous billboards across Las Vegas during the conference touting the company's focus on privacy. The billboard featured the back of an iPhone X with the words "what happens on your iPhone, stays on your iPhone." According to Bloomberg, the roundtable talk will mark the first time since 1992 that Apple has formally attended the conference.

Source
2. The recent firing of a Google employee demonstrates how you relinquish your privacy—and private data, including personal photos—when you put work accounts on your personal device.

The Bill of Rights covers only what the government can do to you. Unless you work for the government, many of your rights to free speech and freedom from search and seizure stop when you walk in, or log in, to your job. "If you're on your employer's communications equipment, you've got virtually no privacy in theory and absolutely none in practice," says Lew Maltby, head of the National Workrights Institute.

The lack of workplace digital privacy has become a hot topic with the recent firing of four Google employees, over what Google says were violations such as unauthorized accessing of company documents and what the workers say was retaliation for labor organizing or criticizing company policies. One of them, Rebecca Rivers, recounts how her personal Android phone went blank when she learned that she'd been placed on administrative leave in early November. (Google subsequently fired Rivers.) "At nearly the exact same time, my personal phone was either corrupted or wiped," she said at a Google worker rally in November. The loss was especially painful for Rivers, who is transgender. "Everything on my phone that was not backed up to the cloud was gone, including four months of my transition timeline photos, and I will never get those back," she said, her voice quavering.

How did this happen? Likely through an Android OS feature called a work profile, which allows employers to run work-related apps that the employer can access and manage remotely. Apple iOS has a similar capability called MDM, mobile device management, in which work apps run in a virtual "container" separate from personal apps. Various companies make MDM applications with varying levels of monitoring capabilities. There are many legitimate reasons why a company might want to use this tech: it allows them to implement security measures for protecting company data in email and other apps that run in the separate work profile or container, for instance. They can easily install, uninstall, and update work apps without you having to bring the device in. But they can also spy on you, or wipe out all your data—whether deliberately or negligently. That's why mixing work networks and personal devices is a bad idea.

My smartphone is your smartphone

All modern phones have GPS capability. With a work profile or MDM toehold in your phone, an employer could install an app to track everywhere you go, as Owen Williams at OneZero points out. He gives the example of MDM maker Hexnode, which goes into great detail on how it can track device location at all times. Williams also notes that a company may require your phone to connect to the internet through its encrypted virtual private network. This security measure makes sense for business, but it means that all of your data, even personal data, may be passing through the company's network. That makes the data fair game for the company to look at, since there is simply no law or legal precedent to stop it. "That's not really different from using your company's desktop computer to send a personal email from your cubicle," says attorney and security expert Frederick Lane.
"If you send unencrypted personal data across a network owned and controlled by your employer, then you should assume that it can be captured and stored." Rivers recently tweeted a line of her employment contract that spells this out: "I have no reasonable expectation of privacy in any Google Property or in any other documents, equipment, or systems used to conduct the business of Google." I asked Google about this policy. A spokeswoman said that it should not come as a surprise and is standard practice at large companies. A notice of the privacy policies also pops up when the phone profile is installed, she said.

What happens if you lose your data?

What Rivers hadn't expected was losing personal data on her own device. But this is increasingly common, says Maltby, who calls it a bigger danger than being spied on. "They're wiping your personal device with the goal of getting rid of the company data, but when you wipe the phone, you wipe everything," he says. Google told me that a suspended employee may lose personal data because they stored it in a work account, and they can ask Google to retrieve it for them. It's unclear exactly what happened to Rivers's phone, or whether Google has a backup. But companies often completely wipe employees' own phones without providing a way to back up personal information, Maltby says. "It's not that they want to cause you trouble," he says of employers. But "they would have to spend a little time and money to set up a system that would protect your privacy for the personal information that happens at work. And they don't bother to do it."

Worker advocates such as Maltby believe that total wiping of phones should be illegal under a law called the Computer Fraud and Abuse Act. The CFAA basically prohibits unauthorized access to a computing device, such as stealing data or planting malware. But advocates have struggled to find a legal case that can set a precedent for employee cellphones. "The courts insist on seeing tangible monetary damages, and usually there aren't any," Maltby says. Of course, losing personal data, like photos documenting key moments of life, is so painful precisely because its value is intangible. There's also no way to put a monetary value on the hassle of carrying a second phone, or of fighting an employer that's reluctant to pay for one. But placed side by side, securing your privacy is probably worth more than enduring some inconvenience.

Source
3. After years of freely, or unknowingly, giving up their data, consumers are becoming wary. The next decade could see ground rules set for an industry that's used to making the rules (or not) as it goes.

"I swear my phone is listening to me," said everyone with a smartphone. And they're not just talking about Siri. Targeted ads and friends' posts regularly pop up on Instagram and Facebook in the midst of our conversations about those very products and people. Coincidence? No one has been able to conclusively prove otherwise, but we do know Alexa is recording us on our Echo Dots.

Technology is pushing the limits in how marketers not only meet, but anticipate, the growing demands for a frictionless customer experience. With the convergence of Big Tech, machine learning, enhanced targeting and personalization, we arguably stand at the precipice where either personalization or privacy will hold sway. The pushback has already begun, with the 2020s poised to be the decade that sets some ground rules for an industry that's used to making the rules (or not) as it goes.

"If you think about healthcare, if you think about financial services, every other industry with high volumes of data collection—they are already regulated, but it hasn't come to roost for those of us in the marketing and advertising space," noted Fatemeh Khatibloo, a Forrester analyst with expertise in the privacy/personalization paradox. "And clearly what we're hearing from consumers and regulators is that it's time to put those guardrails up."

Indeed, the EU enacted the General Data Protection Regulation in 2016, giving citizens more control over their data and shutting down some businesses overnight, and now this privacy trend has reached our shores. Maine and Nevada quietly enacted their own protections earlier this year, and the California Consumer Privacy Act, which allows residents not only to opt out of having their data collected, shared or used, but to sue businesses for data breaches, takes effect on Jan. 1. Similar bills are in the works in Illinois, Maryland, Massachusetts, New York, Rhode Island and Texas. In time, depending on which way the political wind blows, we may see a comprehensive law protecting data and privacy enacted by Congress.

The imminent arrival of 5G, with its blazingly fast broadband speeds and improved mobile networks, will only add to the urgency of the privacy-and-security conversation. It's obvious why 5G would appeal to consumers, but will they be as thrilled with the potential for marketers (and the government) to have even more access to their every move?

For brands, the coming decade promises extraordinary technological advances that will unleash new and exciting ways to enhance their value. But as we know all too well, technology can get away from us and into the hands of bad actors. With innovation a moving target and consumers thirsting for the next new thing, it will be incumbent on marketers and regulators to strike a balance between access and protection.

Source
4. This year, look for a tech startup that solves the digital consumer privacy crisis.

Black Friday and Cyber Monday made clear that the online-offline divide in consumers' minds has almost disappeared. Among the big winners for sales in 2019 will be a device that is perhaps the best physical representation of that diminishing online-offline divide: the digital assistant. The main contenders for consumer dollars this year come by way of Amazon, Google, and Apple. Amazon Echo smart home products have been among the company's most popular items for a while now, but they hit new records in the recent four-day stretch from Black Friday to Cyber Monday.

Internet connectivity continues its march to omnipresence in everyday consumer goods. Televisions feature built-in internet functionality, and the FBI just released a warning about them. A number of the newer TVs also have built-in cameras. In some cases, the cameras are used for facial recognition so the TV knows who is watching and can suggest programming appropriately. There are also devices coming to market that allow you to video chat with Grandma in 42" glory.

Beyond the risk that your TV manufacturer and app developers may be listening to and watching you, that television can also be a gateway for hackers to come into your home. A bad cyber actor may not be able to access your locked-down computer directly, but it is possible that your unsecured TV can give him or her an easy way in through the backdoor via your router. Hackers can also take control of your unsecured TV. At the low end of the risk spectrum, they can change channels, play with the volume, and show your kids inappropriate videos. In a worst-case scenario, they can turn on your bedroom TV's camera and microphone and silently cyberstalk you.

The conveniences afforded by all this new connected technology are great, but it's important to bear in mind that it also has its downside. Even basic home goods like doorbells and light bulbs are commonly being sold with Wi-Fi connectivity and the ability to integrate into Google Home-, Siri-, or Alexa-enabled networks. These devices don't just talk to one another. They're also providing the companies that manufactured them with a gold mine of data about how they're being used—and, increasingly, who is using them.

It's not just IoT gadgets. Tech companies are busy these days trying to weave their way into your wallet, your entertainment, and your health, all the while mining as much data as possible to leverage into other markets and industries. This has an air of inevitability about it because the right entrepreneur has not yet had the right aha! moment to make it stop being an issue. That said, cracks in the current personal information smash-and-grab approach to consumer data are beginning to appear, and consumers are becoming increasingly wary of how their data is being collected and used as well as who has access to it.

Break Out the Torches and Pitchforks

If a consumer revolt sounds overly optimistic, consider the uproar earlier this year over revelations that smart home speakers were eavesdropping consistently and sometimes indiscriminately on consumers, and the resulting semi-apologies issued by Apple, Amazon, and Google. Or look at the ongoing civil rights concerns regarding Amazon's Ring surveillance cameras, or the recent lawsuit against TikTok for allegedly offloading user data to China, or the reports of customers abandoning their Fitbits after the company was acquired by Google. The message seems clear to me.
Consumers may enjoy the convenience and easy access to the internet, but more and more they bristle at the lack of transparency when it comes to the way their data is being handled and used by third parties, and the seeming inevitability that it will wind up on an unsecured database for any and all to see. While the fantasy of consumers uninstalling and unplugging en masse is common among a small community of sentient eels indigenous to the Malarkey Marshes of Loon Lake, there remains a business opportunity for the larger online community.

Will the Genius of Loon Lake Please Stand Up?

The effort to create a more privacy- and security-centric internet experience for consumers has largely been led by nonprofit organizations. World Wide Web inventor Tim Berners-Lee has been publicly discussing plans to create a follow-up with the aim of reverting to its original ideals of an open and cooperative global network with built-in privacy protections. Meanwhile, the nonprofit Mozilla organization has revamped its Firefox browser to block several types of ad trackers by default and provide greater security for saved passwords and account information, in addition to publishing an annual guide scoring internet-connected devices for their relative privacy friendliness and security. Wikipedia founder Jimmy Wales announced in November a service meant to provide an alternative to Twitter and Facebook, reliant on user donations rather than the other social platforms' often Orwellian ad-tracking software.

Thus far, nonprofit-driven alternatives have found no lure to drive consumer adoption. Berners-Lee's new web has been in the works for years without a user base or killer app to drive it, Wales's idea is a rehashing of a similar project called WikiTribune that also never managed to find its footing, and Firefox, while a quality browser, has a market share that pales next to Google Chrome's.

The next stage of privacy-centric development may need a profit motive to make inroads into the privacy protocols and proxies that dominate apps and devices. It can't be merely self-sustaining; it must be compelling for users, developers, and engineers. One such company, Nullafi, has the right idea: anonymizing and individualizing a user's most common digital identifier by creating email burners that redirect to the user's private account. (Full disclosure: I'm an investor. A toy sketch of the burner-email idea follows this item.) We need to see more of this kind of development, and we need to see it get adopted. The current large-scale investment in cybersecurity proves there's a market in our post-Equifax-breach world, where awareness of data vulnerability and the possibility of getting hacked has hit critical mass. The time for the unicorns to arrive is now.

Source
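Nullafi's actual design is not public, so the following is only a rough sketch of the email-burner idea described in this item, with names and data structures invented for illustration: a service mints a unique disposable address per signup, forwards mail to the real inbox, and lets the user revoke a leaked alias without touching the real account.

```python
import secrets

# Toy registry mapping burner addresses to a user's real inbox.
# Purely illustrative: Nullafi's real design is not public, and
# ALIAS_DOMAIN and these helpers are hypothetical.
ALIAS_DOMAIN = "burner.example.com"
_aliases: dict[str, str] = {}

def create_alias(real_address: str) -> str:
    """Mint a unique disposable address that forwards to real_address."""
    alias = f"{secrets.token_hex(8)}@{ALIAS_DOMAIN}"
    _aliases[alias] = real_address
    return alias

def resolve(alias: str) -> str | None:
    """Look up where mail sent to an alias should be delivered."""
    return _aliases.get(alias)

def revoke(alias: str) -> None:
    """Kill a leaked or spammed alias without touching the real account."""
    _aliases.pop(alias, None)

signup_address = create_alias("jane@example.net")
print(signup_address)           # e.g. 3f9a1c2b4d5e6f70@burner.example.com
print(resolve(signup_address))  # jane@example.net
```

The privacy property worth noting: each service sees a different, revocable identifier, and the real address never leaves the mapping service.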
5. The privacy-focused search engine Startpage has announced a news tab for search results. Currently, Startpage offers web, image, and video results, but the addition of a news tab will bring its feature set closer to parity with competitors, making it easier for people to switch.

As with its other search features, the news results will not be influenced by any tracking, which ensures users see a more balanced list of results. Many search engines use prior searches and browsing history to display results. While these companies say that the results are more relevant, Startpage and other privacy-oriented search engines like DuckDuckGo argue that these "filter bubbles" are more like traps where some results are hidden from you. Instead of filtering news results based on your browsing history, Startpage will filter results based on the time and date that they were published; this way you'll be able to keep up with all the latest developments as they evolve. (A toy illustration of this recency-only ranking follows this item.) Commenting on the new feature, Startpage said in an e-mail:

If you're not familiar with Startpage, it is one of the search engines that has gained popularity in recent years following the revelations made by Edward Snowden surrounding state surveillance. Startpage sets itself apart from the competition by storing no IP addresses, no personal user data, and no tracking cookies. Additionally, users can view results incognito with the Anonymous View tool.

Source: Privacy-oriented Startpage search engine adds News tab to results (via Neowin)
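As a toy illustration of the recency-only ranking mentioned above (the snippet and its field names are invented for illustration, not Startpage's actual implementation): because nothing user-specific enters the ranking, every user issuing the same query sees the same list.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Article:
    title: str
    published: datetime

def rank_news(articles: list[Article], query_terms: list[str]) -> list[Article]:
    """Keyword match plus recency ordering; no user profile involved."""
    matches = [
        a for a in articles
        if any(t.lower() in a.title.lower() for t in query_terms)
    ]
    # Newest first: the only ranking signal is publication time, so
    # there is no per-user "filter bubble" shaping the results.
    return sorted(matches, key=lambda a: a.published, reverse=True)

now = datetime.now(timezone.utc)
feed = [
    Article("Privacy law passes", now - timedelta(hours=2)),
    Article("Privacy law proposed", now - timedelta(days=3)),
    Article("Sports roundup", now - timedelta(hours=1)),
]
for a in rank_news(feed, ["privacy"]):
    print(a.published.isoformat(), a.title)
```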
6. The Royal Malaysia Police (PDRM) "are allowed to inspect mobile phones to ensure there are no obscene, offensive, or communication threatening the security of the people and nation," the Dewan Rakyat was told yesterday. According to a media report from MalaysiaKini, PDRM also have the right to carry out "phone bugging" or "tapping" to ensure investigations can proceed in cases involving security. The article quoted Deputy Home Minister Mohd Azis Jamman, who was responding to questions from YB Chan Ming Kai (PH-Alor Star). The Deputy Home Minister also said that "the public should be aware of their rights during a random check, including requesting the identity of the police officer conducting the search for record purposes, in case there is a breach of the standard operating procedures (SOP)". However, details of the "Police SOP" were not revealed.

In 2014, the then Minister in the Prime Minister's Department Nancy Shukri said that law enforcers (such as PDRM) in the country are empowered under five different laws to tap (wiretap) any communications done by suspects of criminal investigations. This includes intercepting, confiscating and opening any package sent via post; intercepting any messages sent or received through any means of telecommunication (voice/SMS/Internet); and intercepting, listening to and recording any (phone) conversations over telecommunications networks. The provisions are found under Section 116C of the Criminal Procedure Code, and also under Section 27A of the Dangerous Drugs Act 1952, Section 11 of the Kidnapping Act 1961, Section 43 of the Malaysian Anti Corruption Commission Act 2009, and Section 6 of the Security Offences (Special Measures) Act 2012 (SOSMA).

According to Malaysia-Today in a 2013 article, Section 6 (SOSMA) gives the Public Prosecutor the power to authorise any police officer to intercept any postal article, as well as any message or conversation being transmitted by any means at all, if he or she deems it to contain information relating to a "security offence". It also gives the Public Prosecutor the power to similarly require a communications service provider like telecommunications companies (including Maxis, Celcom Axiata, Telekom Malaysia, Digi, U Mobile, Yes4G and others) to intercept and retain a specified communication, if he or she considers that it is likely to contain any information related to "the communication of a security offence." Additionally, it vests the Public Prosecutor with the power to authorise a police officer (PDRM) to enter any premises to install any device "for the interception and retention of a specified communication or communications."

The Malaysia-Today article said such a scope of what the government can do in terms of intercepting people's messages is troubling – at least to those who understand its implications. In particular, there are those who are anxious that it can be used to tap detractors and political opponents. "Due to the vagueness and broadness of the ground for executing interception, this provision is surely open to abuse especially against political dissent," said Bukit Mertajam MP Steven Sim at the time. Stressing that the act does not provide any guidelines on the "interception", he added: "The government can legally 'bug' any private communication using any method, including through trespassing to implement the bugging device, and there is no stipulated time frame for which such invasion of privacy is allowed".
"If that is not enough, service providers such as telcos and internet service providers are compelled by Section 6(2)(a)," At the moment, the Malaysian Government has not revealed the number of people/communications it has tapped/intercepted in the past decade.

[Update, 20 November 2019]: Deputy Home Minister Datuk Mohd Azis Jamman released a statement saying that the Royal Malaysia Police (PDRM) can confiscate the cell phones of suspects and those involved in any ongoing investigation, and not conduct random checks on the public.

Source: Malaysia Police (PDRM) can Intercept your Voice Calls/SMS, check your Handphone (via Malaysian Wireless)

p/s: The Deputy Home Minister later clarified that phone checking can only be done if individuals are suspected of committing wrongdoings under the following acts (a warrant will be required as part of the SOP):

  • Penal Code (Act 574)
  • Section 233 under the Communications and Multimedia Act (Act 588)
  • Sedition Act 1948 (Act 15)
  • Security Offences (Special Measures) 2012 Act (747)
  • Anti-Trafficking in Persons and Anti-Smuggling of Migrants 2007 (Act 670)
  • Prevention of Terrorism Act 2015 (Act 769)

The public can report to the Standard Compliance Department (Jabatan Integriti dan Pematuhan Standard, or JIPS) if enforcement officers randomly check phones without a proper warrant and/or SOP.

Source: Home Ministry: PDRM can only check phones belonging to suspects and individuals involved in ongoing investigations (via The Star Online)

p/s 2: The original title of this news item was amended with "(update: Home Minister says cannot)" to reflect that, although the earlier report said police can randomly check (and intercept) the public's devices, the Home Ministry's later clarification (see the p/s above) means the police cannot check (or intercept) devices without a proper warrant tied to one (or more) of the six acts listed.
7. Intelligence agencies stopped the practice last year

American intelligence agencies quietly stopped the warrantless collection of US phone location data last year, according to a letter from the Office of the Director of National Intelligence released today. Last year, in a landmark decision, the Supreme Court ruled against authorities looking to search through electronic location data without a warrant. Citing the ruling, Sen. Ron Wyden (D-OR), a privacy hawk in Congress, wrote a letter to then-Director of National Intelligence Dan Coats asking how agencies like the National Security Agency would apply the court's decision.

In a response to Wyden released today, a representative for the office said intelligence agencies have already stopped the practice of collecting US location data without a warrant. Previously, agencies collected that information through surveillance powers granted under the Patriot Act. But since the Supreme Court's decision, the agencies have stopped the practice, and they now back those searches with a warrant, under the legal standard of probable cause.

In the letter to Wyden, the intelligence community official writes that the Supreme Court's decision presented "significant constitutional and statutory issues," but would not explicitly rule out using the tools in the future. The letter says that "neither the Department of Justice nor the Intelligence Community has reached a legal conclusion" on the matter.

Next month, provisions of the Patriot Act — specifically, Section 215 — are set to expire, raising questions about potential reforms. "Now that Congress is considering reauthorizing Section 215, it needs to write a prohibition on warrantless geolocation collection into black-letter law," Wyden said in a statement. "As the past year has shown, Americans don't need to choose between liberty and security — Congress should reform Section 215 to ensure we have both."

Source: The NSA has stopped collecting location data from US cellphones without a warrant (via The Verge)
8. A Facebook VP says the company is looking into it

Facebook might have another security problem on its hands, as some people have reported on Twitter that Facebook's iOS app appears to be activating the camera in the background of the app without their knowledge. Facebook says it's looking into what's happening.

There are a couple of ways that this has been found to happen. One person found that the camera UI for Facebook Stories briefly appeared behind a video when they flipped their phone from portrait to landscape. Then, when they flipped it back, the app opened directly to the Stories camera. You can see it in action here (via CNET). It's also been reported that when you view a photo on the app and just barely drag it down, it's possible to see an active camera viewfinder on the left side of the screen, as shown in a tweet by web designer Joshua Maddux.

Maddux says he could reproduce the issue across five different iPhones, which were all apparently running iOS 13.2.2, but he reportedly couldn't reproduce it on iPhones running iOS 12. Others reported they were able to replicate the issue in replies to Maddux's tweet. CNET and The Next Web said they were able to see the partial camera viewfinder as well, and The Next Web noted that it was only possible if you've explicitly given the Facebook app access to the camera. In my own attempts, I couldn't reproduce the issue on my iPhone 11 Pro running iOS 13.2.2.

Guy Rosen, Facebook's VP of integrity, replied to Maddux this morning to say that the issue he identified "sounds like a bug" and that the company is looking into it. With the second method, the way the camera viewfinder is just peeking out from the left side of the screen suggests that the issue could be a buggy activation of the feature in the app that lets you swipe from your home feed to get to the camera. (Though I can't get this to work, either.) I don't know what might be going on with the first method — and with either, it doesn't appear that the camera is taking any photos or actively recording anything, based on the footage I've seen.

But regardless of what's going on, unexpectedly seeing a camera viewfinder in an app is never a good thing. People already worry about the myth that Facebook is listening in on our conversations. A hidden camera viewfinder in its app, even if it's purely accidental, might stoke fears that the company is secretly recording everything we do. Hopefully Facebook fixes the issues soon. And you might want to revoke the Facebook app's camera access in the meantime, just to be safe.

Source: Facebook's iOS app might be opening the camera in the background without your knowledge (via The Verge)

p/s: This was posted under Security & Privacy News instead of Mobile News because it concerns a privacy issue with the camera bug in Facebook's iOS app.
9. Apple may have known for months

Apple stakes a lot of its reputation on how it protects the privacy of its users, as it wants to be the only tech company you trust. But if you send encrypted emails from Apple Mail, there's currently a way to read some of the text of those emails as if they were unencrypted — and allegedly, Apple's known about this vulnerability for months without offering a fix.

Before we go any further, you should know this likely only affects a small number of people. You need to be using macOS and Apple Mail, be sending encrypted emails from Apple Mail, not be using FileVault to encrypt your entire system already, and know exactly where in Apple's system files to look for this information. If you were a hacker, you'd need access to those system files, too. Apple tells The Verge it's aware of the issue and says it will address it in a future software update. The company also says that only portions of emails are stored. But the fact that Apple is still somehow leaving parts of encrypted emails out in the open, when they're explicitly supposed to be encrypted, obviously isn't good.

The vulnerability was shared by Bob Gendler, an Apple-focused IT specialist, in a Medium blog post published on Wednesday. Gendler says that while trying to figure out how macOS and Siri suggest information to users, he found macOS database files that store information from Mail and other apps, which is then used by Siri to better suggest information to users. That isn't too shocking in and of itself — it makes sense that Apple needs to reference and learn from some of your information to provide you better Siri suggestions. But Gendler discovered that one of those files, snippets.db, was storing the unencrypted text of emails that were supposed to be encrypted. Here's an image he shared that's helpful to explain what's going on:

The circle on the left is around an encrypted email, which Gendler's computer is not able to read, because Gendler says he removed the private key that would typically allow him to do so. But in the circle on the right, you can make out the text of that encrypted email in snippets.db. Gendler says he tested the four most recent macOS releases — Catalina, Mojave, High Sierra, and Sierra — and could read encrypted email text from snippets.db on all of them. I was able to confirm the existence of snippets.db, and found that it stored portions of some of my emails from Apple Mail. I couldn't find a way to get snippets.db to store encrypted emails I sent to myself, though.

Gendler first reported the issue to Apple on July 29th, and he says the company didn't even offer him a temporary fix until November 5th — 99 days later — despite repeated conversations with Apple about the issue. Even though Apple has updated each of the four versions of macOS where Gendler spotted the vulnerability in the months since he reported it, none of those updates contained a true fix.

If you want to stop emails from being collected in snippets.db right now, Apple tells us you can do so by going to System Preferences > Siri > Siri Suggestions & Privacy > Mail and toggling off "Learn from this App." Apple also provided this solution to Gendler — but he says this temporary solution will only stop new emails from being added to snippets.db. If you want to make sure older emails that may be stored in snippets.db can no longer be scanned, you may need to delete that file, too.
If you want to avoid these unencrypted snippets potentially being read by other apps, you can avoid giving apps full disk access in macOS Catalina, according to Apple — and you probably have very few apps with full disk access. Apple also says that turning on FileVault will encrypt everything on your Mac, if you want to be extra safe. Again, this vulnerability probably won't affect that many people. But if you do rely on Apple Mail and believed your Apple Mail emails were 100 percent encrypted, it seems that they're not. As Gendler says, "It brings up the question of what else is tracked and potentially improperly stored without you realizing it." (A sketch of how you might inspect your own snippets.db follows this item.)

Source: Apple is fixing encrypted email on macOS because it's not quite as encrypted as we thought (via The Verge)
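If you want to check your own Mac for the behaviour Gendler describes, a minimal sketch along these lines may help. Two loud assumptions: the path below is the snippets.db location reported in coverage of the issue and may differ between macOS versions, and the database schema is undocumented, so the script simply scans every table for a phrase you know appeared only in an encrypted email.

```python
import sqlite3
from pathlib import Path

# Reported location of the database; treat the path and schema as
# assumptions and adjust for your macOS version.
DB = Path.home() / "Library/Metadata/CoreSpotlight/index.spotlightV3/snippets.db"
# Replace with text you know appeared only inside an encrypted email.
NEEDLE = "some phrase from an encrypted email"

con = sqlite3.connect(f"file:{DB}?mode=ro", uri=True)  # open read-only
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]

for table in tables:
    cols = [c[1] for c in con.execute(f'PRAGMA table_info("{table}")')]
    for row in con.execute(f'SELECT * FROM "{table}"'):
        for col, value in zip(cols, row):
            if isinstance(value, str) and NEEDLE in value:
                # A hit means supposedly encrypted text is sitting in
                # the snippets database in the clear.
                print(f"found in {table}.{col}: {value[:120]!r}")
con.close()
```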
10. LISBON: Facebook will outline on Nov 6 plans to expand encryption across its Messenger platform, despite warnings from regulators and government officials that the enhanced security will help protect paedophiles and other criminals. Executives told Reuters they will also detail safety measures, including stepped-up advisories for recipients of unwanted content.

The moves follow complaints by top law enforcement officials in the United States, United Kingdom and Australia that Facebook's plan to encrypt messaging on all its platforms would put child sex predators and pornographers beyond detection. The changes, supported by civil rights groups and many technology experts, will be more fully described by company executives at a Lisbon tech conference later in the day. Facebook messaging privacy chief Jay Sullivan and other executives said the company would press ahead with the changes while more carefully scrutinising the data that it collects.

Sullivan plans to call attention to a little-publicised option for end-to-end encryption that already exists on Messenger. The firm hopes increased usage will give the company more data to craft additional safety measures before it makes private chats the default setting. "This is a good test bed for us," Sullivan said. "It's part of the overarching direction." The company will also post more on its pages for users about how the Secret Conversations function works. The feature has been available since 2016 but is not easy to find and takes extra clicks to activate.

The company is also considering banning new Messenger accounts not linked to regular Facebook profiles. The vast majority of Messenger accounts are associated with Facebook profiles, but a greater proportion of stand-alone accounts are used for crime and unwelcome communications, executives said. "We're considering a registration process where prospective Messenger users will only be able to sign up for Messenger by creating or logging into a Facebook account," a Facebook spokesperson said. Requiring a link to Facebook would reduce the privacy protections of those Messenger users but give the company more information it could use to warn or block troublesome accounts or report suspected crimes to police.

The enhanced safety measures the company plans include sending reminders to users to report unwanted contacts and inviting recipients of unwanted content to send plain-text versions of the chats to Facebook to ban senders or potentially report them to police. Facebook might also send more prompts to users reached by people with no shared friends or who have had many messages or friend requests rejected. Facebook had previously said it wanted to ease user reporting of misconduct as it gradually moves toward more encryption, but it has given few details.

Source: Facebook to expand encryption drive despite warnings over crime (via The Star Online)
11. Russia's slowly building its own Great Firewall model, centralizing internet traffic through government servers.

Today, a new "internet sovereignty" law entered into effect in Russia, a law that grants the government the ability to disconnect the entire country from the global internet. The law was formally approved by President Putin back in May. The Kremlin government cited the need to have the ability to disconnect Russia's cyberspace from the rest of the world in the event of a national emergency or foreign threat, such as a cyberattack.

In order to achieve these goals, the law mandates that all local ISPs route traffic through special servers managed by the Roskomnadzor, the country's telecoms regulator. These servers would act as kill-switches and disconnect Russia from external connections while re-routing internet traffic inside Russia's own internet space, akin to a country-wide intranet — which the government is calling RuNet.

The Kremlin's recent law didn't come out of the blue. Russian officials have been working on establishing RuNet for more than half a decade. Past efforts included passing laws that force foreign companies to keep the data of Russian citizens on servers located in Russia. However, internet infrastructure experts have called Russia's "disconnect plan" both impractical and idealistic, pointing to the global DNS system as the plan's Achilles' heel. Even US officials doubt that Russia would be able to pull it off. Speaking on stage at the RSA 2019 security conference in March, NSA Director General Paul Nakasone said he didn't expect Russia to succeed in disconnecting from the global internet. The technicalities of disconnecting an entire country are just too complex not to cripple Russia's entire economy, plunging modern services like healthcare or banking back into a dark age.

IT'S A LAW ABOUT SURVEILLANCE, NOT SOVEREIGNTY

The reality is that experts in Russian politics, human rights, and internet privacy have come up with a much more accurate explanation of what's really going on. Russia's new law is just a ruse, a feint, a gimmick. The law's true purpose is to create a legal basis to force ISPs to install deep-packet inspection equipment on their networks and force them to re-route all internet traffic through Roskomnadzor strategic chokepoints. These Roskomnadzor servers are where Russian authorities will be able to intercept and filter traffic at their discretion and with no judicial oversight, similar to China's Great Firewall.

The law is believed to be an upgrade to Russia's SORM (System for Operative Investigative Activities). But while SORM provides passive reconnaissance capabilities, allowing Russian law enforcement to retrieve traffic metadata from ISPs, the new "internet sovereignty" law provides a more hands-on approach, including active traffic-shaping capabilities. Experts say the law was never about internet sovereignty, but about legalizing and disguising mass surveillance without triggering protests from Russia's younger population, which has grown accustomed to the freedom the modern internet provides.

Experts at Human Rights Watch have seen through the law's true purpose ever since it was first proposed in the Russian Parliament. Earlier this year, they called the law "very broad, overly vague, and [vesting] in the government unlimited and opaque discretion to define threats." This vagueness in the law's text allows the government to use it whenever it wishes, for any circumstance.
Many have pointed out that Russia is doing nothing more than copying the Beijing regime, which also approved a similarly vague law in 2016, granting its government the ability to take any actions it sees fit within the country's cyberspace. The two countries have formally cooperated, with China providing help to Russia in implementing a similar Great Firewall technology.

PLANNED DISCONNECT TEST

But while Russia's new law entered into effect today, officials still have to carry out a ton of tests. Last week, the Russian government published a document detailing a scheduled test to take place this month. No exact date was provided. Sources at three Russian ISPs have told ZDNet this week that they haven't been notified of any such tests; however, if the tests do take place, they don't expect the "disconnect" to last more than a few minutes. Tens of thousands protested this new law earlier this year across Russia; however, the government hasn't relented, choosing to arrest protesters and go forward with its plans.

Source: Russia's new 'disconnect from the internet' law is actually about surveillance (via ZDNet)
12. Allowing facial recognition technology to spread without understanding its impact could have serious consequences.

In the last few years facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful; thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it's becoming a standard part of your journey through the airport.

And facial recognition is still getting smarter. Increasingly it's not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won't be too hard for a camera to not only recognise who you are, but also to make a pretty good guess at how you are feeling. But one of the biggest potential applications of facial recognition on the near horizon is, of course, law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries, police have been testing facial recognition in a number of situations, with varying results.

There's a bigger issue here, as the UK's Information Commissioner Elizabeth Denham notes: "How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?" She warns that when it comes to live facial recognition "never before have we seen technologies with the potential for such widespread invasiveness," and has called for police, government and tech companies to work together to eliminate bias in the algorithms used, particularly that associated with ethnicity. She is not the only one raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so.

There is always a trade-off between privacy and security. Deciding where to draw the line between the two is key. But we also have to make the decision clearly and explicitly. At the moment there is a great risk that, as the use of facial recognition technology by government and business spreads, the decision will be taken away from us. In the UK we've already built up plenty of the infrastructure that you'd need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK, a number that is rising rapidly as we add our own smart doorbells or other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams.

I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies, like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back. For sure, these technologies can have many benefits, from unlocking your phone more quickly to recognising criminals in the street.
But allowing these technologies to become pervasive without rigorous debate about the need for them, their effectiveness and their broader impact on society is deeply unwise and could leave us facing much bigger problems ahead.

Source: We must stop smiling our way towards a surveillance state (via ZDNet)
13. A university lecturer in east China is suing a wildlife park for breach of contract after it replaced its fingerprint-based entry system with one that uses facial recognition, according to a local newspaper report.

Guo Bing, an associate law professor at Zhejiang Sci-tech University, bought an annual pass to Hangzhou Safari Park for 1,360 yuan (RM803) in April, Southern Metropolis Daily reported on Sunday. But when he was told last month about the introduction of the new system, he became concerned it might be used to "steal" his identity and asked for a refund, the report said. The park declined to return his money, so Guo filed a civil lawsuit last week at a district court in Fuying, Hangzhou, the capital of Zhejiang province. The report said the court had accepted the case, in which Guo is demanding 1,360 yuan (RM803) compensation plus costs. "The purpose of the lawsuit is not to get compensation but to fight the abuse of facial recognition," he was quoted as saying.

Guo said that when he bought the ticket – which offers 12 months' unlimited visits to the park for himself and a family member – he was required to provide his name, phone number and fingerprints. He complied with the request and said he had visited the park on several occasions since. However, when the attraction upgraded its admission system, all annual pass-holders were asked to update their records – including having their photograph taken – before Oct 17 or they would no longer be allowed to enter, the report said. Guo said he believed the change was an infringement of his consumer rights.

Zhao Zhanling, a lawyer at the Beijing Zhilin Law Firm, said it was possible the park had breached the conditions of the original contract. "The plaintiff's concern is totally understandable," he said. Facial identities were "highly sensitive", Zhao said, and the authorities "should strictly supervise how data is collected, used, stored and transferred". A manager at the park, who gave her name only as Yuan, said the upgrade was designed to improve efficiency at the entrance gates. Since the launch of the central government's "smart tourism" initiative in 2015, more than 270 tourist attractions around the country have introduced facial recognition systems, Xinhua reported earlier.

Source: Chinese professor sues wildlife park after it introduces facial recognition entry system (via The Star Online)
14. Google announced "quantum supremacy" last week, a technological achievement that has huge repercussions, not only for the company and its role in the world but for all of us individuals who want to maintain a semblance of the right to privacy.

Google researchers have developed a computer called Sycamore, which is exponentially more powerful in its processing power than a "standard" supercomputer. The workings behind Sycamore are what make it such a breakthrough, since it uses an algorithm that would take 10,000 years to give a similar output on a classical computer but only 200 seconds on Google's processor. We should all be very concerned that an industry with a questionable track record on data protection, privacy and political neutrality now has access to the world's most powerful computer.

The charge sheet against Facebook, rather than Google, is the longest – and is still growing. There have been long-standing concerns about the amount of data Facebook is harvesting from its users and what it would (or could) be used for. However, these issues are systemic and industry-wide, and in my opinion, the scandals involving Facebook in recent months and years could just as easily have affected Google, or perhaps even Microsoft. These issues came to a head around the Cambridge Analytica scandal, where Facebook was implicated in allowing a Russian-linked firm to harvest a huge amount of personal data, including political preferences, and allowing that knowledge to be used to meddle in the 2016 US presidential election.

Now that the processing power available to manipulate and use large amounts of data has increased, the stakes are raised in what big data can be used for. The industry, however, doesn't seem to accept these dangers. The implicit aim of tech companies is to acquire more users, more data, and ultimately more advertisers. The symbiotic relationship between these three factors underpins most tech companies' business models, including the current wave of startups in Silicon Valley and elsewhere. This will not change. But what must change is the regulation around data security and its implementation and enforcement.

Regulation is, by and large, already present: in almost every developed country, it is illegal for someone to hold data without a range of rigorous checks and balances on how it is sourced, held and transferred between parties. A range of international treaties, such as the European Union's General Data Protection Regulation (GDPR) and the EU-US Privacy Shield, mean that data can only achieve "freedom of movement" by fulfilling strict criteria. The largest data owners like Facebook and Google tend to follow these rules closely, meaning that the main concern is not control of data, but the data's actual power.

Big data can already predict an individual's consumer habits and personal desires to a somewhat eerie extent. As processing power grows exponentially, will we have Facebook ads that can penetrate deeper and deeper into our lives and consciousness? What will be the effect on our mental health? Our family relationships? And at the macro level, our economies? None of these deeper questions appears to be being asked by either the industry or the regulators. Inevitably, they will become relevant as processing power increases; it is a matter of when, not if.
There is still time for our governments to play catch up and protect consumers. Although Google's Sycamore is advanced, it is still not capable of fulfilling every data scientist's deepest desires. The Sycamore chip is a 54-qubit processor. That is relatively limited, and is one of the many reasons that the discovery is not practically useful. Researchers want a 100-qubit – or even 200-qubit – system before they are really able to put it to the test and see whether the dreams of quantum computing are realised. (A back-of-envelope calculation after this item shows why even 54 qubits already outstrips brute-force classical simulation.)

Rather than just controlling data transfer, it is time for a wider conversation about data usage. Which uses of data – regardless of who owns it and how it has been sourced – are ethical and safe? And which are unethical and dangerous? As lawmakers like US congresswoman Alexandria Ocasio-Cortez seem to enjoy grilling tech executives like Mark Zuckerberg on the minutiae of data usage, I hope we do not lose sight of the bigger picture. The stakes are too high, and the processing power is now too big, for us to be complacent.

By Jamal Ahmed, founder of Kazient Privacy Experts

Source
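To make the scale concrete: simulating an n-qubit state classically in the naive way means storing one complex amplitude per basis state, i.e. 2^n amplitudes. A quick back-of-envelope calculation (assuming 16-byte complex numbers; real Sycamore simulations use cleverer methods than a full state vector):

```python
# Why a 54-qubit state already strains classical simulation:
# a full state vector stores one complex amplitude per basis state.
qubits = 54
amplitudes = 2 ** qubits                 # about 1.8e16 basis states
bytes_per_amplitude = 16                 # complex128: two 8-byte floats
total_bytes = amplitudes * bytes_per_amplitude
print(f"{total_bytes / 2**50:.0f} PiB")  # prints: 256 PiB
```

That is hundreds of petabytes just to hold the state, which is why 54 qubits sits past the edge of brute-force classical simulation even while being "relatively limited" as quantum hardware.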
15. Google's decade-old feature used in Chrome to reduce browsing history is now available for location services in Android.

After announcing it twice in the past year, Google is keeping its promise and rolling out Incognito mode for Maps to its Android users, the company confirmed in a blog post. Modeled after the same tool that has been available in Chrome since 2008 to visit web pages without any browsing history being recorded within the platform, the new feature will prevent users' activity in Maps from being saved to their Google account. This means that, when it is switched on, you can search and view locations without having any information added to your Google account history – making, for instance, Google's personalized recommendations a lot more neutral, since they are based on your personal data. Maps will also stop sending you notifications, updating your location history and sharing your location.

Google first announced that Incognito would be released for Maps a few months ago, and more recently reiterated that the feature was coming soon. Eric Miglia, director of privacy and data protection office at Google, said: "managing your data should be just as easy as making a restaurant reservation or using Maps to find the fastest way back home". While Incognito is indeed easy to switch on – users simply have to tap the option on their profile picture in Maps – there is a caveat. "Turning on Incognito mode in Maps does not affect how your activity is used or saved by internet providers, other apps, voice search, and other Google services", reads the announcement. In other words, turning the feature on minimizes the information stored in users' personal Google accounts, but it doesn't do much to stop third parties from accessing that data. It is therefore useful for those who wish to get rid of personalized recommendations prompted by Google, but it should not be seen as an entirely reliable privacy tool. When it is switched on, Incognito also stops some key features from running, including Google Assistant's microphone in navigation, so it might not be a tool that commuters will be using at all times.

As well as Incognito for Maps, Google teased two other services to enhance privacy protection in its services last month. YouTube will get a history auto-delete option, and Google Assistant will be getting voice commands that let users manage the Assistant's own privacy settings. The company's attempts to strengthen privacy controls for its users come at the same time as loopholes have emerged in Chrome's Incognito mode. Websites were found to be able to detect incognito visitors based on whether or not an API was available in Chrome's FileSystem, which let them enforce free article limits in the case of news websites, for instance. Although Google modified its FileSystem in Chrome 76 to prevent this, website developers have again been crafting methods to bypass the new system. Incognito for Maps is expected to hit iOS soon, but no precise date was confirmed by Google.

Source: Google Maps on Android user? Now you can switch to incognito mode (via ZDNet)

p/s: While this news covers a new feature of the Google Maps app on Android and iOS and was initially intended for Mobile Software News, it is better suited to the Security & Privacy News section, as it places greater emphasis on the security and privacy features of the Google Maps app, including Incognito mode.
16. More scrutiny for the Chinese company

The United States has opened a national security review into TikTok's parent company over its acquisition of social media app Musical.ly, Reuters reports. In 2017, China-based TikTok owner Beijing ByteDance Technology bought up the popular American lip-syncing app — and its user base — for $1 billion. Last year, the app was fully rebranded as part of TikTok. But in the time since the deal closed, TikTok has faced substantial pressure from US lawmakers who have questioned how the company moderates its political content and stores its user data.

In a letter last month, Sen. Marco Rubio (R-FL) called for an investigation into the company, writing that "Chinese-owned apps are increasingly being used to censor content and silence open discussion on topics deemed sensitive by the Chinese Government and Communist Party." The letter followed reports that TikTok was censoring political content that was offensive to the Chinese government. (The company has said its moderation decisions are based in the US and "are not influenced by any foreign government.") Rubio's note was followed by one from Sens. Tom Cotton (R-AR) and Chuck Schumer (D-NY) also calling for a review.

According to Reuters, the US has now launched such a review through the Committee on Foreign Investment in the United States, or CFIUS, which is responsible for reviewing deals with national security implications. The news service reports that TikTok did not go through a CFIUS review when it made the Musical.ly acquisition, and the two sides are now in talks about national security concerns.

The investigation is the latest hurdle for the company, which has dealt with intense scrutiny as the tech industry as a whole faces renewed questions about Chinese censorship online. Last month, in one notable example, Apple was criticized for pulling an app used by pro-democracy protestors in Hong Kong. "While we cannot comment on ongoing regulatory processes, TikTok has made clear that we have no higher priority than earning the trust of users and regulators in the US," a company spokesperson told The Verge in a statement. "Part of that effort includes working with Congress and we are committed to doing so."

Source: US launches national security review of TikTok, Reuters reports (via The Verge)
  17. DNA matching can produce interesting data on family trees, but it may also expose us to serious risk. DNA testing is no longer simply a tool in the medical field -- in recent years, DNA profiling has become a product offered by private companies and third-party services. These tests, often conducted with a home swab and posted away for analysis, can reveal family matches and possible connections, as well as clues to our ethnic heritage.

As records pile up in the databases of companies including Ancestry.com and MyHeritage, third-party websites -- such as GEDmatch -- can also be used to compare DNA sequences submitted by other people. It is indisputably interesting to learn more about our genetic traits and family trees, but as noted by academics from the University of Washington, there may be a trade-off when it comes to our privacy and security.

GEDmatch is the focus of new research into the security risks of DNA profiling. The paper (.PDF), published by University of Washington academics and accepted for presentation at the Network and Distributed System Security Symposium in February, explains how a small number of comparisons made through the platform can be used to "extract someone's sensitive genetic markers," as well as to construct fake profiles that impersonate relatives.

"People think of genetic data as being personal -- and it is. It's literally part of their physical identity," said lead author Peter Ney from the UW Paul G. Allen School of Computer Science & Engineering. "This makes the privacy of genetic data particularly important. You can change your credit card number but you can't change your DNA."

The researchers created an account on GEDmatch and uploaded experimental genetic profiles built from anonymous source data. The platform then assigned each profile an ID. When one-to-one comparisons are made, GEDmatch generates graphics showing how two samples match or differ, including a bar for each of the 22 non-sex chromosomes. It is this bar that the researchers homed in on, creating four "extraction profiles" and making repeated comparisons to deduce the target profile's DNA (a toy sketch of the idea follows this item).

"Genetic information correlates to medical conditions and potentially other deeply personal traits," added co-author Luis Ceze. "Even in the age of oversharing information, this is most likely the kind of information one doesn't want to share for legal, medical and mental health reasons. But as more genetic information goes digital, the risks increase." Millions of us have already submitted our DNA for tests, and as more individuals jump on the trend, the risks are likely to grow.

Using another GEDmatch graphic together with 20 experimental profiles, the researchers showed that larger samples could be exploited against a single record: on average, 92 percent of a test profile's unique sequences could be harvested with roughly 98 percent accuracy.

False relationships, too, are a possibility. The researchers created a fake child profile that took 50 percent of its DNA from one of their experimental profiles. After running a comparison, GEDmatch reported an estimated parent-child relationship. This means it is theoretically possible for attackers to fabricate any family relationship they want by adjusting the fraction of shared DNA.

"If GEDmatch users have concerns about the privacy of their genetic data, they have the option to delete it from the site," Ney said. "The choice to share data is a personal decision, and users should be aware that there may be some risk whenever they share data."
The academics reached out to GEDmatch prior to publication and said that the platform is "working to resolve these issues." The research was funded in part by the University of Washington Tech Policy Lab, with the help of a grant from the Defense Advanced Research Projects Agency (DARPA) Molecular Informatics Program. GEDmatch also provided ZDNet with a statement in response.

Source: GEDmatch highlights security concerns of DNA comparison websites (via ZDNet)
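To make the "extraction profile" idea concrete, here is a deliberately simplified TypeScript toy. This is not the paper's algorithm: real genotypes are not single bits, and GEDmatch's match bars reveal coarse segment-level matches rather than a clean per-marker oracle, which is why the researchers needed several crafted profiles and recovered roughly 92 percent of sequences rather than all of them. The compare function below is a hypothetical stand-in for the site's one-to-one comparison graphic.

```ts
// Toy model of the "extraction profile" idea (NOT the paper's algorithm).
// Pretend the site exposes a per-marker comparison oracle: for each
// position, does the probe profile match the hidden target profile?
type Profile = number[]; // hypothetical biallelic markers encoded as 0 or 1

// Hypothetical oracle standing in for GEDmatch's match-bar graphic.
function compare(target: Profile, probe: Profile): boolean[] {
  return target.map((allele, i) => allele === probe[i]);
}

// Recover the target from a single all-zero probe: any position that
// matches must be 0, and everything else must be 1.
function extract(oracle: (probe: Profile) => boolean[], length: number): Profile {
  const allZero: Profile = new Array(length).fill(0);
  const matches = oracle(allZero);
  return matches.map((m) => (m ? 0 : 1));
}

// Demo with a made-up target profile.
const secret: Profile = [1, 0, 0, 1, 1, 0, 1, 0];
const recovered = extract((probe) => compare(secret, probe), secret.length);
console.log(recovered.join("") === secret.join("")); // true
```

The point of the toy is the asymmetry it illustrates: a service only ever answers "how similar are these two profiles?", yet carefully chosen queries can turn those answers into the raw data itself.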
  18. Facial recognition technology is failing to recognise transgender people, new research has shown, raising concerns about discrimination as use of the software becomes increasingly prevalent.

Researchers at the University of Colorado Boulder in the US tested facial recognition systems from tech giants IBM, Amazon, Microsoft and Clarifai on photographs of trans men and found they were misidentified as women 38% of the time. Cisgender women and men – those who identify with their birth gender – were correctly identified 98.3% and 97.6% of the time respectively. The software also failed to recognise people who do not define themselves as male or female – also known as nonbinary, agender or genderqueer – 100% of the time.

The results highlight that even the most up-to-date technology views gender in only two set categories, the report's lead author Morgan Klaus Scheuerman said in a statement. "While there are many different types of people out there, these systems have an extremely limited view of what gender looks like," Scheuerman said.

Facial recognition remains highly controversial but is increasingly used by police and immigration services. The market for the technology is predicted to double in the next 15 years, according to research group MarketsandMarkets. Software that excludes trans and nonbinary people may prove discriminatory, rendering such people invisible to a technology that is becoming increasingly incorporated into daily life. Misidentification can even be actively harmful, such as at airport security, where trans people are often subject to invasive body searches or harassment if their ID does not match their gender. The Transportation Security Administration (TSA) is currently rolling out facial recognition at airports across the United States.

A spokesman from LGBT+ group Stonewall said: "It's concerning to hear that facial recognition software is misgendering trans people. The experience of being deliberately misgendered is deeply hurtful for trans people. We would encourage technology developers to bring in and consult with trans communities to make sure their identity is being respected."

The study also suggested the software relies on outdated gender stereotypes in its facial analysis. Scheuerman, who is male with long hair, was categorised as female half of the time.

"When you walk down the street you might look at someone and presume that you know what their gender is, but that is a really quaint idea from the '90s and it is not what the world is like anymore," said senior author Jed Brubaker, an assistant professor of Information Science. "As our vision and our cultural understanding of what gender is has evolved, the algorithms driving our technological future have not. That's deeply problematic."

Source: Facial recognition technology struggles to see past gender binary (via The Star Online)
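As an illustration of the binary classification the researchers describe, here is a short TypeScript sketch querying one of the tested services (Amazon Rekognition) through the AWS SDK v3. The bucket and object names are hypothetical; the point is that the API's Gender attribute can only ever return "Male" or "Female", which is exactly the limitation the study flags.

```ts
// Sketch: asking a commercial face-analysis API for its gender attribute.
// Bucket/object names are hypothetical placeholders.
import { RekognitionClient, DetectFacesCommand } from "@aws-sdk/client-rekognition";

const client = new RekognitionClient({ region: "us-east-1" });

async function printGenderLabels(bucket: string, key: string): Promise<void> {
  const response = await client.send(
    new DetectFacesCommand({
      Image: { S3Object: { Bucket: bucket, Name: key } },
      Attributes: ["ALL"], // request the full attribute set, including Gender
    })
  );
  for (const face of response.FaceDetails ?? []) {
    // Gender.Value is a binary label; the schema has no nonbinary option.
    console.log(face.Gender?.Value, face.Gender?.Confidence);
  }
}

printGenderLabels("example-bucket", "face.jpg"); // hypothetical object
```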
  19. Communications and Multimedia Minister Gobind Singh Deo said a clear message needs to be sent about the importance of securing user data in the digital space. Gobind added that, as his ministry looks into tightening and strengthening the current Personal Data Protection Act 2010 (PDPA), he is aware of concerns about the number of data breach incidents reported in Malaysia so far.

"This matter definitely has my attention and I'm putting major focus on it. My ministry is in the process of tightening the law so we can give out a clear signal that data security needs to be guaranteed," he said during an interview at the Maxis Business Spark Summit in Kuala Lumpur today.

Gobind believes that stern action needs to be taken over any issues related to a data breach. "We have to show that we are serious about preventing data breaches and we have to be strict about enforcing the law."

According to Gobind, data security and protection should be guaranteed for Malaysians as the country moves toward the digital era. "If we want to ask more Malaysians to be a part of digital technology such as e-commerce and so on, we have to encourage them to use the Internet to expand their business. We also have to give them guarantee that their data will be safe. We are paying attention to that."

In October, a series of data breach incidents involving a public university and two different ministries was reported in the media. Gobind declined to comment further on any specific case as he is still awaiting a full report on the matter. "The matter is still under investigation. I don't want to comment until I have all the facts," Gobind said, adding that he will make an announcement on PDPA amendments later.

Source: Gobind: Ministry needs to send out clear signal on the importance of data safety (via The Star Online)
  20. LA wants Uber's location data, but the ride-hailing company says it's worried about privacy. The fight between the city of Los Angeles and scooter companies over location data is heating up. On Monday, Uber filed a lawsuit against LA's Department of Transportation (LADOT) pushing back against the requirement that scooter operators share anonymized real-time location data with the city.

The suit, which was first reported by CNET but had yet to be filed in LA Superior Court, centers on LADOT's use of a digital tool called the mobility data specification (MDS). The department created the tool as a way to track and regulate all of the electric scooters operating on its streets. MDS provides the city with data on where each bike and scooter trip starts, the route each vehicle takes, and where each trip ends. LADOT has said the data won't be shared with police without a warrant, won't contain personal identifiers, and won't be subject to public records requests.

Naturally, MDS has proven controversial with scooter companies, which have balked at having to share location data with the city. And the fight is growing beyond LA: cities such as Columbus, Chattanooga, Omaha, San Jose, Seattle, Austin, and Louisville are demanding that scooter companies agree to share data through MDS as a condition of operating on their streets.

Uber, which owns the dockless scooter and bike company Jump, said MDS would lead to "an unprecedented level of surveillance" and vowed to stop it. It is leaning on a recent analysis by California's Legislative Counsel to make its argument; the counsel said MDS could violate the California Electronic Communications Privacy Act (CalECPA), which was signed into law in 2015.

In August, Uber and Lyft sent a letter to California Attorney General Xavier Becerra in which the companies argued that LADOT was exceeding its authority with MDS. "While we support the creation of a global standard for data-sharing for local municipalities, it appears that certain city MDS requirements may be in violation of CalECPA," the companies wrote. "We have repeatedly raised concerns directly with these municipalities throughout the development and implementation of MDS, and yet they continue to require the MDS as a condition of our operating permits."

In a statement, Uber said that it had exhausted its options and had "no choice" but to sue the city. A spokesperson for LADOT did not immediately respond to a request for comment. In an interview with The Verge on September 9th, LADOT director Seleta Reynolds said that the city "encoded" privacy protections into the regulations in order to give them "the force of law." She added that it is a "Day One job and a forever job" of city officials to make sure that the "open source tools that we build do not become tools that people can use to invade the privacy of others."

Source: Uber sues Los Angeles as the fight over scooter data escalates (via The Verge)
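For context on what "sharing data through MDS" means in practice, here is a rough TypeScript sketch of the shape of an MDS trip record, based on the openly published Mobility Data Specification provider API (circa v0.3/0.4). Field names are from that spec as best I can tell and may differ between versions; treat this as illustrative, not authoritative.

```ts
// Approximate shape of an MDS "trip" record (illustrative; field names
// follow the open MDS provider spec circa v0.3/0.4 and may vary by version).
interface GeoJsonPointFeature {
  type: "Feature";
  geometry: { type: "Point"; coordinates: [number, number] }; // [lon, lat]
  properties: { timestamp: number }; // Unix time (ms) for this route point
}

interface MdsTrip {
  provider_id: string;   // UUID of the operator, e.g. Jump
  device_id: string;     // UUID of the individual scooter or bike
  vehicle_type: "bicycle" | "scooter";
  trip_id: string;       // UUID of this trip
  trip_duration: number; // seconds
  trip_distance: number; // meters
  start_time: number;    // Unix time (ms)
  end_time: number;      // Unix time (ms)
  // The full route as timestamped GeoJSON points: this is the part that
  // reveals where each trip started, travelled, and ended.
  route: { type: "FeatureCollection"; features: GeoJsonPointFeature[] };
}
```

Even without rider names, a complete timestamped route is exactly the kind of record privacy advocates worry can be re-identified, which is why the anonymization and warrant commitments in LADOT's policy matter so much.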
  21. Last month, Google announced a plan to encrypt DNS lookups in Chrome via DNS over HTTPS (DoH). In the United States, the move was criticized by Internet Service Providers, whose ability to monitor traffic it would limit, and welcomed by privacy activists. Today, Google is pushing back against "misconceptions" regarding its rollout.

The current lack of encryption when browsers make requests to DNS providers means that others can track which sites you visit or maliciously redirect you to another page. Chrome, like other browsers, addresses this by carrying DNS lookups over a secure HTTPS connection.

Google starts by noting that it is not switching users' DNS provider to its own 8.8.8.8 service. Rather, Chrome will simply use a secure connection if the user's current provider supports DoH. Another concern has been whether encrypted DNS in Chrome will interfere with the parental controls some ISPs offer to block inappropriate websites; Google says there should be no actual impact.

So far, Chrome plans to roll out DoH support to only 1% of users. Still an "experiment," Google wants to monitor performance and reliability, while Chrome 79 will offer the ability to opt out via a new flag: chrome://flags/#dns-over-https.

Source: Google addresses 'misconceptions' about Chrome's encrypted DNS push (via 9to5Google)
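To see what "DNS over HTTPS" means concretely, here is a small TypeScript sketch that resolves a hostname through Google's public DoH JSON endpoint (dns.google/resolve). Chrome itself speaks the binary RFC 8484 wire format rather than this JSON API, so this is a simplified illustration of the principle: the lookup rides inside an ordinary encrypted HTTPS request, invisible to on-path observers.

```ts
// Resolve a hostname over HTTPS using Google's public DoH JSON API.
// Illustrative of the principle; Chrome's internal DoH uses RFC 8484.
async function resolveOverHttps(hostname: string): Promise<string[]> {
  const url = `https://dns.google/resolve?name=${encodeURIComponent(hostname)}&type=A`;
  const response = await fetch(url); // an ordinary encrypted HTTPS request
  const data = await response.json();
  // "Answer" holds the resource records; "data" is the resolved address.
  return (data.Answer ?? []).map((record: { data: string }) => record.data);
}

resolveOverHttps("example.com").then((ips) => console.log(ips));
```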
  22. Australia's consumer watchdog has sued Google and its local subsidiary, accusing the Alphabet Inc. company of misleading users about how it obtains permission to track their location.

At issue is Google's Location History setting on Android mobile devices. The way Google presents it would lead users to believe that turning the feature off is enough to stop the company from storing their location data, the Australian Competition and Consumer Commission (ACCC) alleges in its lawsuit. In fact, users also needed to switch off "Web & App Activity" tracking to truly block storage of location data, it said in its filing.

"We allege that Google misled consumers by staying silent about the fact that another setting also had to be switched off," said ACCC chair Rod Sims in the statement announcing the action. A Google spokesman did not immediately respond to a request for comment.

Google's privacy controls have drawn criticism in the past, and the company has taken steps to centralise them and make them more transparent. Even so, options remain fragmented across multiple settings. Google's smartphone services store users' locations even when privacy settings are adjusted to shut these features off, according to a report by the Associated Press that was confirmed by Princeton University researchers. Google said at the time that Location History is entirely opt-in but that, even if it is disabled, the company will continue to use location to improve the user experience in search or navigation, for instance.

The period addressed by the complaint in Australia's Federal Court spans January 2017 until late 2018, with a supplementary issue raised relating to the second half of 2018. The ACCC's additional allegation is that Google misled consumers into thinking that "the only way they could prevent Google from collecting, keeping and using their location data was to stop using certain Google services, including Google Search and Google Maps". That hid the fact, it argued, that disabling location tracking could in truth "be achieved by switching off both 'Location History' and 'Web & App Activity'."

The ACCC seeks penalties and the setup of a compliance program for future activities, among other measures. The watchdog now has the power to levy penalties as high as 10% of revenue, Sims told reporters.

Source: Google sued for misleading Australian users on location tracking (via The Star Online)
  23. The Tor Project has announced the release of Tor Browser 9.0. The new release brings several user-experience improvements, integrating more features directly into the browser and scrapping the onion button. Additionally, localisation has been improved, with support added for Macedonian and Romanian, bringing the total number of supported languages to 27.

Tor Browser 9.0 uses Firefox 68.2.0 as its foundation. To replace the onion button of older releases, the Tor Project has altered Firefox's own interface: circuit information now lives under the 'i' button in the address bar, more Tor settings appear in about:preferences, and a new identity button sits in the toolbar and in the menu.

One way websites can identify Tor users is by the size of the browser window. For several releases, maximising the Tor Browser window has triggered a notification warning users not to do so. To make things simpler, a new feature called letterboxing has been added: it restricts the amount of space a webpage can use, so even if the browser is maximised, the page is surrounded by a grey border rather than filling the exact screen dimensions.

To get the new update, either download a fresh copy of the browser from the official website or, if you already have Tor Browser installed, simply keep using it and it should update automatically.

Source: The Tor Project releases Tor Browser 9.0 with several UX improvements (via Neowin)
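A quick TypeScript sketch of the signal letterboxing is designed to blunt: any page can trivially read the viewport size, and an exact, unusual width-by-height pair (such as a maximised window on a particular monitor) helps narrow down who you are. The bucketed example value below is illustrative; the key point is that letterboxing rounds the usable page area to coarse steps so many users report identical dimensions.

```ts
// What a fingerprinting script can read in any browser: the exact
// viewport dimensions. Under letterboxing, Tor Browser keeps the page
// area at rounded sizes, so a maximised window no longer leaks the real
// screen dimensions -- many users share the same bucketed value.
function viewportFingerprint(): string {
  return `${window.innerWidth}x${window.innerHeight}`;
}

console.log(viewportFingerprint()); // e.g. "1000x600" under letterboxing
```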
  24. The team behind the Tails operating system has announced the availability of Tails 4.0, the first major release to be based on Debian 10. The re-basing of the operating system brings new software. Two important updates are the Linux kernel, which adds support for new hardware, and the Tor Browser, which was bumped to version 9.0 and now stops websites identifying you by the size of the browser window, using a technique called letterboxing.

Other updated packages include: KeePassXC, which replaces KeePassX; OnionShare, upgraded to 1.3.2 with usability improvements; the metadata cleaner MAT, upgraded to 0.8.0, which loses its graphical interface and now appears in the right-click menu instead; Electrum, upgraded to 3.3.8 and working in Tails again; and Enigmail, updated to mitigate OpenPGP certificate flooding. Audacity, GIMP, Inkscape, LibreOffice, git, and Tor all received upgrades too.

Another major change is to the Tails Greeter. With this update, languages with too few translations to be useful have been removed, the list of keyboard layouts has been simplified, the options chosen in the Formats settings are now actually applied (previously they were not), and it is now possible to open help pages in languages other than English, where available.

Finally, this update brings performance and usability improvements: Tails 4.0 starts 20% faster, requires 250MB less RAM, has a smaller download footprint, adds support for Thunderbolt devices, makes the on-screen keyboard easier to use, and supports USB tethering from an iPhone. Unfortunately, users on previous versions will have to perform a manual upgrade to Tails 4.0, but it shouldn't take too long; you can find more information in the Tails install guide.

Source: Tails OS is now based on Debian 10 and ships major Tor Browser update (via Neowin)
  25. After the company’s many privacy sins, people are apparently hesitant to put a Facebook device with a camera in their living rooms. Imagine that.

Surprise, surprise. Facebook’s Portal video-chat device—which puts a camera and a sensitive microphone in your living room—isn’t flying off the shelves, say supply-chain sources and store sales reps. The device, which launched a little over a year ago, has been plagued by the privacy concerns of would-be buyers from the start.

And Creative Strategies analyst Ben Bajarin tells Fast Company that his sources at the companies that supply parts for the Portal say that shipments of the devices are “very low.” “The orders for the components were not high to begin with, and the build volumes were low,” Bajarin says. “They [Facebook] never meant to build up a large inventory.” Bajarin says the Portal is selling in the “hundreds of thousands” per year range, not in the millions. Rakuten, which tracks online device sales, says Portal accounts for 0.6% of units sold in the overall smart-speaker category, and 3.9% of smart speakers with screens (such as Amazon’s Echo Show and Google Home).

Anecdotal accounts seem to echo Bajarin’s sources. A sales rep at a Best Buy store in midtown Manhattan said they’d sold “hardly any” of the Portals at all. And a rep at the main San Francisco store said the devices sold reasonably well when they launched last year but have slowed since then. Facebook didn’t immediately respond to a request for comment.

The irony is that, from a pure technology perspective, the Portal may be the best device on the market. The device uses superior computer-vision AI to focus on the people in conference calls, as if a real cameraman were framing the shots. “I’ve always said that Portal is a really great product and had some other big tech company like Google or Amazon or Microsoft launched it, this hardware would have done much better,” Bajarin said.

Facebook doubled down on Portal this year with the June announcement of a second generation of the device, which supports WhatsApp video chats, not just Facebook Messenger chats. The Portal competes with the Google Home and Amazon’s Echo Show devices. While the Portal supports Amazon’s Alexa, the assistant is limited to a narrow set of communications-related functions. In the Echo Show, Alexa can perform more tasks, including controlling smart-home devices. The Assistant in Google Home can also perform a full range of smart-home control functions.

Facebook delayed the release of the original Portal from May until December last year because the company was embroiled in data-privacy scandals and feared the public might not be in the mood to accept a Facebook camera in the living room. It’s unlikely the public’s mood has improved much since then. “I do think the heart of the matter is that Facebook has lost people’s trust to the point that Facebook may never evolve beyond what it is as an online place people see stuff from their friends,” Bajarin said. “Basically, a social wall.”

Source