Showing results for tags 'privacy'.

Found 329 results

  1. Our online presence has infiltrated the workplace, but is what we post affecting our promotions at work? With the world always accessible in the palm of our hands, it's often difficult to disconnect. It's hard to believe that in 2005 only 5% of adults in the US used at least one social media platform. Now, in 2019, almost three out of four (72%) people are active on social media, so it's no wonder that the lines between our home and work lives are becoming blurred. We are suffering from social media overload. But what happens when your co-workers (or even your boss) decide they want to follow you online? Vancouver, BC-based job interview company The Interview Guys studied data from 1,024 employees who had been followed by a friend or colleague across their social media accounts. It wanted to discover more about their self-censorship behavior online. Almost all respondents (97.8%) reported being followed or friended by their colleagues on Facebook, followed by 82.1% on Instagram, 75.6% on Snapchat, and 65.2% on Twitter. Most of these connections (94.8%) came from co-workers they interacted with daily. Almost half of these friend requests (48%), however, came from people higher up in the organization, such as supervisors or managers. Several respondents reported having an account purely for work purposes: almost one in five (19.5%) had a work account on Twitter, 15.1% on Facebook, 13.9% on Instagram, and 13.3% on Snapchat. Three in 10 employees accepted friend requests to keep the peace at work, as having a good relationship with your co-workers is crucial to job-related success. Is it worth accepting a friend request from someone you do not particularly like, or have a difficult working relationship with? Many employees self-censor their posts because colleagues can see them. Employees over 50 self-censored their posts 20% more than employees in their 20s.
Over two out of five (41%) employees in their 20s admitted to avoiding posting content that involves drinking or drug use on their social media profiles. Seventy-seven percent of respondents acknowledged using privacy settings on their social posts, and around 76% did the same for their photos. The main topics employees avoided posting about because their co-workers could see them were political feelings (36.0%), drinking or drug use (34.2%), and anti-company statements (32.0%). Around 31% of respondents in their 50s or older avoided posting about the company they worked for. Two out of five employees say their company is strict about their social media usage outside of work, and one in three people report knowing someone whose employer terminated them over their actions on social media. Over 30% of people say companies should screen job applicants' social media as part of the hiring process. One in 10 employees said they were required to disclose their social media profiles when they applied for their current position. Social media contributes to the burnout many experience at work, and adds to anxiety about colleagues monitoring their social activity. Perhaps it is time to take a step back. If you keep your social profiles public, don't be surprised if you find co-workers lurking on your content. You might want to scrub your social media posts and job hop to a better career. It might be time to revisit your privacy settings and make sure your private posts stay that way. Source
  2. Dear friends, Nowadays our privacy is very important. I am interested to know which VPN service you use and which is the best in your opinion. Not all VPN services are secure enough. Recently it was discovered that Hotspot Shield could, in some cases, reveal your real IP. Have a look here: 1. Android 2. Windows Thanks for your time spent on this poll! :)
  3. After establishing its work with U.S. police, Amazon’s AI will probably start helping European units too. A presentation at a Europol conference analyzed the potential of such a collaboration with Alexa. The AI could soon be integrated into police investigation procedures, as well as proactive policing. During the recent EDEN (Europol Data Protection Experts Network) Conference on Data Protection in Law Enforcement in Copenhagen, Amazon’s AI Alexa was officially put into the picture. Twenty-five speakers analyzed, in front of an audience of 140 members of the Danish National Police, law enforcement experts, academics, and representatives of private entities, how tools like Alexa can help support police investigations. The five pillars of the presentations concerned data retention methods, email address-based tracking, data protection considerations, global access to criminal evidence, and predictive policing. This last element is a field with increasing involvement of AI and specially crafted crime prediction algorithms. The panel analyzed methods that can be used to deter potential crime while, at the same time, respecting people’s fundamental rights. On data retention, the panel presented the damage caused by restricting police access to citizens' communications data on the basis of privacy, and discussed possible ways around that. Similarly, EU data protection principles were discussed, as well as how to create a surveillance system that won’t violate the regulation. In addition, making criminal evidence globally accessible and allowing the authorities to access data on smart devices was also proposed. Finally, there were talks about searching for email addresses and other forms of identification or tracking data on the dark web. Alexa can play a pivotal role in the above, helping investigators not only to access the data they want but to find specific information quickly.
Amazon’s AI can quickly locate a person or a device, provide additional evidence such as video feeds or audio recordings, and help the police determine the innocence of a suspect. In the same way, the police could receive data from Alexa devices that would be fed into crime prediction systems and generate a warning when there’s a high risk of crime. For example, an intense fight between people near an Amazon Echo could result in a visit by a police unit. As we saw in August, Amazon has been closely collaborating with U.S. police for quite a while now, helping them track individuals, get video feeds from Ring doorbell devices, and convince citizens to opt in to mass surveillance programs. While EDEN’s proposals are, for now, just proposals, they are a clear indication of where we’re heading. Source
  4. The publication by WikiLeaks of documents it says are from the CIA's secret hacking program describes tools that can turn a world of increasingly networked, camera- and microphone-equipped devices into eavesdroppers. Smart televisions and automobiles now have on-board computers and microphones, joining the ubiquitous smartphones, laptops and tablets that have had microphones and cameras as standard equipment for a decade. That the CIA has created tools to turn them into listening posts surprises no one in the security community. Q: How Worried Should Consumers Be Who Have Surrounded Themselves with These Devices? A: Importantly, the intrusion tools highlighted by the leak do not appear to be instruments of mass surveillance. So, it's not as if everyone's TV or high-tech vehicle is at risk. "It's unsurprising, and also somewhat reassuring, that these are tools that appear to be targeted at specific people's (devices) by compromising the software on them -- as opposed to tools that decrypt the encrypted traffic over the internet," said Matt Blaze, a University of Pennsylvania computer scientist. The exploits appear to emphasize targeted attacks, such as collecting keystrokes or silently activating a Samsung TV's microphone while the set is turned off. In fact, many of the intrusion tools described in the documents are for delivery via "removable device." Q: Once Devices Are Compromised, They Need To Be Internet-Connected in Order To Share Collected Intelligence with Spies. What Can Be Done To Stop That? A: Not much, if you don't want to sacrifice the benefits of the device. "Anything that is voice-activated or that has voice- and internet-connected functionality is susceptible to these types of attacks," said Robert M. Lee, a former U.S. cyberwar operations officer and CEO of the cybersecurity company Dragos.
That includes smart TVs and voice-controlled information devices like the Amazon Echo, which can read news, play music, close the garage door and turn up the thermostat. An Amazon Echo was enlisted as a potential witness in an Arkansas murder case. To ensure a connected device can't spy on you, unplug it from the grid and the internet and remove the batteries, if that's possible. Or perhaps don't buy it, particularly if you don't require the networked features and the manufacturer hasn't proven careful on security. Security experts have found flaws in devices -- like WiFi-enabled dolls -- with embedded microphones and cameras. Q: I Recently Began Using WhatsApp and Signal on My Smartphone for Voice and Text Communication Because of Their Strong Encryption. Can the Exploits Described in the WikiLeaks Documents Break Them? A: No. But exploits designed to infiltrate the operating system on your Android smartphone, iPhone, iPad or Windows-based computer can read your messages or listen in on conversations on the compromised device itself, even though communications are encrypted in transit. "The bad news is that platform exploits are very powerful," Blaze tweeted. "The good news is that they have to target you in order to read your messages." He and other experts say reliably defending against a state-level adversary is all but impossible. And the CIA was planting microphones long before we became networked. Q: I'm Not a High-Value Target for Intelligence Agencies. But I Still Want To Protect Myself. How? A: It may sound boring, but it's vital: Keep all your operating systems patched and up to date, and don't click links or open email attachments unless you are sure they are safe. There will always be exploits of which antivirus companies are unaware until it's too late. These are known as zero-day exploits because no patches are available and victims have zero time to prepare.
The CIA, National Security Agency and plenty of other intelligence agencies purchase and develop them. But they don't come cheap. And most of us are hardly worth it. Source
  5. Leaked documents show that Microsoft’s contractors are paid between $12 and $14 an hour and are asked to transcribe as many as 200 audio clips per hour to train the Cortana virtual assistant. "Stop listening to me" is one example of a command a Cortana user may utter, according to a training manual for the human contractors Microsoft hires to listen to and classify users' speech. Apple, Google, Amazon, and most recently Facebook have been found hiring human workers to transcribe audio captured by their own products. Motherboard found Microsoft does the same for some Skype calls, and is still doing so despite other companies suspending their reliance on contractors. A cache of leaked documents obtained by Motherboard gives insight into what the human contractors behind the development of tech giants' artificial intelligence services are actually doing: laborious, repetitive tasks that are designed to improve the automated interpretation of human speech. This means tasks tech giants have promised are completed by virtual assistants and artificial intelligence are trained by the monotonous work of people. The work is magnified by the large footprint of speech recognition tools: Microsoft's Cortana product, similar to Apple's Siri, is implemented in Windows 10 machines and Xbox One consoles, and is also available on iOS, Android, and smart speakers. "The bulk of the work I've done for Microsoft focused on annotating and transcribing Cortana commands," one Microsoft contractor said. Motherboard granted the source anonymity to speak more candidly about internal Microsoft processes, and because they had signed a non-disclosure agreement. The instruction manuals on classifying this sort of data go on for hundreds of pages, with a dizzying number of options contractors must choose from to classify data, and punctuation style guides they're told to follow.
The contractor said they are expected to work on around 200 pieces of data an hour, and noted they've heard personal and sensitive information in Cortana recordings. A document obtained by Motherboard corroborates that for some work contractors need to complete at least 200 tasks an hour. The pay for this work varies. One contract obtained by Motherboard shows pay at $12 an hour, with the possibility of contractors being able to reach $13 an hour as a bonus. A contract for a different task shows $14 an hour, with a potential bonus of $15 an hour. One section of the training materials focuses especially on how the trigger command "Hey, Cortana" is pronounced in different languages and accents, including German, Chinese, Japanese, and Australian, Canadian, and American variations of English. Notably, one document tells contractors to transcribe a word as "Cortana" even if the user mispronounced Cortana as, say, "Cortona" or "Cortina," because, Microsoft believes, activating Cortana was the intent. "There are tasks where we're required to clearly capitalize proper names that relate to a contact, or other personal info," the contractor said. A Microsoft spokesperson told Motherboard in an emailed statement, "We’re always looking to improve transparency and help customers make more informed choices. Our disclosures have been clear that we use customer content from Cortana and Skype Translator to improve these products, we engage third party expertise to assist in this process, and we take steps to de-identify this content to protect people’s privacy." After Motherboard reported that contractors were listening to some Skype calls made using the service's translator function, Microsoft updated its privacy policy and other pages to explicitly include that humans may listen to collected audio. As for the work itself, one main task for contractors working with Cortana data is to classify it. Contractors are asked to bucket each transcription into a "domain" or topic. 
These over two dozen domains include "Calendar" for anything around appointments; "Alarm" for commands related to timers or alarms; and "Capture" for tasks that involve using the camera. Other domains include gaming, email, communication, feedback, events, home automation, note, media control, and "Orderfood," according to the documents. The "common" domain is for generic commands that could fit into more than one domain, the documents add. Each domain then has several different "intents." For the Alarm domain, those include set alarm, turn off alarm, find alarm, change alarm, snooze, set timer, find timer, and more. Microsoft's human contractors analyze these Cortana commands, and then decide the appropriate domain and intent. Another document shows how intents are frequently removed from or added to different domains, giving contractors more classifiers to work with. Some audio also relates to "double intent," where a user is asking Cortana to complete two tasks at once, which a contractor also has to look out for, the documents add. One intent contractors are asked to classify data under is called "are_you_listening." Source
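     To make the labeling scheme described above concrete, here is a minimal sketch of how a domain/intent taxonomy like the one in the leaked documents might be modeled in code. The domain and intent names are taken from the article; the data structure, the placement of "are_you_listening" under the common domain, the "take_photo" intent, and the label() helper are all illustrative assumptions, not Microsoft's actual tooling.

     ```python
     # Hypothetical model of the Cortana labeling taxonomy (a sketch,
     # not Microsoft's real schema). Domain/intent names come from the
     # article; everything else is an assumption for illustration.

     DOMAINS = {
         "Calendar": {"set_appointment", "find_appointment"},
         "Alarm": {"set_alarm", "turn_off_alarm", "find_alarm",
                   "change_alarm", "snooze", "set_timer", "find_timer"},
         "Capture": {"take_photo"},          # camera tasks (intent name assumed)
         "Common": {"are_you_listening"},    # generic commands fitting >1 domain
     }

     def label(transcription, domain, intents):
         """Attach a domain and one or more intents to a transcribed clip.

         More than one intent models the 'double intent' case, where a
         user asks Cortana to complete two tasks in a single utterance.
         """
         if domain not in DOMAINS:
             raise ValueError(f"unknown domain: {domain!r}")
         for intent in intents:
             if intent not in DOMAINS[domain]:
                 raise ValueError(f"intent {intent!r} not in domain {domain!r}")
         return {"text": transcription, "domain": domain, "intents": intents}

     # A 'double intent' example: one utterance, two Alarm-domain tasks.
     record = label("set a timer and snooze my alarm", "Alarm",
                    ["set_timer", "snooze"])
     print(record["intents"])  # ['set_timer', 'snooze']
     ```

     Keeping the allowed intents per domain in one table mirrors the article's description of intents being added to or removed from domains over time: contractors' choices change simply by editing the table.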
  6. They’re especially concerned with a recent security flaw in Messenger Kids. Senators are questioning Facebook again. This time their concerns are related to a design flaw that let thousands of kids join group chats with unauthorized users, The Verge reports. Senators Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) wrote a letter to Mark Zuckerberg today, asking whether Facebook has done enough to protect children's online safety. Last month, a report by The Verge revealed the design flaw in Messenger Kids. The app is supposed to require parental permission before kids chat with other users. But a loophole allowed approved users to invite kids to group chats, even if unauthorized users were there too. In response, Facebook reportedly alerted parents to the flaw and shut down group chats created through the loophole. Markey and Blumenthal want to know how long Facebook knew about the flaw, how long it existed, if all parents have been notified and what measures Facebook will take to prevent similar issues in the future. Because the app is intended for users between the ages of six and 12, it must comply with the Children's Online Privacy Protection Act (COPPA). It's unclear if this recent glitch violates COPPA, but privacy experts and advocacy groups previously filed an unrelated complaint with the Federal Trade Commission (FTC). They claim that Messenger Kids collects children's personal information without clear disclosures or parental consent. Facebook is not the only tech company struggling to protect children's privacy. Both Google and TikTok have been slapped with multimillion-dollar fines for violating COPPA. This spring, privacy advocates filed a complaint alleging that Amazon stores kids' conversations and data even after parents attempt to delete it.
Whether or not Facebook is fined for this Messenger Kids flaw or other potential COPPA violations, it will need to do a better job protecting children's privacy if it wants to fulfill its "future is private" promise. Engadget has reached out to Facebook for comment. Source
  7. Key Points
     • David Marcus, the head of Facebook's digital currency project, says during a hearing in the U.S. Senate on Tuesday that authorities in Switzerland will oversee data and privacy protections for its new cryptocurrency, Libra.
     • But a spokesperson for the Swiss agency, the Federal Data Protection and Information Commissioner, says it has not yet been contacted by Facebook.
     • Several senators who questioned Marcus on Tuesday express concern over Facebook's reputation with data privacy.
     Facebook said on Tuesday that Switzerland's data protection agency will oversee data and privacy protections for its new cryptocurrency, Libra. But Facebook hasn't reached out to the Swiss regulator, a spokesman for the agency told CNBC. In his testimony before the Senate Banking Committee on Tuesday, David Marcus, the head of Facebook's digital currency project, said, "For the purposes of data and privacy protections, the Swiss Federal Data Protection and Information Commissioner (FDPIC) will be the Libra Association's privacy regulator." Asked about the agency's role regulating Libra, Hugo Wyler, head of communication at the FDPIC, said in a statement to CNBC: "We have taken note of the statements made by David Marcus, Chief of Calibra, on our potential role as data protection supervisory authority in the Libra context. Until today we have not been contacted by the promoters of Libra," Wyler said. "We expect Facebook or its promoters to provide us with concrete information when the time comes. Only then will we be able to examine the extent to which our legal advisory and supervisory competence is given. In any case, we are following the development of the project in the public debate." A Facebook spokesperson confirmed that the company hasn't yet met with the FDPIC. Facebook's cryptocurrency project has already been met with skepticism from policymakers around the world. U.S.
Treasury Secretary Steven Mnuchin and Federal Reserve Chairman Jerome Powell both said they have "serious concerns" about Libra related to money laundering, financial stability and regulation. Many of the senators who questioned Marcus on Tuesday also brought up data privacy concerns tied to Libra. While FDPIC would handle data privacy issues, the Swiss Financial Markets Supervisory Authority, or FINMA, would be the main financial regulator of Libra, Marcus said in his testimony. FINMA confirmed to CNBC it was in contact with initiators of the Libra project. Source
  8. When Apple CEO Tim Cook privately hosted six Democratic lawmakers at the company's space-age headquarters this spring, he opened the conversation with a plea - for Congress to finally draft privacy legislation after years of federal inaction. Image: Apple displayed its privacy message at the tech conference CES 2019 in January. "It was the first issue he brought up," said Washington Rep. Suzan DelBene, one of the lawmakers who made the trip to Cupertino, California. The Apple chief "really talked about the need for privacy across the board," said DelBene, a former Microsoft executive. But when DelBene discussed her own privacy bill, which would require companies to obtain consent before using consumers' most sensitive information in unexpected ways, Cook didn't specifically endorse it, she said. A number of privacy advocates and U.S. lawmakers - who did not attend the meeting - say Apple has not put enough muscle behind any federal effort to tighten privacy laws. And state lawmakers, who are closest to passing rules to limit data sharing, say Apple is an ally in name only - and in fact has contributed to lobbying efforts that might undermine some new data-protection legislation. While Apple formally supports the notion of a federal privacy law, the company has yet to formally back any bills proposed on the Hill - unlike Microsoft. "I would argue there's a need for Apple to be a more vocal part of this debate," said Virginia Sen. Mark Warner, a fierce critic of tech companies for their privacy violations. And in California, Washington and Illinois, home to the most significant state privacy bills, the iPhone giant has sought to battle back or soften local legislators' proposed bills, often through its trade associations. That has frustrated lawmakers such as California assemblyman Marc Levine, who has introduced two privacy bills in the Golden State's legislature this year. 
He and others argue that states represent the best hope for privacy legislation given the lack of federal progress. "While the headlines from Tim Cook have him being really forward on advancing the idea that policy can help control how data is used and mismanaged and abused, that hasn't played out in policy making," he said. "I would welcome a stronger presence by Apple and I would also welcome their advocacy on what best practices should be." Apple indirectly opposed the legislation, via trade groups it funded. On the other hand, Levine noted that Apple had approached him directly to discuss California's plastic bag ban. "They lobby in all these other areas. They're just not face forward on privacy." "We believe privacy is a fundamental human right and is at the core of what it means to be an American. To that end, we advocate for strong federal legislation that protects everyone regardless of which state they may live," said Apple spokesman Fred Sainz. "We understand the frustration at the state level - we are frustrated too - but this topic is so important we need to be united across America." The states' privacy protections could influence federal lawmakers as they try to craft a national standard, experts say. Politicians and aides said they hoped Apple would counterbalance the more active Google and Facebook, whose businesses are highly reliant on data-sharing. Still, by focusing any support purely behind a potential federal law, Apple's position for now allows it to function in an economy in which there is very little regulation of privacy. Apple and other tech giants are scheduled to testify on Capitol Hill Tuesday in front of a House Judiciary subcommittee focused on antitrust issues, the latest blockbuster hearing to highlight one of Washington's most pressing questions: How to regulate large technology companies that have come to sway markets, influence elections and impact the social fabric of society. 
On the issue of privacy, Apple itself has helped create the sky-high expectations with its public pronouncements. For months, it has been running advertisements touting its privacy bona fides, including one plastered to the side of a hotel during a major tech conference that promised, "What happens on your iPhone, stays on your iPhone." Last year, Apple chief executive Cook sharply attacked Facebook for its practice of collecting personal information on users in the wake of the Cambridge Analytica scandal. Months later, he made a rare appearance before the European Parliament, calling for a U.S. version of Europe's tough data-protection rules, known by the acronym GDPR. In a recent commencement speech at Stanford University, Cook painted a frightening picture of a world in which the collection of consumer data continues unabated. "Even if you have done nothing wrong other than think differently, you begin to censor yourself. Not entirely at first. Just a little, bit by bit. To risk less, to hope less, to imagine less, to dare less, to create less, to try less, to talk less, to think less. The chilling effect of digital surveillance is profound, and it touches everything," he said. Despite Apple's public stance on privacy, a Washington Post investigation earlier this year found Apple allows iPhone apps to include tracking software that surreptitiously sends the personal data of Apple customers to outside companies. Apple said it requires its app developers to have a clear privacy policy. Apple announced in June that it will prohibit that form of tracking in apps for children later this year. "If you are going to use the value of privacy in your marketing, I think you have an obligation to your consumers to tell us what that means," said India McKinney, a legislative analyst for the Electronic Frontier Foundation, a civil liberties organization that advocates for internet privacy and security.
McKinney noted that Apple hasn't signed on to privacy legislation that other companies, such as web browser DuckDuckGo, have supported, such as an amendment to the new California law that prevents consumer data collection by default and gives citizens the right to sue tech companies for violations. If Apple were to throw its weight behind strong privacy protections even at the state level, it would help counter pressure from other large tech companies to water down the legislation, she said. "That would make headlines. That would be really useful," she said. In many cases, though, Apple finds itself aligned with the companies it criticizes in seeking to ward off state legislative proposals, often through lobbying organizations in which they share membership, such as TechNet and CompTIA. While Apple sat on the sidelines, other tech companies, such as Amazon, Google and Facebook, have been actively opposing laws in states including California, Illinois and Washington that would protect consumers and pushing for amendments that would roll back some of the provisions in the California Consumer Protection Act, the landmark state law, according to lawmakers in those states. Silicon Valley companies such as Facebook opted not to stand in the way of that law passing, according to people involved in getting the legislation passed, only after calculating that the alternative - putting privacy legislation in front of voters as a ballot measure - was less attractive. Facebook, Google and Amazon declined to comment. (Amazon CEO Jeff Bezos owns The Washington Post.) Apple's business model stands in stark contrast to its rivals such as Google and Facebook because its bottom line doesn't depend on collecting user data for the purpose of advertising. 
In recent weeks, Cook and other Apple executives have been making the rounds in Washington to meet with members of Congress and the Federal Trade Commission, touting the company's privacy practices in what many see as an attempt to draw contrast with tech giants such as Facebook, according to people who have attended the meetings. The FTC is expected to play a stepped-up role policing the privacy practices of tech giants if lawmakers pass legislation. On antitrust issues, meanwhile, the Department of Justice now plans to scrutinize Apple for its business practices, the Post previously reported. Cook took his privacy pitch directly to lawmakers in May, on the heels of Apple's unveiling of its new D.C. flagship store, meeting with House Speaker Nancy Pelosi and Republican Sen. Roger Wicker, who leads his chamber's top tech-focused committee. Apple spokesman Sainz said the company has discussed its views on privacy in more than 100 meetings with lawmakers around the country. But Apple hasn't given its explicit stamp of approval to any federal bills that have been introduced, which would set a national privacy standard -- something tech companies, including Apple, say they would prefer over a patchwork of state laws. A collection of Democrats and Republicans in the House and Senate have said they plan to offer a national privacy proposal in the coming months. Warner offered an example of Apple's absence: a bill that would essentially outlaw so-called "dark patterns" that trick users into surrendering their personal information when they sign up for a service. The senator said the bill he introduced in April had garnered early support from Microsoft and Mozilla, a non-profit known for its privacy-focused Firefox internet browser. Not Apple. "We would be the first to say we can do more and constantly challenge ourselves to do so," Sainz said in the statement. "We have offered to help write the legislation and reiterate this offer."
DelBene, who also visited other tech companies during the Democratic lawmakers' tour of Silicon Valley, said she "didn't think of it as a negative" that Cook didn't publicly endorse her legislation because most companies have backed concepts rather than specific bills so far in the privacy debate. Apple's history with lawmakers is complicated. Steve Jobs, Apple's late co-founder, openly disdained lobbying. Under Cook, though, Apple has stepped up its presence in D.C., particularly around privacy and government surveillance. It engaged in a legal battle with the FBI in 2015 when law-enforcement officials sought to force it to crack a password-protected iPhone at the heart of the deadly shooting in San Bernardino, California. Cook has embraced his powerful role as Apple's chief political spokesman. In private meetings spanning from Trump's golf course in Bedminster, New Jersey, to the Oval Office, the Apple chief executive has convinced the president to spare his company's iPhones, iPads and other products from the stiff tariffs that have affected other goods coming from China. Apple also lobbies for issues that would further its interests, such as lowering corporate taxes and reforming the U.S. patent system. Politically, though, Apple lags far behind competitors. Alphabet, Google's parent company, spent $22 million on federal lobbying efforts in 2018, and Facebook spent $13 million. That's compared to Apple's $7 million. And unlike its competitors, Apple doesn't donate directly to political candidates. On privacy legislation, most of the action is taking place outside of Washington. Over the past year, policymakers have considered at least 24 bills targeting data privacy, according to a tally by the National Conference of State Legislatures. Many raced to act in the months after California adopted toughest-in-America rules last year that provide web users with more information about what happens to their data - and more ability to prevent it from being sold. 
Alastair Mactaggart, a real estate developer who championed California's privacy bill, for months couldn't snag sufficient support from tech companies, including Apple. One sticking point was a provision in his original proposal that gave Californians the right to sue companies caught violating their privacy. After several trips to Cupertino to meet with Jane Horvath, a former Google lawyer who now heads Apple's privacy efforts, Apple offered an olive branch: Mactaggart could tell lawmakers that if the bill narrowed citizens' right to sue to cases in which consumer data was leaked due to a company's negligence, Apple would "dislike the bill less," he said. "It allowed me to go to legislators and say the biggest company in the world is willing to live with this." Mactaggart said Apple's stance was instrumental in getting the bill passed, but Apple was far from a privacy champion. "I don't think Apple was super thrilled about it. They weren't going rah-rah-rah." Apple's stance opposing a citizen's right to sue companies for privacy violations is a strike against its pro-privacy reputation, said Neema Singh Guliani, senior legislative counsel for the American Civil Liberties Union. She also knocked Apple for supporting the idea of a single federal law that would override state laws. "Any company that says they are for strong privacy protections should not try to use federal legislation to wipe out the ability of states to put in place higher privacy standards," she said. "They should also be in favor of strong enforcement," which includes the ability to sue, she said. For tech giants, the stakes in the states are high. "The states will influence privacy legislation," said University of California, Berkeley law professor Paul Schwartz, who has studied the interplay between state and federal privacy legislation. California State Sen. 
Hannah-Beth Jackson described a lack of support from Apple after she introduced legislation in February that would have expanded the ability of Californians to sue tech companies for violations of their privacy rights. Instead of directly weighing in on the legislation, Apple deferred to the industry associations it belongs to with Facebook, Amazon and Google, she said, which have been effective in stalling privacy legislation in California. TechNet, like many other business groups, actively opposed the measure. "It was all the different tech associations comprised of all the different companies. Nobody had to take responsibility," Jackson said. In a statement, TechNet said its agenda is a "collaborative effort" from its 80 corporate members. "At both the federal level and in 24 states this year, TechNet has promoted consumer privacy legislation that strikes the appropriate balance of these priorities," said spokeswoman Natalie McLaughlin. "America needs to continue its leadership to protect consumers while not discouraging companies from delivering new and innovative products and services," said Alexi Madon, CompTIA's vice president for state government affairs. In the state of Washington, meanwhile, Rep. Zack Hudgins said he also lacked Apple's help earlier this year in battling large tech companies over provisions in a privacy bill that ultimately failed in part because of industry opposition. "Apple could be doing a lot more than they did in Washington state," he said. "They could have put forth stronger legislation and they could have advocated for some of the legislation that was stronger on artificial intelligence." Hudgins, after speaking directly with Microsoft president Brad Smith and meeting directly with representatives from PayPal, Twitter, Google, IBM and Facebook, said he asked an Apple lobbyist to put him in touch with Apple's Cook. Apple did not make anyone from the company available, he said. 
Apple did step in when Illinois lawmakers proposed legislation that would have criminalized the unauthorized collection of audio by devices such as Amazon's Echo and Apple's HomePod - but only to ensure the wording of the law would not open Apple up to lawsuits, according to Abe Scarr, state director for the Illinois Public Interest Research Group. He said Apple wanted to make sure the language was sufficiently "tight," so that Apple wasn't liable for apps in the iOS ecosystem that might violate the law. The bill, in its final form, was significantly weakened. Apple was "opposed and remained opposed after we made lots of concessions," he said. When it came to that particular privacy bill, Apple wasn't an advocate for consumer privacy protections, he said. "They were an obstacle." Original Paywall Source NON Paywall Source
  9. Don’t look now: why you should be worried about machines reading your emotions Machines can now allegedly identify anger, fear, disgust and sadness. ‘Emotion detection’ has grown from a research project to a $20bn industry Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short. While developing the program, it consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception. But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that resulted were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling. Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face. 
In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices. But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science.
Your face: a $20bn industry
Emotion detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features. Typically, the second step employs a technique called supervised learning, a process by which an algorithm is trained to recognize things it has seen before. The basic idea is that if you show the algorithm thousands and thousands of images of happy faces labeled “happy”, then when it sees a new picture of a happy face, it will again identify it as “happy”. A graduate student, Rana el Kaliouby, was one of the first people to start experimenting with this approach. In 2001, after moving from Egypt to Cambridge University to undertake a PhD in computer science, she found that she was spending more time with her computer than with other people. She figured that if she could teach the computer to recognize and react to her emotional state, her time spent far away from family and friends would be less lonely. Kaliouby dedicated the rest of her doctoral studies to this problem, eventually developing a device that helped children with Asperger syndrome read and respond to facial expressions. She called it the “emotional hearing aid”. 
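The supervised-learning step described above can be sketched in miniature. This is an illustrative toy, not Affectiva's actual pipeline: it assumes faces have already been reduced by computer vision to numeric feature vectors (the brow/lip measurements here are invented), and a simple nearest-centroid rule stands in for a real classifier.

```python
def train(samples):
    """samples: list of (feature_vector, label) pairs. Returns label -> centroid."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # Average each label's feature vectors into a single centroid.
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest to vec (squared Euclidean distance)."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(centroids[label], vec)))

# Hypothetical features: [brow_height, lip_corner_pull], paired with human-applied labels.
training = [
    ([0.2, 0.9], "happy"), ([0.3, 0.8], "happy"),      # raised lip corners
    ([-0.8, -0.5], "angry"), ([-0.7, -0.6], "angry"),  # lowered brows
]
model = train(training)
print(predict(model, [0.25, 0.85]))  # a new smiling face -> happy
```

In a real system the feature vectors would come from a vision model and the classifier would be a neural network trained on millions of labeled images, but the principle is the same: the algorithm can only reproduce the labels humans gave it.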
In 2006, Kaliouby joined the Affective Computing lab at the Massachusetts Institute of Technology, where together with the lab’s director, Rosalind Picard, she continued to improve and refine the technology. Then, in 2009, they co-founded a startup called Affectiva, the first business to market “artificial emotional intelligence”. At first, Affectiva sold their emotion detection technology as a market research product, offering real-time emotional reactions to ads and products. They landed clients such as Mars, Kellogg’s and CBS. Picard left Affectiva in 2013 and became involved in a different biometrics startup, but the business continued to grow, as did the industry around it. Amazon, Microsoft and IBM now advertise “emotion analysis” as one of their facial recognition products, and a number of smaller firms, such as Kairos and Eyeris, have cropped up, offering similar services to Affectiva. Beyond market research, emotion detection technology is now being used to monitor and detect driver impairment, test user experience for video games and to help medical professionals assess the wellbeing of patients. Kaliouby, who has watched emotion detection grow from a research project into a $20bn industry, feels confident that this growth will continue. She predicts a time in the not too distant future when this technology will be ubiquitous and integrated in all of our devices, able to “tap into our visceral, subconscious, moment by moment responses”. A database of 7.5m faces from 87 countries As with most machine learning applications, progress in emotion detection depends on accessing more high-quality data. According to Affectiva’s website, they have the largest emotion data repository in the world, with over 7.5m faces from 87 countries, most of it collected from opt-in recordings of people watching TV or driving their daily commute. 
These videos are sorted through by 35 labelers based in Affectiva’s office in Cairo, who watch the footage and translate facial expressions to corresponding emotions – if they see lowered brows, tight-pressed lips and bulging eyes, for instance, they attach the label “anger”. This labeled data set of human emotions is then used to train Affectiva’s algorithm, which learns how to associate scowling faces with anger, smiling faces with happiness, and so on. This labelling method, which is considered by many in the emotion detection industry to be the gold standard for measuring emotion, is derived from a system called the Emotion Facial Action Coding System (Emfacs) that Paul Ekman and Wallace V Friesen developed during the 1980s. The scientific roots of this system can be traced back to the 1960s, when Ekman and two colleagues hypothesized that there are six universal emotions – anger, disgust, fear, happiness, sadness and surprise – that are hardwired into us and can be detected across all cultures by analyzing muscle movements in the face. To test the hypothesis, they showed diverse population groups around the world photographs of faces, asking them to identify what emotion they saw. They found that despite enormous cultural differences, humans would match the same facial expressions with the same emotions. A face with lowered brows, tight-pressed lips and bulging eyes meant “anger” to a banker in the United States and a semi-nomadic hunter in Papua New Guinea. Over the next two decades, Ekman drew on his findings to develop his method for identifying facial features and mapping them to emotions. The underlying premise was that if a universal emotion was triggered in a person, then an associated facial movement would automatically show up on the face. 
Even if that person tried to mask their emotion, the true, instinctive feeling would “leak through”, and could therefore be perceived by someone who knew what to look for. Throughout the second half of the 20th century, this theory – referred to as the classical theory of emotions – came to dominate the science of emotions. Ekman made his emotion detection method proprietary and began selling it as a training program to the CIA, FBI, Customs and Border Protection and the TSA. The idea of true emotions being readable on the face even seeped into popular culture, forming the basis of the show Lie to Me. And yet, many scientists and psychologists researching the nature of emotion have questioned the classical theory and Ekman’s associated emotion detection methods. In recent years, a particularly powerful and persistent critique has been put forward by Lisa Feldman Barrett, professor of psychology at Northeastern University. Barrett first came across the classical theory as a graduate student. She needed a method to measure emotion objectively and came across Ekman’s methods. On reviewing the literature, she began to worry that the underlying research methodology was flawed – specifically, she thought that by providing people with preselected emotion labels to match to photographs, Ekman had unintentionally “primed” them to give certain answers. She and a group of colleagues tested the hypothesis by re-running Ekman’s tests without providing labels, allowing subjects to freely describe the emotion in the image as they saw it. The correlation between specific facial expressions and specific emotions plummeted. Since then, Barrett has developed her own theory of emotions, which is laid out in her book How Emotions Are Made: the Secret Life of the Brain. 
She argues there are no universal emotions located in the brain that are triggered by external stimuli. Rather, each experience of emotion is constructed out of more basic parts. “They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment,” she writes. “Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement.” Barrett explains that it doesn’t make sense to talk of mapping facial expressions directly on to emotions across all cultures and contexts. While one person might scowl when they’re angry, another might smile politely while plotting their enemy’s downfall. For this reason, assessing emotion is best understood as a dynamic practice that involves automatic cognitive processes, person-to-person interactions, embodied experiences, and cultural competency. “That sounds like a lot of work, and it is,” she says. “Emotions are complicated.” Kaliouby agrees – emotions are complex, which is why she and her team at Affectiva are constantly trying to improve the richness and complexity of their data. As well as using video instead of still images to train their algorithms, they are experimenting with capturing more contextual data, such as voice, gait and tiny changes in the face that take place beyond human perception. She is confident that better data will mean more accurate results. Some studies even claim that machines are already outperforming humans in emotion detection. But according to Barrett, it’s not only about data, but how data is labeled. 
The labelling process that Affectiva and other emotion detection companies use to train algorithms can only identify what Barrett calls “emotional stereotypes”, which are like emojis, symbols that fit a well-known theme of emotion within our culture. According to Meredith Whittaker, co-director of the New York University-based research institute AI Now, building machine learning applications based on Ekman’s outdated science is not just bad practice, it translates to real social harms. “You’re already seeing recruitment companies using these techniques to gauge whether a candidate is a good hire or not. You’re also seeing experimental techniques being proposed in school environments to see whether a student is engaged or bored or angry in class,” she says. “This information could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school, and if the analysis isn’t extremely accurate, that’s a concrete material harm.” Kaliouby says that she is aware of the ways that emotion detection can be misused and takes the ethics of her work seriously. “Having a dialogue with the public around how this all works and where to apply and where not to apply it is critical,” she told me. Having worn a headscarf in the past, Kaliouby is also keenly aware of the importance of building diverse data sets. “We make sure that when we train any of these algorithms the training data is diverse,” she says. “We need representation of Caucasians, Asians, darker skin tones, even people wearing the hijab.” This is why Affectiva collects data from 87 countries. Through this process, they have noticed that in different countries, emotional expression seems to take on different intensities and nuances. Brazilians, for example, use broad and long smiles to convey happiness, Kaliouby says, while in Japan there is a smile that does not indicate happiness, but politeness. 
Affectiva have accounted for this cultural nuance by adding another layer of analysis to the system, compiling what Kaliouby calls “ethnically based benchmarks”, or codified assumptions about how an emotion is expressed within different ethnic cultures. But it is precisely this type of algorithmic judgment based on markers like ethnicity that worries Whittaker most about emotion detection technology, suggesting a future of automated physiognomy. In fact, there are already companies offering predictions for how likely someone is to become a terrorist or pedophile, as well as researchers claiming to have algorithms that can detect sexuality from the face alone. Several studies have also recently shown that facial recognition technologies reproduce biases that are more likely to harm minority communities. One published in December last year shows that emotion detection technology assigns more negative emotions to black men’s faces than to white counterparts. When I brought up these concerns with Kaliouby, she told me that Affectiva’s system does have an “ethnicity classifier”, but that they are not using it right now. Instead, they use geography as a proxy for identifying where someone is from. This means they compare Brazilian smiles against Brazilian smiles, and Japanese smiles against Japanese smiles. “What about if there was a Japanese person in Brazil,” I asked. “Wouldn’t the system think they were Brazilian and miss the nuance of the politeness smile?” “At this stage,” she conceded, “the technology is not 100% foolproof.” https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science
  10. Tech giants want medical data and privacy advocates are worried Google is being sued in a potential class-action lawsuit which accuses the tech giant of inappropriately accessing sensitive medical records belonging to hundreds of thousands of hospital patients. The lawsuit, filed on Wednesday, is the latest example of how tech giants’ forays into the trillion-dollar healthcare industry are being met with concerns over privacy. In recent years, companies including Microsoft, Apple, and Google have all pitched their services to medical institutions, promising that they can help organize medical data and use this information to develop new AI diagnostic tools. But these plans are often met with resistance from privacy advocates, who say that this data will give tech giants an unprecedented view into the lives of their customers. The lawsuit in question, which was first reported by The New York Times, centers on a deal made in 2017 between Google and the University of Chicago Medical Center (also a defendant). Google was given access to patient records from the University of Chicago Medicine between 2009 and 2016, which it said it would use to develop new AI tools. In a blog post at the time, Google said it was ready to start “accurately predicting medical events — such as whether patients will be hospitalized, how long they will stay, and whether their health is deteriorating.” The company also noted it would use “de-identified medical records” from Chicago that would be “stripped of any personally identifiable information.” Wednesday’s lawsuit claims that the company failed to do this. “In reality, these records were not sufficiently anonymized and put the patients’ privacy at grave risk,” it says. Crucially, the lawsuit says Google received records of when patients were admitted and discharged from the medical center, a potential violation of the federal health data privacy regulation known as HIPAA. 
This information, says the suit, could be combined with location data collected by Google’s Android mobile OS to reveal individual patients’ identities. The rest of the information covered in the records is detailed. It includes individuals’ height, weight, and vital signs; whether they suffer from diseases like cancer or AIDS; and records of recent medical procedures, including transplants and abortions. The suit says the University of Chicago Medical Center also failed in its duties. “[T]he University did not notify its patients, let alone obtain their express consent, before turning over their confidential medical records to Google for its own commercial gain.” The lawsuit is notably similar to complaints made against Google’s AI subsidiary DeepMind in the UK. There, DeepMind made a deal in 2015 to access patient records from the UK’s National Health Service (NHS), which it used to develop an app for doctors and nurses. An investigation by the UK’s data watchdog found that the deal “failed to comply with data protection law,” and that DeepMind made “inexcusable” errors while handling the data. DeepMind later rewrote its contracts with the NHS and established new independent advisory boards to scrutinize its activities. These boards were shut down when the DeepMind department concerned, DeepMind Health, was absorbed into Google. Google and the University of Chicago Medical Center both deny the accusations laid out in the lawsuit. 
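The suit's central technical claim is that admission and discharge timestamps, although "de-identified", become identifying once joined against a separate location history. That re-identification risk can be illustrated with a toy join; every name, pseudonym, place, and timestamp below is hypothetical and invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical "de-identified" hospital records: pseudonym -> admission time.
hospital = {"patient_0417": datetime(2015, 6, 1, 14, 5)}

# Hypothetical location pings from a separate dataset: (user, place, time).
pings = [
    ("alice@example.com", "medical_center", datetime(2015, 6, 1, 14, 2)),
    ("bob@example.com", "coffee_shop", datetime(2015, 6, 1, 14, 3)),
]

def reidentify(hospital, pings, window=timedelta(minutes=10)):
    """Match each pseudonymous record to users seen at the facility near that time."""
    matches = {}
    for pseudonym, admitted in hospital.items():
        matches[pseudonym] = [
            user for user, place, t in pings
            if place == "medical_center" and abs(t - admitted) <= window
        ]
    return matches

print(reidentify(hospital, pings))  # {'patient_0417': ['alice@example.com']}
```

With enough pings and narrow enough timestamps, a single match can collapse a "de-identified" record back onto a named individual, which is why stripping names alone is often not considered sufficient anonymization.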
A spokesperson for Google told the New York Times: “We believe our health care research could help save lives in the future, which is why we take privacy seriously and follow all relevant rules and regulations in our handling of health.” A spokesperson for the University of Chicago also told the Times that the claims were “without merit.” Lawsuits such as these are often launched with the intent of attracting more plaintiffs. The lawsuit currently focuses on a single complaint by Matt Dinerstein, who was admitted to the University of Chicago Medical Center in 2015. Source
  11. In an embarrassing security incident, the WeTransfer file sharing service announced that for two days it was sending its users’ shared files to the wrong people. As this service is used to transfer what are considered private, and potentially sensitive, files, this could be a big privacy issue for affected users. Starting today, users began to receive emails from WeTransfer [1, 2, 3] stating that on June 16th and 17th, files sent using the WeTransfer service were also delivered to people they were not meant to go to. The email goes on to say that the team doesn't know what happened and that they are working to contain the situation. The full text of this email reads: Dear WeTransfer user, We are writing to let you know about a security incident in which a number of WeTransfer service emails were sent to the wrong people. This happened on June 16th and 17th. Our team has been working tirelessly to correct and contain this situation and find out how it happened. We have learned that a transfer you sent or received was also delivered to some people it was not meant to go to. Our records show those files have been accessed, but almost certainly by the intended recipient. Nevertheless, as a precaution we blocked the link to prevent further downloads. As your email address was also included in the transfer email, please keep an eye out for any suspicious or unusual emails you receive. We understand how important your data is and never take your trust in our service for granted. If you have any questions or concerns, just reply to this email to contact our support team. The WeTransfer Team WeTransfer posted a security notice on their web site saying that some accounts were logged out and had their passwords reset to protect them, and that the company blocked access to the Transfer links involved in the incident. They did not, though, provide any further details on how this happened in the first place. 
"This incident took place on June 16th and 17th, and upon discovery, we immediately took precautionary security measures to protect our users," stated WeTransfer's security notice. "This means that users might have been logged out of their account or asked to reset their password in order to safeguard their account. Additionally, we have blocked Transfer links to ensure the security of our users’ Transfers." If this were simply a programming mistake on WeTransfer's end, it is peculiar that the company had to reset users' passwords or felt the need to protect their accounts. This could indicate a more serious issue, such as a breach of their network. BleepingComputer has contacted WeTransfer about this incident but had not heard back at the time of publication. Thx to John for the tip! Source
  12. Apple pitches itself as the most privacy-minded of the big tech companies, and indeed it goes to great lengths to collect less data than its rivals. Nonetheless, the iPhone maker will still know plenty about you if you use many of its services: In particular, Apple knows your billing information and all the digital and physical goods you have bought from it, including music, movie and app purchases. A different approach: But even for heavy users, Apple uses a number of techniques to either minimize how much data it has or encrypt it so that Apple doesn't have access to iMessages and similar personal communications. Between the lines: Apple is able to do this, in part, because it makes its money from selling hardware, and increasingly from selling services, rather than through advertising. (It does have some advertising business, and it also gets billions of dollars per year from Google in exchange for making Google the default search engine on Apple devices.) But Apple maintains that its commitment to privacy is based not just on its business model but on core values. How it works: In order to collect less data, Apple tries to do as much work on its devices as possible, even if that sometimes means algorithms aren't as well tuned, processing is slower, or the same work gets done on multiple devices. Photos are a case in point. Even if you store your images in Apple's iCloud, Apple does the work of facial identification, grouping, labeling and tagging images on the Mac or iOS device, rather than on the service's own computers. Some of the most sensitive data that your device collects, including your fingerprint or Face ID, stay on the device.
Maps
While Apple does need to do some processing in the cloud, it takes a number of steps to protect privacy beyond its competitors. First, the identification and management of significant locations like your home and work is done on the device. 
And the location information that does get sent up to the cloud is tied to a unique identifier code rather than a specific individual's identity — and that identifier changes over time.
Location information
Beyond Apple's Maps program, other applications, including some from Apple, can make use of location data with user permission. Apple is adding new options with iOS 13, due this coming fall, including the ability for users to share their location with an app just once, rather than giving ongoing access. For apps that make routine background use of location, Apple is also letting users review a map of the locations those apps are seeing, so they can decide if that is information they really want to be sharing.
Email
If you get your mail provided by Apple (via icloud.com, mac.com, etc.), the company will store your email and will scan it for spam, viruses and child pornography, as is common in the industry. Email will also be made available to law enforcement when Apple is presented with a lawful warrant.
iCloud
This is the area where Apple stores potentially the most personal information, although it doesn't make use of it for advertising or other business purposes. iCloud backups can include messages, photos and Apple email, though Apple stresses it won't look at the information and will only hand it over to others if forced by a court to do so.
Messages
Apple messages, the ones with the blue bubble, are encrypted end-to-end, so that only the sender and recipient can see them — not Apple, nor a carrier or any other intermediary. However, if you back up your messages to iCloud, a copy is kept on Apple's servers so that if you lose your device and need to replace it, Apple can restore them. Users can make an encrypted backup using iTunes on a Mac or PC, or keep no backup at all.
Safari
If you use Apple's Safari browser, Apple stores your bookmarks tied to your Apple ID; they're encrypted, but Apple holds a key. 
Beginning in iOS 13 and Catalina, the next MacOS, Safari browsing history will be fully encrypted and Apple will have no access. There's also data that goes to Apple's search partners. Google is the default, but you can also choose Yahoo, Bing or DuckDuckGo. You can also choose whether to send each keystroke as you type in the search bar, enabling autocomplete, or just to send the data when you hit "enter."
Siri
Many Apple devices have a chip that is listening for the "Hey Siri" wake word, but it's only at that point that Apple starts recording audio. Some commands, like what's next on your schedule, can be processed locally, while others do get sent to Apple's servers. Apple doesn't tie this data directly to a person's Apple ID, but uses a unique identifier. A user can reset that identifier, but then Siri will lose the personalization it has gained. Per Apple, "User voice recordings are saved for a six-month period so that the recognition system can utilize them to better understand the user’s voice. After six months, another copy is saved, without its identifier, for use by Apple in improving 
and developing Siri for up to two years."
Apple Pay
Apple doesn't store your payment information or purchase record as part of Apple Pay (it does have history and payment information for your Apple purchases). Apple Pay merchants get a token, not your actual credit card information.
TV and Music
Apple knows the music, shows and apps you purchase. In addition, in order to deliver on the feature of the TV App that allows users to pick up where they left off across multiple shows, multiple apps, and multiple devices, and to make personalized recommendations, Apple does capture and store viewing history. But it says it notifies users, stores as little data as possible for as little time as possible, and allows users to opt out (although this prevents some features from fully working).
What you can do
Users have a number of choices to further minimize what Apple knows, though there are often downsides. You can choose to download an encrypted iCloud backup only to your Mac or PC rather than keep it on Apple's server, but if you lose that device or forget the password for the backup file, Apple won't be able to help recover lost data. You can also download the information Apple has on you at privacy.apple.com. You can delete data stored on your device, such as email, messages, photos, and Safari data like history and bookmarks. You can delete your data stored on iCloud. You can reset your Siri identifier by turning Siri and Dictation off and back on, which effectively restarts your relationship with Siri and Dictation. Source
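The token-based design described for Apple Pay, where merchants receive a stand-in value rather than the real card number, can be sketched in general terms. This is a generic illustration of payment tokenization, not Apple's actual implementation; the class and method names are invented for the example, and a real system would use dedicated secure hardware rather than an in-memory dictionary.

```python
import secrets

class TokenVault:
    """Issues opaque tokens so merchants never see the real card number."""
    def __init__(self):
        self._tokens = {}  # token -> real card number; held only by the payment network

    def tokenize(self, card_number):
        # A random token carries no information about the underlying card.
        token = secrets.token_hex(8)
        self._tokens[token] = card_number
        return token  # this is what the merchant stores and transmits

    def detokenize(self, token):
        # Only the vault (payment network side), not the merchant, can resolve a token.
        return self._tokens.get(token)

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
assert token != "4111 1111 1111 1111"                 # merchant sees only the token
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

The security benefit is that a breach on the merchant's side leaks only tokens, which are useless without access to the vault that maps them back to cards.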
  13. Facebook is doing whatever it takes to curb the repercussions of the Cambridge Analytica scandal. Still, the list of lawsuits against the company hasn’t ended yet. Right now Facebook is defending itself against a class-action lawsuit related to the scandal. According to a report by Law360, the company’s attorney Orin Snyder argued that people who use social media sites “have no expectation of privacy.” “There is no invasion of privacy at all, because there is no privacy,” he said on Wednesday in an attempt to wrap up the case. Snyder argued that Facebook is more of a “town square” where people come and share personal information. He added that you need to guard something closely to have “a reasonable expectation of privacy.” However, he did try to assure the court that Facebook has a focus on privacy for the future. District Judge Vince Chhabria was quick to turn down Snyder’s argument and said it was contrary to Facebook’s stance on privacy. This comes at a time when not just Facebook but all major tech companies are being questioned over privacy. Facebook’s CEO Mark Zuckerberg is often found on stage, talking about how the social network is improving privacy on the platform and how it cares about the safety of its users. In fact, Zuckerberg has even called Facebook an “innovator in privacy.” Source
  14. DUBLIN (Reuters) - The European Court of Justice (ECJ) will hear a landmark privacy case regarding the transfer of EU citizens’ data to the United States in July, after Facebook’s bid to stop its referral was blocked by Ireland’s Supreme Court on Friday. The case, which was initially brought against Facebook by Austrian privacy activist Max Schrems, is the latest to question whether methods used by technology firms to transfer data outside the 28-nation European Union give EU consumers sufficient protection from U.S. surveillance. A ruling by Europe’s top court against the current legal arrangements would have major implications for thousands of companies, which make millions of such transfers every day, including human resources databases, credit card transactions and storage of internet browsing histories. The Irish High Court, which heard Schrems’ case against Facebook last year, said there were well-founded concerns about an absence of an effective remedy in U.S. law compatible with EU legal requirements, which prohibit personal data being transferred to a country with inadequate privacy protections. The High Court ordered the case be referred to the ECJ to assess whether the methods used for data transfers - including standard contractual clauses and the so-called Privacy Shield agreement - were legal. Facebook took the case to the Supreme Court when the High Court refused its request to appeal the referral, but in a unanimous decision on Friday, the Supreme Court said it would not overturn any aspect of the ruling. The High Court’s original five-page referral asks the ECJ if the Privacy Shield - under which companies certify they comply with EU privacy law when transferring data to the United States - does in fact mean that the United States “ensures an adequate level of protection”. 
Facebook came under scrutiny last year after it emerged the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica. More generally, data privacy has been a growing public concern since revelations in 2013 by former U.S. intelligence contractor Edward Snowden of mass U.S. surveillance caused political outrage in Europe. The Privacy Shield was hammered out between the EU and the United States after the ECJ struck down its predecessor, Safe Harbour, on the grounds that it did not afford Europeans’ data enough protection from U.S. surveillance. That case was also brought by Schrems via the Irish courts. “Facebook likely again invested millions to stop this case from progressing. It is good to see that the Supreme Court has not followed,” Schrems said in a statement. Source
  15. There’s yet another effort underway in Washington to establish an enforceable Do Not Track system that would provide a one-click mechanism for people to opt out of persistent web tracking by advertisers and social media platforms. The latest push comes in the form of the Do Not Track Act, a bill unveiled this week by Sen. Josh Hawley (R-Mo.) that emulates the structure of the Do Not Call registry. It would establish a method for consumers to send a signal to online companies that would block them from collecting any information past what is necessary to deliver their services. The bill also would stop companies from building profiles of the people who activate the DNT mechanism or discriminating against them if they use the option. Hawley’s bill makes the Federal Trade Commission the enforcement authority for the system, and any person who violates the measure would be liable for penalties of $50 per user affected by a violation for every day that the violation is ongoing. “Big tech companies collect incredible amounts of deeply personal, private data from people without giving them the option to meaningfully consent. They have gotten incredibly rich by employing creepy surveillance tactics on their users, but too often the extent of this data extraction is only known after a tech company irresponsibly handles the data and leaks it all over the internet,” Hawley said. 
“The American people didn't sign up for this, so I'm introducing this legislation to finally give them control over their personal information online.” In practice, Hawley’s proposed Do Not Track system would involve an app or extension that people could download, which “sends the DNT signal to every website, online service, or online application to which the device connects each time the device connects to such website, service, or application; and permits the user of the connected device to designate websites, services, or applications to which such signal should not be sent, but does not exempt any website, service, or application from receiving such signal if it is not so designated.” The Do Not Track Act is an attempt to rectify what has become an epidemic of online tracking and profile-building. Advertisers, website operators, and social media platforms all are heavily invested in monitoring users’ movements around the web, tracking where and when they interact with other sites and content. That tracking allows sites to build profiles of visitors and their interests and further target ads and other content. Those tracking methods and techniques are completely opaque for most people, and the existing mechanisms for opting out or preventing tracking range from mostly useless to pretty effective, but can also affect people’s browsing experience in a major way. The Do Not Track option that’s built into most browsers today falls on the mostly useless end of the spectrum. Enabling the option sends a signal to sites that the visitor does not want to be tracked, but there is no enforcement for it and site owners have no obligation to respect it. 
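Mechanically, the DNT signal is just an HTTP request header: the browser (or Hawley's proposed app) attaches `DNT: 1` to every request, and the receiving site decides whether to honor it. A minimal sketch in Python of what a server-side check could look like (the handler strings are illustrative, not any real site's code):

```python
# Sketch of server-side handling of the "DNT" request header. The header
# name and "1" value come from the browser Do Not Track convention; the
# response strings below are placeholders for illustration only.

def wants_dnt(headers):
    """Return True if the client sent Do Not Track (DNT: 1)."""
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("dnt") == "1"

def handle_request(headers):
    # Under today's voluntary regime, acting on the signal is the site's
    # choice; Hawley's bill would make ignoring it a punishable violation.
    if wants_dnt(headers):
        return "serve page without tracking scripts"
    return "serve page with tracking scripts"
```

Today, taking the second branch for a DNT user has no consequence; under the bill, it would carry the $50-per-user-per-day penalty described above.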
  Ad blockers and other similar browser extensions can be quite effective, but they don’t prevent all tracking and can also break certain elements on some sites and make others nearly unusable. Hawley’s bill seeks to remedy this situation by establishing the FTC as the enforcement authority and providing monetary penalties for violations. In a hearing of the Senate Judiciary Committee on Monday, Hawley said the bill was necessary to give consumers control over what data they share and whether they’re tracked. “Google and Facebook are doing something different in this market. They’re not using traditional advertising models. They track us every single day. [The bill] just says that a consumer can make a one-time choice to not be tracked. I think we should make it compulsory and give it the force of law and give consumers real choice and force the companies to comply. This puts the ball in the consumer’s court,” Hawley said. Hawley’s bill is similar to draft legislation written earlier this month by staffers at DuckDuckGo, the privacy-focused search engine provider, although the penalties are structured differently. Source
  16. Panic as panic alarms meant to keep granny and little Timmy safe prove a privacy fiasco Simple hack turns them into super secret spying tool A GPS tracker used by elderly people and young kids has a security hole that could allow others to track and secretly record their wearers. The white-label product is manufactured in China and then rebadged and rebranded by a range of companies in the UK, US, Australia and elsewhere including Pebbell 2, OwnFone and SureSafeGo. Over 10,000 people in the UK use the devices. It has an in-built SIM card that is used to pinpoint the location of the user, as well as provide hands-free communications through a speaker and mic. As such it is most commonly used by elderly people in case of a fall and on children whose parents want to be able to know where they are and contact them if necessary. But researchers at Fidus Information Security discovered, and revealed on Friday, that the system has a dangerous flaw: you can send a text message to the SIM and force it to reset. From there, a remote attacker can cause the device to reveal its location, in real time, as well as secretly turn on the microphone. The flaw also enables a third party to turn on and off all the key features of the products such as emergency contacts, fall detection, motion detection and a user-assigned PIN. In other words, a critical safety device can be completely disabled by anybody in the world through a text message. The flaw was introduced in an update to the product: originally the portable fob communicated with a base station that was plugged into a phone line: an approach that provided no clear attack route. But in order to expand its range and usefulness, the SIM card was added so it was not reliant on a base station and would work over the mobile network. The problem arises from the fact that the Chinese manufacturer built in a PIN to the device so it would be locked to the telephone number programmed into the device. 
Which is fine, except the PIN was disabled by default and the PIN is currently not needed to reboot or reset the device. And so it is possible to send a reset command to the device – if you know its SIM telephone number – and restore it to factory settings. At that point, the device is wide open and doesn't need the PIN to make changes to the other functions. Which all amounts to remote access. Random access memory But how would you find out the device's number? Well, the researchers got hold of one such device and its number and then ran a script where they sent messages to thousands of similar numbers to see if they hit anything. They did. "Out of the 2,500 messages we sent, we got responses from 175 devices (7 per cent)," they wrote. "So this is 175 devices being used at the time of writing as an aid for vulnerable people; all identified at a minimal cost. The potential for harm is massive, and in less than a couple of hours, we could interact with 175 of these devices!" The good news is that it is easy to fix: in new devices. You would simply add a unique code to each device and require it be used to reset the device. And you could limit the device to only receive calls or texts from a list of approved contacts. But in the devices already on the market, the fix is not so easy: even by using the default PIN to lock it down, the ability to reset the device is still possible because it doesn't require the PIN to be entered. The researchers say they have contacted the companies that use the device "to help them understand the risks posed by our findings" and say that they are "looking into and are actively recalling devices." But it also notes that some have not responded. In short, poor design and the lack of a decent security audit prior to putting the updated product on the market has turned what is supposed to provide peace of mind into a potential stalking and listening nightmare. Source
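The fix Fidus describes for new devices can be sketched in a few lines: require a per-device PIN (rejecting the factory default) for every command, including reset, and only accept texts from an approved contact list. The `pw,<pin>,<command>#` SMS format and the default PIN below are hypothetical stand-ins, not the tracker's actual protocol:

```python
# Illustrative sketch of SMS command handling done right: the PIN is
# checked for every command (including reset), the factory-default PIN is
# never accepted, and unknown senders are ignored. The command format and
# DEFAULT_PIN value are invented for illustration.

DEFAULT_PIN = "123456"  # hypothetical factory default, always rejected

def handle_sms(sender, text, device_pin, approved_senders):
    """Return the command to execute for an incoming SMS, or None."""
    if sender not in approved_senders:
        return None  # drop texts from numbers not on the contact list
    parts = text.strip().rstrip("#").split(",")
    if len(parts) != 3 or parts[0] != "pw":
        return None  # malformed command
    _, pin, command = parts
    # The vulnerable firmware let "reset" through with no PIN at all;
    # here every command requires the configured, non-default PIN.
    if pin != device_pin or pin == DEFAULT_PIN:
        return None
    return command
```

The key design point is that the reset path goes through the same PIN gate as everything else, so an attacker who merely knows the SIM's phone number gets nothing.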
  17. Psiphon Pro By Psiphon Inc. This is the pro version of Psiphon, a secure VPN application for Android. The application allows you to browse the internet freely: you can reach sites that are blocked due to censorship or other factors, and stay safe while you do so. Psiphon’s working principle is quite simple. As with other VPN applications, it opens a tunnel and you appear to be connecting through other countries. You can choose whether to tunnel only your browser’s traffic or that of all applications. One of the features Psiphon provides is the ability to display your internet traffic. If you want to use the internet freely, Psiphon is for you. Site: https://workupload.com Sharecode: /file/xv2cRNKM
  18. What do tech giants know about you? A new tool shows you just how much We rely on social media and smartphone apps for everything from dating and connecting to online shopping and browsing the web. We constantly give out private data online -- but what exactly do we share with these platforms? From locations and home addresses to private messages and phone numbers - we give away precious private information to online services every day and we do not even realise it. Yet we have agreed that companies can extract our personal data for their own use. How many times have you read a privacy policy from an online software platform right to the end? Nope, me neither. Fortunately online security platform vpnMentor has delved through the privacy policies of some of the most popular applications, creating an interactive tool that shows how these companies track our every move. With over 7.2 billion accounts held across the services studied, including platforms like Google, Facebook, Amazon, and Tinder, how many of us are aware of the finer details of the privacy policies that we automatically accept? Facebook and Instagram seem to be the biggest offenders, seemingly tracking as much as they can about their users. Is it time that we thought twice about what we are accepting within the terms and conditions? Some of the surprising details tracked include: Location: Of the 21 services within the study, 18 tracked your current location at all times when using the app. Some of these, such as Tinder, continue to track this even when the app is not in use. Facebook and Instagram not only track your location but also the location of businesses and people nearby, as well as saving your home address and your most commonly visited locations. Your Messages: Do you think nobody will ever know about your DMs? Think again. 
  Facebook, LinkedIn and Instagram use the information you share on their messaging services to learn more about you, while Twitter and Spotify both openly state they have access to any messages you send on their platforms. Device Information: Many services and apps track more of your device information than appears to be needed. Facebook and Instagram track your battery level, signal strength, nearby Wi-Fi spots and phone masts, and app and file names on your device, amongst others. Google and Amazon keep voice recordings from searches and Alexa, and Apple Music tracks phone calls made and emails sent and received on the devices the service is used on. Even if you do not hold an account with these services, your online moves can still be tracked. Google keeps track of your activity on third-party sites that use Google features like adverts. Facebook partners (8.4 million sites across the web) send both Facebook and Instagram data collected through Facebook Business Tools such as the Like button – regardless of whether or not you have a Facebook account or are logged in. Source
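That last point is worth dwelling on: an embedded Like button or tracking pixel is just a request your browser makes to the platform's servers while loading someone else's page, carrying the page URL along with it. A hedged sketch of the idea (the endpoint and parameter names here are invented for illustration, not Facebook's actual API):

```python
# Sketch of the kind of request an embedded widget or tracking pixel can
# fire when you merely visit a third-party page. Endpoint and parameter
# names are hypothetical; the point is that the visited page's URL travels
# to the platform whether or not the visitor has an account there.
from urllib.parse import urlencode

def build_pixel_url(page_url, user_cookie=None):
    params = {"page": page_url}
    if user_cookie:
        # A logged-in visitor's cookie ties the visit to an account...
        params["uid"] = user_cookie
    else:
        # ...but even without one, the visit itself is still reported.
        params["uid"] = "anonymous"
    return "https://tracker.example/pixel?" + urlencode(params)
```

Because the browser fires this request automatically while rendering the page, the platform learns about the visit whether or not the visitor ever created an account.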
  19.
    Best VPN 2018 February 24, 2018 by Sven Taylor With all the alarming developments in mass surveillance, ISP spying, online censorship, and content restrictions, you are probably looking for the best VPN to stay safe online. But be careful! To find the best VPN, you’ll need to watch out for VPN scams, VPNs that lie about logs (PureVPN), VPNs that leak IP addresses (VPN Unlimited), and even malicious VPNs with hidden tracking libraries (Betternet). So tread carefully my friends. The rankings of the best VPN services below are based on extensive test results to check for IP address leaks, DNS leaks, connection issues, app performance, reliability, speed, and whether the features work correctly. Additionally, I also considered company policies, jurisdiction, logging practices, and the trustworthiness of the provider. Best VPNs 2018 Now we will take a deep dive into the top five best VPN services for 2018, discussing the pros, cons, features, and testing results for every provider. ExpressVPN ExpressVPN is a trusted and highly-recommended service that remains one of the best all-around VPNs on the market. It is based in the British Virgin Islands and offers a great lineup of applications for all devices. Extensive testing for the ExpressVPN review found the apps to be very secure, with exceptional performance throughout the server network. ExpressVPN is also a service that continues to get better. In the past six months they have made significant improvements to their apps to protect users against rare leak scenarios. These efforts culminated in the public release of their leak testing tools, which can be used to test any VPN for flaws/failures (open source and available on GitHub). ExpressVPN’s logging policies (only anonymized stats) were recently put to the test when authorities in Turkey seized one of their servers to obtain user data. But no customer data was affected as authorities were not able to obtain any logs (further explained here). 
This event showed that ExpressVPN remains true to its core mission of protecting customer privacy and data. ExpressVPN is also one of the best VPN providers you will find for streaming. Whether you are using a VPN with Kodi or streaming Netflix with a VPN, ExpressVPN offers applications to support all devices as well as a high-bandwidth network with great performance. Their support is also superb, with 24/7 live chat assistance and a 30 day money-back guarantee. Exclusive discount – ExpressVPN is currently offering an exclusive 49% discount on select plans, which reduces the monthly rate down to $6.67 (the non-discount price is $8.32 per month). ExpressVPN Windows client. + Pros User-friendly and reliable apps Exceptional speeds throughout the server network 30 day money-back guarantee Split tunneling feature (for Mac OS, Windows, and routers) Great for Netflix and other streaming services Strong encryption and leak protection settings 24/7 live chat support – Cons Apps collect anonymized connection stats, but users can opt out (IP addresses not logged) Perfect Privacy After testing out many different VPN services, Perfect Privacy holds the top spot as the best VPN for advanced online anonymity. You may have never heard of Perfect Privacy because they largely ignore marketing and instead focus on providing a high quality, privacy-focused service with very advanced features. Nonetheless, this is a well-respected VPN provider that has earned high praise from the tech community for exposing massive vulnerabilities with other VPNs. Their network is composed entirely of dedicated servers that provide you with fast speeds, great reliability, and plenty of bandwidth at all times (you can see real-time server bandwidth here). They have also passed real-world tests when two of their servers were seized by Dutch authorities last year. However, no customer data was affected due to no logs and all servers operating in RAM disk mode with nothing being saved on the server. 
For features they offer multi-hop VPN chains, advanced firewall configuration options (DNS and IP leak protection), port forwarding, NeuroRouting, Socks5 and Squid proxies, obfuscation features to defeat VPN blocking (Stealth VPN), and a customizable TrackStop feature to block tracking, malware, advertising and social media domains. They also give you an unlimited number of device connections and offer full IPv6 support (giving you both an IPv4 and IPv6 address). While Perfect Privacy offers very advanced features that you won’t find anywhere else, it also comes with a Swiss price tag at €8.95 per month. Additionally, these advanced features may be overkill for some users, especially if you are new to VPNs. Nonetheless, for those seeking the highest levels of online anonymity, security, and overall performance, Perfect Privacy is a solid choice. The Perfect Privacy Windows client, using a four-hop VPN cascade. + Pros Unlimited number of device connections Multi-hop VPN chains, up to 4 servers (self-configurable) NeuroRouting (dynamic, server-side multi-hop that can be used with all devices) Absolutely no logs without any restrictions Dedicated servers operating only in RAM disk mode Full IPv6 support (provides both IPv4 and IPv6 addresses) Customizable firewall/port-forwarding options TrackStop advertisement, tracking, and malware blocker – Cons Higher price Full VPN Manager client not available for Mac OS (but BETA client available, along with other installation options) VPN.ac VPN.ac is Romania-based VPN service with excellent overall quality for a very reasonable price. It was created by a team of network security professionals with a focus on security, strong encryption, and high-quality applications. Their VPN network is composed entirely of dedicated servers with secure, self-hosted DNS. VPN.ac’s server network provides you with great speeds and reliability (see the review for details). 
  Performance is maximized with reliable applications and excellent bandwidth on their network at all times. (You can see their real-time bandwidth stats by selecting VPN Nodes Status at the top of the website.) For a lower-priced VPN service, VPN.ac offers an impressive lineup of features: maximum encryption strength, obfuscation features, double-hop VPN server configurations, and a secure proxy browser extension. All support inquiries are handled internally by the network security professionals who built the infrastructure. The one drawback I found is that VPN.ac maintains connection logs – but all data is erased daily, which they clearly explain on their website. When you consider everything in relation to the price, this is one of the best values you’ll find for a premium VPN service. The VPN.ac Windows client, using a double-hop configuration. + Pros High-security VPN server network (dedicated servers, with self-hosted encrypted DNS) Excellent speeds with lots of available bandwidth Multi-hop (double VPN) server configurations Obfuscation features Advanced encryption (7 available protocols) Low price for a very advanced VPN (good value) – Cons Connection logs (no activity, erased daily) NordVPN NordVPN is a popular no logs VPN service based in Panama. Just like with ExpressVPN, NordVPN is a service that has made significant improvements over the past year. It performed well in testing for the latest update to the NordVPN review. The NordVPN apps have undergone some great updates to further protect users against the possibility of data leaks, while also adding a newly-improved kill switch to block all non-VPN traffic. As another improvement, NordVPN has rolled out a CyberSec feature that blocks advertisements, tracking, and malicious domains. And finally, NordVPN continues to work with Netflix and other streaming services. NordVPN is a great choice for privacy-focused users. 
Aside from the Panama jurisdiction and no-logs policies, NordVPN also provides advanced online anonymity features. These include double-hop server configurations, Tor-over-VPN servers, and also a lineup of obfuscated servers to conceal VPN traffic. NordVPN’s customer service is also top-notch. They provide 24/7 live chat support directly through their website, and all plans come with a 30 day money-back guarantee. NordVPN discount – NordVPN is currently offering a massive 77% discount on select plans, which drops the monthly rate down to only $2.75. (This is significantly cheaper than their standard rate with the annual plan at $5.75 per month.) The NordVPN Windows client. + Pros User-friendly apps 30 day money-back guarantee Multi-hop (double VPN) server configurations 24/7 live chat support No logs Competitive price Ad blocking feature – Cons Variable speeds with some servers VPNArea VPNArea is not the biggest name in the VPN industry, but this Bulgaria-based provider did well in testing for the review. They take customer privacy very seriously, with a strict no logs policy, good privacy features, and Switzerland hosting for business operations. Being based in Bulgaria, they do not fall under data-retention or copyright violation laws, which further protects their users. Aside from being a privacy-focused service, VPNArea also offers numerous servers that are optimized for streaming and torrenting. It continues to work well with Netflix, BBC iPlayer, Amazon Prime, Hulu and others. Torrenting and P2P downloads are allowed without any restrictions. They continue to improve their service with new features, including obfuscation (Stunnel) and ad-blocking through their self-hosted DNS servers. VPNArea is also one of the few VPNs that offer dedicated IP addresses. VPNArea Windows client. 
+ Pros Competitive price No logs Great for streaming and torrenting Ad-blocking DNS servers 6 simultaneous connections (which can be shared with others) Dedicated IP addresses available – Cons Apps are somewhat busy DNS leak protection must be manually configured # # # Considerations for finding the best VPN As we already discussed, choosing the best VPN all boils down to determining which factors you consider the most important. In other words, it’s a very subjective process. Here are seven important factors to consider: Test results – How well does the VPN perform in testing? This includes both performance testing (speed and reliability) and leak testing (IP leaks and DNS leaks). Privacy jurisdiction – Where the VPN is legally based affects customer privacy. Many people avoid VPNs based in the US and other surveillance countries for this reason. For more of a discussion on this topic, see the guide on Five Eyes / 14 Eyes and VPNs. Server network – Three considerations when examining VPN servers are quality, locations, and bandwidth. Some VPNs prioritize server quality, while others prioritize locations. Also, see if you can find a real-time server status page to get an idea of available bandwidth, which will indicate performance. Privacy features – One good privacy feature for more online anonymity is a multi-hop VPN configuration. This will encrypt your traffic across two or more servers, offering more protection against surveillance and targeted monitoring. Operating system – Be sure to check out if the VPN you are considering supports the operating system you will be using. Obfuscation – Obfuscation is a key feature if you are using a VPN in China or anywhere that VPNs may be blocked. Obfuscation is also key for school and work networks that may restrict VPN use. Company policies – It’s always good to read through the company policies to see if it’s a good fit. 
  Privacy policies, refund policies, and torrenting policies are all good to consider before signing up. There are many other factors you may want to consider when selecting the best VPN – but this is a good starting point. Best VPN speed and performance Many people are wondering how to achieve the best VPN speed. Others are wondering which VPNs are fastest. If you are using a good VPN service, you really shouldn’t notice a huge reduction in speed. Of course, the extra work that goes into encrypting/decrypting your traffic across VPN servers will affect speed, but usually it’s not noticeable. To optimize your VPN speed and achieve better performance, here are some factors to consider: Internet service provider interference – Some ISPs interfere with or throttle VPN connections. This seems to be a growing problem. Solution: use a VPN with obfuscation features, which will conceal the VPN traffic as HTTPS. (Perfect Privacy with Stealth VPN, VPN.ac with the XOR protocol, and VyprVPN with the Chameleon protocol are all good options.) High latency – You can generally expect slower speeds when you connect to servers further from your location. Using multi-hop VPN configurations will also increase latency and slow things down. Solution: Use servers closer to your location. If you utilize a multi-hop VPN chain, select nearby servers to minimize latency. Server congestion – Many of the larger VPN services oversell their servers, resulting in congestion, minimal bandwidth, dropped connections, and slow speeds. All of the recommendations on this page performed well in testing and offer adequate bandwidth for good speed. For example, see the Perfect Privacy server page and the VPN.ac server page (VPN Nodes Status at the top). Antivirus or firewall software – Antivirus and third-party firewall software often interfere with and slow down VPNs. Some software will implement their firewall on top of the default (operating system) firewall, which slows everything down. 
  Solution: Disable the third-party firewall, or add an exception/rule for the VPN software. WiFi interference – WiFi interference or problems are unrelated to the VPN, but they can make a difference in overall speed. Solution: It may not be convenient, but using a wired connection will improve speed and security. Processing power – Many devices don’t do well with the extra processing power that is needed for VPN encryption/decryption. This is especially the case with older computers, routers, and mobile devices. Solution: Switch devices or upgrade to a faster processor (faster CPU). Network setup – Some networks do not work well with certain VPN protocols. Solution: The best solution is to experiment with different VPN protocols and/or ports (OpenVPN UDP / TCP / ECC / XOR, IPSec, etc.). Some VPN providers also allow you to modify MTU size, which may improve speed. To achieve the best VPN speed possible, it’s a good idea to experiment with the different variables. Assuming the servers are not overloaded with users, the two main ways to optimize performance are choosing a nearby server with low latency and selecting the right protocol. As mentioned above, the best protocol may vary depending on your unique situation. Best VPN services for streaming Many people who enjoy streaming are turning to VPNs to unlock content that is blocked or restricted and also gain a higher level of privacy. As mentioned above, the best all-around VPN for streaming is ExpressVPN because it always works with Netflix and other streaming services, it offers a huge lineup of apps, and the customer support is great. Another solid choice for streaming is VPNArea. Using a VPN with Netflix will allow you to access all the content you want wherever you are located in the world. Below I am accessing US Netflix from my location in Europe, using an ExpressVPN server in Washington, D.C. 
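The IP-leak portion of the testing referenced throughout these reviews reduces to one comparison: your public IP address before connecting to the VPN versus after. Here is a minimal sketch in Python; api.ipify.org is a real public IP-echo service, but any equivalent endpoint works, and `compare_ips` holds the actual pass/fail logic:

```python
# Minimal sketch of a public-IP leak check: fetch your visible address
# before and after connecting to the VPN and confirm it changed.
from urllib.request import urlopen

def fetch_public_ip(url="https://api.ipify.org"):
    """Return the caller's public IP address as plain text."""
    with urlopen(url, timeout=10) as resp:
        return resp.read().decode().strip()

def compare_ips(before_vpn, after_vpn):
    """Return True if the VPN is masking your address (no obvious leak)."""
    return before_vpn != after_vpn

# Usage (run once before and once after connecting):
#   baseline = fetch_public_ip()
#   ... connect the VPN ...
#   assert compare_ips(baseline, fetch_public_ip())
```

A fuller test would repeat the same comparison for IPv6 addresses and DNS resolver addresses, since those are the channels where services like PureVPN and VPN Unlimited were observed leaking.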
  VPNs to avoid in 2018 There are a lot of different VPNs on the market – so it’s a good idea to consider your choices carefully. The problem, however, is that the internet is full of disinformation concerning VPNs. Large sites are often paid lots of money to promote inferior services. But this is no secret. With that being said, here are some important details that many of the larger websites are hiding from their readers: PureVPN – PureVPN is recommended by some big websites, but there are many red flags. When testing everything for the PureVPN review, I found IPv4 leaks, IPv6 leaks, DNS leaks, broken features (kill switch) and a host of other speed and connection problems. Also concerning, I learned that PureVPN was caught logging user data and handing this information over to US authorities – all despite having a “zero log policy” and promising to protect user privacy. Betternet – Betternet is a Canada-based provider that is known for offering a free VPN service. Unfortunately, when I tested everything for the Betternet review I found the service to leak IP addresses (both IPv4 and IPv6) as well as DNS requests. An academic research paper also listed Betternet as #4 on the Top 10 most malware-infected Android VPN apps, while also embedding tracking libraries in their apps. Scary stuff, considering that VPNs are supposed to provide privacy and security (but that’s why you don’t use a free VPN). Betternet’s Android VPN app tested positive for malware by 13 different antivirus tools (AV-rank 13)! Hotspot Shield – Hotspot Shield is another troublesome VPN service with a well-documented history of problems. Hotspot Shield VPN was directly identified in a research paper for “actively injecting JavaScript codes using iframes for advertising and tracking purposes” with their Android VPN app. The same study also found a large presence of tracking libraries in the VPN app’s source code. 
Hotspot Shield was also in the news for a critical flaw in its VPN app which reveals the user’s identity and location.

HideMyAss – HideMyAss is a UK-based VPN provider with a troubling history. Despite promising to protect user privacy, HideMyAss was found to be turning over customer data to law enforcement agencies around the world.

VPN Unlimited – Extensive testing of the VPN Unlimited apps identified numerous leaks. This screenshot illustrates IPv6 leaks, WebRTC leaks, and DNS leaks with the VPN Unlimited Windows client. Leaks with VPN Unlimited

Of course, there are many examples of problematic VPNs. But you can test your VPN yourself to check for issues that may affect your privacy and security.

If you’re serious about privacy and online freedom…

Start using a VPN whenever you go online. In just the last few years we’ve seen a number of unprecedented developments in corporate and government mass surveillance: Internet service providers in the United States can now legally record online browsing history and sell this data to third parties and advertisers. Mass surveillance also continues unabated… Residents of the United Kingdom are having their online browsing history, calls, and text messages recorded for up to two years (Investigatory Powers Act). This private information is freely available to various government agencies and their global surveillance partners. Australia has also recently implemented mandatory data retention laws, which require the collection of text messages, calls, and internet connection data.

Free speech and free thought are increasingly under attack all around the world. While this has traditionally been a problem in China and the Middle East, it is increasingly common throughout the Western world. Here are a few examples of what we see unfolding: YouTube videos that are blocked or censored. Social media accounts, tweets, posts, and/or entire platforms that are blocked. 
Websites of all different varieties (torrenting, Wikipedia, news, etc.) that are blocked.

What you are seeing is the continual erosion of privacy and online freedom. And it’s happening throughout the world. The point here is not to sound alarmist, but instead to illustrate these trends and how they affect you. The good news is that there are very effective solutions for these problems. You can protect yourself right now with a good VPN and other privacy tools. Stay safe!

Recap – Best VPNs for Privacy, Security, and Speed

SOURCE
  20. Worried about privacy issues in Windows 10? Here's what you can do.

There has been some concern that Windows 10 gathers too much private information from users. Whether you think Microsoft's operating system crosses the privacy line or just want to make sure you protect as much of your personal life as possible, we're here to help. Here's how to protect your privacy in just a few minutes.

Note: This story has been updated for the Windows 10 October 2018 Update, a.k.a. version 1809. If you have an earlier release of Windows 10, some things may be different.

Turn off ad tracking

At the top of many people's privacy concerns is what data is being gathered about them as they browse the web. That information creates a profile of a person's interests that is used by a variety of companies to target ads. Windows 10 does this with the use of an advertising ID. The ID doesn't just gather information about you when you browse the web, but also when you use Windows 10 apps.

You can turn that advertising ID off if you want. Launch the Windows 10 Settings app (by clicking on the Start button at the lower left corner of your screen and then clicking the Settings icon, which looks like a gear) and go to Privacy > General. There you'll see a list of choices under the title "Change privacy options"; the first controls the advertising ID. Move the slider from On to Off. You'll still get ads delivered to you, but they'll be generic ones rather than targeted ones, and your interests won't be tracked.

To make absolutely sure you're not tracked online when you use Windows 10, and to turn off any other ways Microsoft will use information about you to target ads, head to the Ad Settings section of Microsoft’s Privacy Dashboard. Sign into your Microsoft account at the top of the page. 
Then go to the “See ads that interest you” section at the top of the page and move the slider from On to Off. After that, scroll down to the “See personalized ads in your browser” section and move the slider from On to Off. Note that you need to go to every browser you use and make sure the slider for “See personalized ads in your browser” is set to Off.

Turn off location tracking

Wherever you go, Windows 10 knows you're there. Some people don't mind this, because it helps the operating system give you relevant information, such as your local weather, what restaurants are nearby and so on. But if you don't want Windows 10 to track your location, you can tell it to stop. Launch the Settings app and go to Privacy > Location. Underneath “Allow access to location on this device,” click Change and, on the screen that appears, move the slider from On to Off. Doing that turns off all location tracking for every user on the PC.

This doesn't have to be an all-or-nothing affair — you can turn off location tracking on an app-by-app basis. If you want your location to be used only for some apps and not others, make sure location tracking is turned on, then scroll down to the "Choose apps that can use your precise location" section. You'll see a list of every app that can use your location. Move the slider to On for the apps you want to allow to use your location — for example, Weather or News — and to Off for the apps you don't.

When you turn off location tracking, Windows 10 will still keep a record of your past location history. To clear your location history, scroll to "Location History" and click Clear. Even if you use location tracking, you might want to clear your history regularly; there's no automated way to have it cleared. 
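The advertising ID toggle described above also has a registry equivalent, which can be handy for scripted setups. This is a sketch: the AdvertisingInfo key below is an assumption based on common Windows tweaks, not something stated in this article, so verify it on your build (and back up the registry) before importing:

```
Windows Registry Editor Version 5.00

; Assumed per-user key behind the Settings > Privacy > General toggle.
; 0 = advertising ID off (ads become generic rather than targeted).
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo]
"Enabled"=dword:00000000
```

Save it as a .reg file and double-click to import; sign out and back in for apps to pick up the change.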
Turn off Timeline

The Windows 10 April 2018 Update introduced a new feature called Timeline that lets you review and then resume activities and open files you’ve started on your Windows 10 PC, as well as any other Windows PCs and devices you have. So, for example, you’ll be able to switch between a desktop and laptop and from each machine resume activities you’ve started on either PC. In order to do that, Windows needs to gather information about all your activities on each of your machines. If that worries you, it’s easy to turn Timeline off. To do it, go to Settings > Privacy > Activity History and uncheck the boxes next to “Store my activity history on this device” and “Send my activity history to Microsoft.”

At that point, Windows 10 no longer gathers information about your activities. However, it still keeps information about your old activities and shows them in your Timeline on all your PCs. To get rid of that old information, in the “Clear activity history” section of the screen, click “Manage my Microsoft account activity data.” You’ll be sent to Microsoft’s Privacy Dashboard, where you can clear your data. See the section later in this article on how to use the privacy dashboard to do that. Note that you’ll have to take these steps on all of your PCs to turn off the tracking of your activities.

Curb Cortana

Cortana is a very useful digital assistant, but there's a tradeoff in using it: To do its job well, it needs to know things about you such as your home location, place of work and the times and route you take to commute there. If you’re worried it will invade your privacy by doing that, there are a number of things you can do to limit the information Cortana gathers about you. 
Start by opening Cortana settings: place your cursor in the Windows search box and click the Cortana settings icon (it looks like a gear) that appears in the left pane. On the screen that appears, select Permissions & History. Click “Manage the information Cortana can access from this device,” and on the screen that appears, turn off Location so that Cortana won’t track and store your location. Then turn off “Contacts, email, calendar & communication history.” That will stop the assistant from gathering information about your meetings, travel plans, contacts and more. But it will also turn off Cortana’s ability to do things such as remind you about meetings and upcoming flights. Towards the bottom of the screen, turn off “Browsing history” so that Cortana won’t keep your browsing history.

To stop Cortana from gathering other types of information, head to the Cortana’s Notebook section of Microsoft's Privacy Dashboard. You’ll see a variety of personal content, ranging from finance to flights, news, sports, and much more. Click the content you want Cortana to stop tracking, then follow the instructions for deleting it. If you want to delete all the data Cortana has gathered about you, click “Clear Cortana data” on the right side of the screen.

There’s some bad news for those who want to ditch Cortana completely: Back when the Windows 10 Anniversary Update was released in August 2016, the easy On/Off setting for turning it off was taken away. However, that doesn't mean you can't turn Cortana off — it just takes more work. If you use any version of Windows 10 other than the Home version, you can use the Group Policy Editor to turn it off. Launch the Group Policy Editor by typing gpedit.msc into the search box. Then navigate to Computer Configuration > Administrative Templates > Windows Components > Search > Allow Cortana. 
Set it to “Disabled.” If you have the Home version, you'll have to muck around in the Registry. Before doing that, though, create a Restore Point, so that you can recover if anything goes wrong. Once you've done that:

1. Type regedit into the search box and press Enter to run the Registry Editor.

2. Go to the key HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Windows Search. (If the Windows Search key doesn't appear in the Registry Editor, go to HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows. Right-click the key and select New > Key. It will be given a name such as New Key #1. Right-click it, select Rename, and then type Windows Search into the box.)

3. Create the DWORD value AllowCortana by right-clicking Windows Search and selecting New > DWORD (32-bit) Value. Type AllowCortana in the Name field.

4. Double-click the AllowCortana value. Type 0 in the Value data box.

5. Click OK. You'll have to sign out of your Windows account and sign back in again (or restart Windows) to make the setting take effect.

Ditch a Microsoft account for a local account

When you use your Microsoft account to log into Windows 10, you’re able to sync your settings with all Windows devices. So, for example, when you make changes to your settings on a desktop PC, those changes will also be made on your laptop the next time you log in. But maybe you don’t want Microsoft to store that information about you. And maybe you want to cut your ties as much as possible to anything Microsoft stores about you. If that’s the case, your best bet is to stop using your Microsoft account and instead use a local account. It’s simple to do. Go to Settings > Accounts and select “Sign in with a local account instead.” A wizard launches. Follow its instructions to create and use a local account. Keep in mind that when you do this, you won’t be able to use Microsoft’s OneDrive storage or download and install for-pay apps from the Windows Store. 
You can, however, download and install free apps from the Windows Store.

Change your app permissions

Windows apps have the potential to invade your privacy — they can have access to your camera, microphone, location, pictures and videos. But you can decide, in a very granular way, what kind of access each app can have. To do this, go to Settings > Apps. Below “Apps & features” you’ll see a list of your installed apps. Click the app whose permissions you want to control, then click Advanced options and set the app's permissions by toggling them either on or off (the screenshot shows permissions being set for the Fitbit app). Note, though, that very few apps have an “Advanced options” link. And of those that do, not all let you customize your app permissions.

However, there’s another way to change app permissions. To do it, go to Settings > Privacy and look under the “App permissions” section on the left-hand side of the page. You’ll see a list of all of Windows’ hardware, capabilities and features that apps can access if they’re given permission — location, camera, microphone, notifications, account info, contacts and so on. Click any of the listed items — for example, Microphone. At the top of the page that appears, you can turn off access to the microphone for all apps. Below that you’ll see a listing of all the apps with access to the microphone, where you can control access on an app-by-app basis. Any app with access has a slider that is On. To stop any app from having access, move the slider to Off.

Control and delete diagnostic data

As you use Windows 10, data is gathered about your hardware and what you do when you use Windows. Microsoft says that it collects this data as a way to continually improve Windows and to offer you customized advice on how to best use Windows. That makes plenty of people uncomfortable. If you’re one of them, you can to a certain extent control what kind of diagnostic data is gathered about you. 
To do it, head to Settings > Privacy > Diagnostics & Feedback. In the Diagnostic data section, you can choose between two levels of diagnostic data to be gathered. Note that there’s no way to stop Microsoft from gathering diagnostic data entirely. Here are your two choices:

Basic: This sends information to Microsoft “about your device, its settings and capabilities, and whether it is performing properly.” If you’re worried about your privacy, this is the setting to choose.

Full: This sends the whole nine yards to Microsoft: “all Basic diagnostic data, along with info about the websites you browse and how you use apps and features, plus additional info about device health, device usage, and enhanced error reporting.” If you’re worried about your privacy, don’t make this choice.

Next, scroll down to the “Tailored experiences” section and move the slider to Off. This won’t affect the data Microsoft gathers, but it will turn off targeted ads and tips that are based on that information. So while it won’t enhance your privacy, you’ll at least cut down on the annoyance factor.

Now scroll a bit further down and in the “Delete diagnostic data” section, click Delete. That will delete all the diagnostic data Microsoft has gathered about you. However, after you delete it, Microsoft will start gathering the data again.

Finally, on this screen, consider scrolling up to the “Improve inking & typing recognition” section and moving the slider to Off. That will stop Windows 10 from sending to Microsoft the words you input using the keyboard and inking.

One final note about diagnostic data. You may have heard about a tool Microsoft has been hyping, called the Diagnostic Data Viewer, which you can download from the Microsoft Store. Microsoft claims it lets you see exactly what kind of diagnostic data Microsoft gathers about you. Don’t believe it. It’s something only a programmer could love — or understand. 
You won’t be able to use it to clearly see the diagnostic data Microsoft collects. Instead, you’ll scroll or search through incomprehensible headings such as “TelClientSynthetic.PdcNetworkActivation_4” and “Microsoft.Windows.App.Browser.IEFrameProcessAttached” with no explanation of what they mean. Click any heading, and you’ll find even more incomprehensible data.

Use Microsoft’s Privacy Dashboard

Microsoft has built an excellent, little-known web tool called the Privacy Dashboard that lets you track and delete a lot of information Microsoft gathers about you. To get to it, go to https://account.microsoft.com/privacy/. As covered earlier in this story, here you can turn off ad targeting and limit the data gathered in Cortana’s Notebook. You can also view and delete your browsing history, search history, location activity, voice activity, media activity, LinkedIn activity, and a lot more. (Note that for your browsing and search history, it only tracks your activity when you use Microsoft Edge or Internet Explorer. It doesn’t track data when you use other browsers, like Chrome or Firefox. And it only tracks your location history when you’re using Microsoft devices, not those that use iOS or Android.)

Using it is a breeze. Simply head to the information you want to view and clear, then click the “View and Clear…” button. You’ll see all your activity in that category, and be able to delete individual instances (such as a single web search, for example), or all of it at once.

Get granular in the Settings app

All this shouldn't take that long and will do a great deal to protect your privacy. However, if you want to dig even deeper into privacy protections, there's something else you can do. Launch the Settings app and click Privacy. 
On the left-hand side of the screen, you'll see the various areas where you can get even more granular about privacy — for example, in the Windows permissions section you can change your global privacy options for things such as speech recognition and inking. And here’s where you’ll get access to all app permissions, as outlined earlier in this article. These steps can take you a long way towards making sure that Windows 10 doesn't cross the line into gathering data you'd prefer remain private. This article was originally published in January 2016 and most recently updated in April 2019. Source: How to protect your privacy in Windows 10 (Computerworld - Preston Gralla)
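A footnote to the guide above: the Home-edition registry method for disabling Cortana (steps 1–5 in the Curb Cortana section) can be packaged as a .reg file and imported in one step instead of editing by hand. This sketch uses the exact key and value named in the article:

```
Windows Registry Editor Version 5.00

; 0 = Cortana disabled (same effect as creating AllowCortana manually in regedit)
[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows\Windows Search]
"AllowCortana"=dword:00000000
```

Save it as disable-cortana.reg and double-click to import, then sign out and back in (or restart). As the article advises, create a Restore Point before touching the Registry.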
  21. Wilson Drake

    Happy Safer Internet Day 2019

    This year let's all raise our hands to make the Internet a safer place on Safer Internet Day
  22. Privacy: Several States Consider New Laws

After California Takes Bold Action, Other States Ponder Privacy Protection Measures

Several U.S. states, including Oregon, North Carolina, Virginia and Washington, are considering new legislation to shore up consumer data privacy laws in the wake of California passing strict privacy requirements last year.

The European Union's General Data Protection Regulation, which has been enforced since last May, is inspiring renewed efforts worldwide, including at the federal and state levels in the United States, to boost privacy protections. Democrats in Congress have once again introduced national breach notification and privacy legislation, but many previous efforts to pass similar measures have failed (see: Democratic Senators Introduce Security Legislation). Meanwhile, federal regulators are considering changes in HIPAA aimed at reducing "regulatory burdens," including ways to improve secure data sharing for patient care coordination, by, for example, easing certain privacy requirements (see: HHS Seeks Feedback on Potential HIPAA Changes).

State Proposals

Rather than wait for Congress or federal regulators to take action, more states are considering a variety of measures designed to strengthen consumer data protections. For example, Oregon is considering a bill that would prohibit the sale of de-identified protected health information without first obtaining a signed authorization from an individual. The measure also would provide patients the right to be paid for authorizing the de-identification of their PHI for sale to third parties, such as for research and other uses.

In North Carolina, pending legislation would strengthen ID theft/fraud protections. Under the proposal, ransomware attacks would be considered a security breach, and a breached entity would need to notify the state attorney general's office within 30 days. 
In Virginia, a bill proposes new requirements for businesses related to disposal of certain consumer records. It also features new requirements for manufacturers pertaining to the design and maintenance of devices that connect to the internet. A business would be required to "take all reasonable steps to dispose of, or arrange for the disposal of, consumer records." But that provision would not apply to HIPAA covered entities and business associates, because HIPAA has its own disposal requirements.

And Washington is considering a bill that would require companies that collect personal data to be transparent about the type of data being collected, whether consumer data is sold to data brokers, and, upon request from a consumer, to delete the consumer's personal data without undue delay. These provisions are very similar to requirements in the EU's GDPR.

GDPR as Inspiration

"The European Union recently updated its privacy law through the passage and implementation of the General Data Protection Regulation, affording its residents the strongest privacy protections in the world," the Washington bill notes. "Washington residents deserve to enjoy the same level of robust privacy safeguards."

California's new law enacted last year also requires businesses to disclose the purpose for collecting or selling the information, as well as the identity of the third-party organizations receiving the data. Consumers can also request data be deleted and initiate civil action if they believe that an organization has failed to protect their personal data (see: California's New Privacy Law: It's Almost GDPR in the U.S.). 
"The California Consumer Privacy Act was passed last year and compliance is required next year, but 2019 is when California's attorney general compliance guidance is expected, and legislative fixes may be needed," says privacy attorney Adam Greene of the law firm Davis Wright Tremaine.

"Each of the 50 states now has its own breach notification laws, with nearly one-half adopting data security and/or data disposal requirements to protect consumers' personally identifiable information from unauthorized disclosure," says privacy attorney David Holtzman, vice president of compliance at security consultancy CynergisTek. "While most states are not taking a sectorial approach to the type of PII that must be protected, New York, Ohio and South Carolina have adopted cybersecurity requirements that target industries that include health plans and insurers," he adds. "A theme seen in state legislation to update breach notification laws in recent years is to set shorter notification periods. Some argue that this would give consumers more time to take action to protect themselves against the threat of financial fraud or identity theft by notifying major credit reporting agencies."

Under Pressure

Privacy attorney Kirk Nahra of the law firm Wiley Rein notes: "The states continue to examine the possibilities for increasing privacy and data security protections, both in currently regulated areas and in situations where federal law is not directly applicable through a specific law or regulation."

Could all the various state activity put more pressure on Congress to adopt national privacy legislation? "We may find that there is a sufficient number of these new proposals that there will be an additional push to implement a federal law that applies a common standard - although that is still a long way away," Nahra says. "And one of the critical elements of the debate will be how to handle these state laws." 
Nahra expects other states, "including some traditional red states," will introduce privacy legislation.

A Downside?

New state privacy laws can potentially have adverse effects, Nahra contends. For example, the Oregon proposal tightening up permitted uses of de-identified PHI "might seem appealing at first blush but actually would primarily have negative impacts," he claims. The Oregon proposal, he argues, "would reduce any of the useful research, public health and other benefits that are provided by de-identified information today, and would at the same time create privacy and security risks for individuals by forcing companies to retain a link between the de-identified data and an identifiable individual. So, we see potential risks from some of these proposals, particularly where they move through a more chaotic and sometimes less thoughtful state legislative debate."

Greene says the Oregon legislation would be difficult to implement. "For example, de-identified data may be created for multiple purposes, some of which might require authorization under the law," he notes. "Identifying what is the true purpose may be challenging. Also, it is not clear whether aggregate data, which is no longer at a person-by-person level, qualifies as de-identified data that may be subject to the law."

Source
  23. Psiphon Pro

By Psiphon Inc.

This is the pro version of Psiphon, a secure VPN application for Android. The application allows you to navigate freely on the internet: you will be able to reach sites that are blocked due to censorship or other factors, and you will stay safe while doing so. Psiphon's work structure is quite simple. As with other VPN applications, a tunnel opens and you appear to be connecting through other countries. You can choose whether to use the application only in your browser or in all applications. One of the features Psiphon provides is the ability to display your internet traffic. If you want to use the internet freely, Psiphon is for you.

Site: https://workupload.com
Sharecode: /file/twX2cTvJ

Site: https://file.bz
Sharecode: /UcD2R2rfb2/Psiphon_Pro_214_apk
  24. The VPN industry has exploded over the past few years. Fuelled by a greater awareness of online security, a desire to watch geo-restricted content, and, yes, piracy, more people are hiding their online identities than ever. But did you know that many VPN providers are owned by the same few companies?

A report from The Best VPN, shared exclusively with TNW, looks at five companies in particular — Avast, AnchorFree, StackPath, Gaditek and Kape Technologies. It found that over the past few years, these companies have acquired a total of 19 smaller players in the VPN space, including HideMyAss and CyberGhost VPN.

AnchorFree

The company with the most brands under its belt is AnchorFree. That's not surprising, since it's the only firm on our list founded primarily to serve the VPN market. While the other companies on the list own well-known and established VPN products, they also have a lot of other interests, particularly when it comes to information security services and products. The Best VPN was able to draw links between AnchorFree and seven smaller VPN brands: Hotspot Shield, Betternet, TouchVPN, VPN in Touch, Hexatech, VPN 360, and JustVPN. The report notes that AnchorFree isn't consistently transparent when it comes to telling consumers what brands it owns. While some products carry the AnchorFree logo clearly (like Hotspot Shield), others require you to dig deep into the site's terms and conditions to find out who owns what.

StackPath

The next company on the list is StackPath. The Best VPN describes it as a "huge cyber-security company," and that's accurate. The firm has raised over $180 million, with revenues of more than $157 million in 2017. Driving this success is a Batman's utility belt's worth of sub-brands and products. These include several VPN brands (like IPVanish, StrongVPN, and Encrypt.me), as well as CDN, cloud computing, and information security products. 
StackPath also provides the infrastructure required to launch a VPN service to other brands, thanks to its WLVPN service. This powers Pornhub's VPN offering (predictably called VPNHub), as well as Namecheap VPN.

Avast

Avast is a Czech cybersecurity firm best known for its free antivirus software. Over the years, the company has quietly carved itself out a respectable position within the competitive VPN market. It owns four brands: HideMyAss, Avast Secureline VPN, AVG Secure VPN, and Zen VPN. It's interesting to note that Avast got its hands on two of these products — namely HideMyAss and AVG Secure VPN — through its $1.3 billion acquisition of AVG Software in 2016.

Kape and Gaditek

With only two VPN brands apiece, Kape and Gaditek are the smallest companies on this list, but they couldn't be any more different. Kape is primarily an investment vehicle focusing on the tech sector, and is listed on the London Stock Exchange. Gaditek, on the other hand, is a sprightly Pakistani startup based in the bustling city of Karachi. The jewel in Kape's crown is Romania's CyberGhost VPN, which it acquired for €9.2 million (roughly $9.7 million) in March 2017. The following year, it bought another top-tier VPN provider, ZenMate, which claims more than 40 million users. Gaditek, meanwhile, focuses on the budget end of the market. It owns PureVPN and Ivacy, both of which offer ultra-affordable plans.

Does this matter?

There's nothing wrong, or even especially inappropriate, about a larger player acquiring smaller rivals. Just look at Google, a company that has acquired more than 200 companies over its 20-year life. Acquisitions are the heart and soul of the technology business. But that doesn't explain why the VPN market is so fragmented, with hardly any brands absorbed into their larger owners. Liviu Arsene, senior e-threat analyst at Bitdefender, suggests that this merely reinforces the sense of privacy that's vital for the success of a VPN product. 
Arsene also argued that allowing VPN providers to retain their independence after an acquisition could allow them to remain agile and innovative. "Large VPN providers that operate a single large-scale infrastructure have a harder time integrating new privacy-driven technologies because of compatibility, integration, and deployment issues," he said. "The VPN industry is all about having as many servers around the world as possible, in order to ensure both availability and coverage for their customers. Acquiring smaller VPN companies and allowing them to operate independently makes sense because these infrastructures need to be agile, flexible, dynamic, and constantly integrating new privacy-driven technologies in order to allow for more privacy for their clients," Arsene added.

This argument was echoed by a representative from Hide.me, who also suggested that having separate providers allows larger VPN conglomerates to target all segments of the market. "It is more profitable to obtain users through the acquisition of smaller VPN providers than to obtain those users by using standard marketing channels. Once they have that access, they are using a smaller brand for test runs of different business models without direct harm to the mainstream brand. Usually, acquired smaller VPN providers have another price structure than the main brand, and they can cover a more significant chunk of the market," they explained.

Original post: https://thenextweb.com/tech/2019/01/23/youd-be-surprised-how-many-vpns-are-owned-by-the-same-company/

By: MATTHEW HUGHES
  25. New survey finds Americans want online services to collect less of their data

According to a new survey from the Center for Data Innovation, only one in four Americans want online services such as Google and Facebook to collect less of their data if it means they would have to start paying a monthly subscription fee. Other surveys have gauged Americans' ideas about online privacy, but few have asked about such tradeoffs, which is why the organisation decided to test reactions to a series of likely consequences of reducing online data collection.

The survey found that when potential tradeoffs were not part of the question, approximately 80 per cent of Americans agreed they would like Google, Facebook and other online services to collect less of their data. However, support waned once respondents considered these tradeoffs. Initial agreement dropped by six per cent when respondents were asked whether they would like online services to collect less data even if it meant seeing ads that are less useful. Support dropped by 27 per cent when they considered whether they would like less data collection even if it meant seeing more ads than before.

Collecting user data

The largest drop in support, 53 per cent, arose when respondents were asked whether they would like online services to collect less data if it meant they had to pay a monthly subscription fee; only 27 per cent agreed with reducing data collection in this circumstance.

The Center for Data Innovation's survey also gauged Americans' willingness to have online services collect more data in exchange for various benefits. The survey found that when potential benefits were not part of the question, approximately 74 per cent of Americans were opposed to having online services collect more of their data. This figure decreased by 11 per cent when respondents considered whether they would like online services to collect more data if it meant seeing ads that were more useful.
The largest decrease in opposition (18 per cent) occurred when respondents were asked whether they would like online services to collect more of their data if it meant getting more free apps and services: 16 per cent supported such a tradeoff, 63 per cent were opposed, and the remaining respondents did not take a position on the issue.

Source