Search the Community

Showing results for tags 'surveillance'.



Found 45 results

1. Tech investor John Borthwick doesn’t mince words when it comes to how he perceives smart speakers from the likes of Google and Amazon. The founder of venture capital firm Betaworks and former Time Warner and AOL executive believes the information-gathering performed by such devices is tantamount to surveillance.

Image: John Borthwick

“I would say that there's two or three layers sort of problematic layers with these new smart speakers, smart earphones that are in market now,” Borthwick told Yahoo Finance Editor-in-Chief Andy Serwer during an interview for his series “Influencers.” “And so the first is, from a consumer standpoint, user standpoint, is that these, these devices are being used for what's — it's hard to call it anything but surveillance,” Borthwick said.

The way forward? Some form of regulation that gives users more control over their own data. “I personally believe that you, as a user and as somebody who likes technology, who wants to use technology, that you should have far more rights about your data usage than we have today,” Borthwick said.

Smart assistants face a reckoning

The venture capitalist’s comments follow a string of controversies surrounding smart assistants including Google’s Assistant, Amazon’s Alexa, and Apple’s (AAPL) Siri, in which each company admitted that human workers listen to users’ queries as a means of improving their digital assistants’ voice recognition capabilities.

“They've gone to those devices and they've said, ‘Give us data when people passively act upon the device.’ So in other words, I walk over to that light switch,” Borthwick said. “I turn it off, turn it on, it's now giving data back to the smart speaker.”

It’s important to note that smart speakers from every major company are only activated when you use their appropriate wake word. To activate your Echo speaker, for instance, you need to say “Alexa” followed by your command. The same goes for Google’s Assistant and Apple’s Siri.

The uproar surrounding smart speakers and their assistants began when Bloomberg reported in April that Amazon used a global team of employees and contractors to listen to users’ voice commands to Alexa to improve its speech recognition.

Image: Amazon Echo

That was followed by a similar report by Belgian-based VRT News about Google employees listening to users’ voice commands for Google Assistant. The Guardian then published a third piece about Apple employees listening to users’ Siri commands. Facebook was also pulled into the controversy when Bloomberg reported it had employees listen to users’ voice commands made through its Messenger app.

Google and Apple have since apologized, with Google halting the practice, and Apple announcing that it will automatically opt users out of voice sample collection. Users instead will have to opt in if they want to provide voice samples to improve Siri’s voice recognition. Amazon, for its part, has allowed users to opt out of having their voice samples shared with employees, while Facebook said it has paused the practice.

Image: Google Home Mini smart speaker

The use of voice samples has helped improve the voice recognition features of digital assistants, ensuring they are better able to understand what users say and the context in which they say it. At issue is whether users were aware that employees of these companies were listening. There’s also the matter of accidental activations, which can result in employees hearing conversations or snippets of conversations they were never meant to hear.
As for how such issues can be dealt with in the future, Borthwick points to some form of tech regulation. Though he doesn’t offer specifics, the VC says that users need to be able to understand how their data is being used, and be able to take control of it. “I think generally, it's about giving the users a lot more power over the decisions that are being made. I think that's one piece of it,” he said. Source
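The wake-word mechanism the article above describes is, mechanically, a small always-on classifier whose detections gate what leaves the device; its false positives are exactly the "accidental activations" that put private conversations in reviewers' ears. Here is a toy sketch of that gating logic, in which the scoring function and threshold are invented stand-ins rather than any vendor's actual detector:

```python
# Toy sketch of wake-word gating: an on-device detector scores each audio
# frame, and only audio captured after a detection is sent to the cloud.
# A detector false positive is an "accidental activation": speech the user
# never meant to share gets uploaded. The scorer below is a stand-in,
# not any vendor's real model.
from typing import Iterator, List

WAKE_THRESHOLD = 0.85   # assumed confidence threshold (illustrative)
COMMAND_FRAMES = 5      # frames of follow-on audio recorded per activation

def score_wake_word(frame: str) -> float:
    """Stand-in detector: pretend frames containing 'alexa' score high."""
    return 0.95 if "alexa" in frame.lower() else 0.10

def device_loop(frames: Iterator[str]) -> List[List[str]]:
    """Gate uploads on detections; everything else stays on the device."""
    uploads: List[List[str]] = []
    for frame in frames:
        if score_wake_word(frame) >= WAKE_THRESHOLD:
            # Record a bounded window of whatever is said next.
            uploads.append([f for _, f in zip(range(COMMAND_FRAMES), frames)])
    return uploads

if __name__ == "__main__":
    audio = iter(["tv noise", "alexa", "turn on", "the lights",
                  "thanks", "bye", "now", "private chat"])
    # Only the frames after the detection "leave" the device:
    print(device_loop(audio))
```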
2. As millions of security cameras become equipped with “video analytics” and other AI-infused technologies that allow computers not only to record but to “understand” the objects they’re capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, “The Dawn of Robot Surveillance.” As the technology becomes more advanced, camera use is shifting from simply capturing and storing video “just in case” to actively evaluating it with real-time analytics for surveillance. While cameras remain mostly under decentralized ownership and control, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact this could have. The report also listed specific features that could enable intrusive surveillance, along with recommendations to curtail potential abuse. Source
3. Spotify pursues emotional surveillance for global profit

Music is emotional, and so our listening often signals something deeply personal and private. Today, this means music streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy. When a user chooses, for example, a “private listening” session on Spotify, the effect is to make them feel that it’s a one-way relation between person and machine. Of course, that personalization process is Spotify’s way of selling users on its product. But, as it turns out, in a move that should not surprise anyone at this point, Spotify has been selling access to that listening data to multinational corporations.

Where other platforms might need to invest more to piece together emotional user profiles, Spotify streamlines the process by providing boxes that users click on to indicate their moods: Happy Hits, Mood Booster, Rage Beats, Life Sucks. All of these are examples of what can now be found on Spotify’s Browse page under the “mood” category, which currently contains eighty-five playlists. If you need a lift in the morning, there’s Wake Up Happy, A Perfect Day, or Ready for the Day. If you’re feeling down, there’s Feeling Down, Sad Vibe, Down in the Dumps, Drifting Apart, Sad Beats, Sad Indie, and Devastating. If you’re grieving, there’s even Coping with Loss, with the tagline: “When someone you love becomes a memory, find solace in these songs.”

Over the years, streaming services have pushed a narrative about these mood playlists, suggesting, through aggressive marketing, that the rise of listening by way of moods and activities was a service to listeners and artists alike—a way to help users navigate infinite choice, to find their way through a vast library of forty million songs. It’s a powerful arm of the industry-crafted mythology of the so-called streaming revolution: platforms celebrating this grand recontextualization of music into mood playlists as an engine of discovery. Spotify is currently running a campaign centered on moods—the company’s Twitter tagline is currently “Music for every mood”—complete with its own influencer campaign.

But a more careful look into Spotify’s history shows that the decision to define audiences by their moods was part of a strategic push to grow Spotify’s advertising business in the years leading up to its IPO—and today, Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.

Streaming Intelligence Surveillance

In 2015, Spotify began selling advertisers on the idea of marketing to moods, moments, and activities instead of genres. This was one year after Spotify acquired the “music intelligence” firm Echo Nest. Together they began looking at the 1.5 billion user-generated playlists at Spotify’s disposal. Studying these playlists allowed Spotify to more deeply analyze the contexts in which listening was happening on its platform.
And so, right around the time that Spotify realized it had 400,000 user-generated barbecue playlists, Brian Benedik, then Spotify’s North American Vice President of Advertising and Partnerships, noted in an Ad Age interview that the company would focus on moods as a way to grow its advertising mechanism: “This is not something that’s just randomly thrown out there,” Benedik said. “It’s a strategic evolution of the Spotify ads business.” As of May 1, 2015, advertisers would be able to target ads to users of the free ad-supported service based on activities and moods: “Mood categories like happy, chill, and sad will let a brand like Coca-Cola play on its ‘Open Happiness’ campaign when people are listening to mood-boosting music,” the Ad Age article explained.

Four years later, Spotify is the world’s biggest streaming subscription service, with 207 million users in seventy-nine different countries. And as Spotify has grown, its advertising machine has exploded. Of those 207 million users, it claims 96 million are subscribers, meaning that 111 million users rely on the ad-supported version. Spotify’s top marketing execs have expressed that the company’s ambition is “absolutely” to be the third-largest player in the digital ad market behind Google and Facebook. In turn, since 2015, Spotify’s strategic use of mood and emotion-based targeting has only become more entrenched in its business model.

“At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.” Another brand-facing pitch proclaims: “The most exciting part? This new research is starting to reveal the streaming generation’s offline behaviors through their streaming habits.”

Today, Spotify Ad Studio, a self-service portal automating the ad-purchase process, promises access to “rich and textured datasets,” allowing brands to instantly target their ads by mood and activity categories like “Chill,” “Commute,” “Dinner,” “Focus/Study,” “Girls Night Out,” and more. And across the Spotify for Brands website are a number of “studies” and “insights reports” regarding research that Spotify has undertaken about streaming habits: “You are what you stream,” they reiterate over and over.

In a 2017 package titled Understanding People Through Music—Millennial Edition, Spotify (with help from “youth marketing and millennial research firm” Ypulse) set out to help advertisers better target millennial users by mood, emotion, and activity specifically. Spotify explains that “unlike generations past, millennials aren’t loyal to any specific music genre.” They conflate this with a greater reluctance toward labels and binaries, pointing out the rising number of individuals who identify as gender fluid and the growing demographic of millennials who do not have traditional jobs—and chalk these up to consumer preferences. “This throws a wrench in marketers’ neat audience segmentations,” Spotify commiserates.
For the study, they also gathered six hundred in-depth “day in a life” interviews recorded as “behavioral diaries.” All participants were surveilled by demographics, platform usage, playlist behavior, feature usage, and music tastes—and in the United States (where privacy is taken less seriously), Spotify and Ypulse were able to pair Spotify’s own streaming data with additional third-party information on “broader interests, lifestyle, and shopping behaviors.” The result is an interactive hub on the Spotify for Brands website detailing seven distinct “key audio streaming moments for marketers to tap into,” including Working, Chilling, Cooking, Chores, Gaming, Workout, Partying, and Driving. Spotify also dutifully outlines recommendations for how to use this information to sell shit, alongside success stories from Dunkin’ Donuts, Snickers, Gatorade, Wild Turkey, and BMW.

More startlingly, for each of these “moments” there is an animated trajectory of a typical “emotional journey” claiming to predict the various emotional states users will experience while listening to particular playlists. Listeners who are “working,” for instance, are likely to start out feeling pressured and stressed, before they become more energized and focused and end up feeling fine and accomplished at the end of the playlist queue. If they listen while doing chores, the study claims to know that they start out feeling stressed and lazy, then grow motivated and entertained, and end by feeling similarly good and accomplished.

In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.

A Leviathan of Ads

The potential of music to provide mood-related data useful to marketers has long been studied. In 1990, the Journal of Marketing published an article dubbed “Music, Mood and Marketing” that surveyed some of this history while bemoaning how “despite being a prominent promotional tool, music is not well understood or controlled by marketers.” The text outlines how “marketers are precariously dependent on musicians for their insight into the selection or composition of the ‘right’ music for particular situations.” This view of music as a burdensome means to a marketer’s end is absurd, but it’s also the logic that rules the current era of algorithmic music platforms. Unsurprisingly, this 1990 article aimed to overcome challenges for marketers by figuring out new ways to extract value from music that would be beyond the control of musicians themselves: studying the “behavioral effects” of music with a “special emphasis on music’s emotional expressionism and role as mood influencer” in order to create new forms of power and control.

Today, marketers want mood-related data more than ever, at least in part to fuel automated, personalized ad targeting. In 2016, the world’s largest holding company for advertising and PR agencies, WPP, announced that it had struck a multi-year partnership with Spotify, giving the conglomerate unprecedented access to Spotify’s mood data specifically.
The partnership with WPP, it turns out, was part of Spotify’s plan to ramp up its advertising business in advance of its IPO. WPP is the parent company to several of the world’s largest and oldest advertising, PR, and brand consulting firms, including Ogilvy, Grey Global Group, and at least eighteen others. Across their portfolio, WPP owns companies that work with numerous mega-corporations and household brands, helping shill everything from cars, Coca-Cola, and KFC to booze, banks, and Burger King. Over the decades, these companies have worked on campaigns spanning from Merrill Lynch and Lay’s potato chips to Colgate-Palmolive and Ford. Additionally, WPP properties also include tech-focused companies that claim proficiency in automation- and personalization-driven ad sales, all of which would now benefit from Spotify’s mood data.

The 2016 announcement of WPP and Spotify’s global partnership in “data, insights, creative, technology, innovation, programmatic solution, and new growth markets” speaks for itself:

WPP now has unique listening preferences and behaviors of Spotify’s 100 million users in 60 countries. The multi-year deal provides differentiating value to WPP and its clients by harnessing insights from the connection between music and audiences’ moods and activities. Music attributes such as tempo and energy are proven to be highly relevant in predicting mood, which enables advertisers to understand their audiences in a new emotional dimension.

What’s more, WPP-owned advertising agencies could now access the “Wunderman Zipline™ Data Management Platform” to gain direct access to Spotify users’ “mood, listening and playlist behavior, activity and location.” They’d also potentially make use of “Spotify’s data on connected device usage” while the WPP-owned company GroupM specifically would retain access to “an exclusive infusion of Spotify data” into its own platform made for corporate ad targeting. Per the announcement, WPP companies would also serve as launch partners for new types of advertising and new markets unveiled by Spotify, while procuring “visibility into Spotify’s product roadmap and access to beta testing.”

At the time the partnership was announced, Anas Ghazi, then Managing Director of Global Partnerships at WPP’s Data Alliance, noted that all WPP agencies would be able to “grab these insights. . . . If you think about how music shapes your activity and thoughts, this is a new, unique play for us to find audiences. Mood and moments are the next pieces of understanding audiences.” And Harvey Goldhersz, then CEO of GroupM Data & Analytics, salivated: “The insights we’ll develop from Spotify’s behavioral data will help our clients realize a material marketplace advantage, aiding delivery of ads that are appropriate to the consumer’s mood and the device used.”

Ongoing Synergies

While this deal was announced via the WPP Data Alliance, visiting that particular organization’s website now auto-directs back to the main WPP website, likely a result of corporate restructuring that WPP underwent over the past year. Currently, the only public-facing evidence of the relationship between WPP and Spotify is listed online under the WPP-owned data and insights company Kantar, which WPP describes as “the world’s leading marketing data, insight and consultancy company.” What might Kantar be doing with this user data?
The current splash video deck on its website is useful: it claims to be the first agency to use “facial recognition in advertising testing,” and it professes to be exploring new technologies “from biodata and biometrics and healthcare, to capturing human sentiment and even voting intentions by analyzing social media.” And, finally, it admits to “exploiting big data, artificial intelligence and analytics . . . to predict attitudes and behavior.” When we reached out to see if the relationship between Kantar and Spotify had changed since the initial 2016 announcement, Kantar sent The Baffler this statement:

The 2016 Spotify collaboration was the first chapter of many-a collaboration and has continued to evolve to meet the dynamic needs of our clients and marketplace. Spotify continues to be a valued partner of larger enterprise and Kantar with on-going synergies.

One year after the announcement of the partnership, in 2017, Spotify further confirmed its desire to establish direct relationships with the world’s biggest advertising agencies when it hired two executives from WPP. One was Angela Solk, now Global Head of Agency Partnerships, whose job at Spotify includes teaching WPP and other ad agencies how to best make use of Spotify’s first-party data. (In Solk’s first year at Spotify, she helped create the Smirnoff Equalizer; in a 2018 interview, she reflected on the “beauty” of that branded content campaign and Spotify’s ability to extract listener insight and make it “part of Smirnoff’s DNA.”) Spotify also hired Craig Weingarten as its Head of Industry, Automotive, who now leads Spotify’s Detroit-based auto ad sales team.

According to its own media narrative, Spotify offers data access to brands that competitor platforms do not, and it has gained a reputation for its eagerness to share its first-party data. At advertising conferences and in the ad press, Spotify’s top ad exec Marco Bertozzi has emphasized how Spotify hopes to widely share first-party data, going so far as to confess, “When other walled gardens say no to data questions . . . we say yes.” (Bertozzi was also the mind behind an internal Spotify campaign adorably branded “#LoveAds” to combat growing societal disgust with digital advertising. #LoveAds started as a mantra of the advertising team, but as Bertozzi proudly explained in late 2018, “#LoveAds became a movement within the company.”)

Spotify has spent tremendous energy on its ad team’s proficiency with cross-device advertising options (likely due to the imminent ubiquity of Spotify in the car and the so-called “smart home”), as well as “programmatic advertising,” otherwise understood as targeted advertising sold through an automated process, often in milliseconds—Spotify seeks to be the most dominant seller of such advertising in the audio space. And there’s also the self-serve platform, direct inventory sales, Spotify’s private marketplace (an invite-only inventory for select advertisers), programmatic guaranteed deals (a guaranteed volume of impressions at a fixed price)—the jargon ad-speak lists could go on and on. Trying to keep tabs on Spotify’s advertising products and partnerships is dizzying. But what is clear is that the hype surrounding these partnerships has often focused on the “moods and moments”-related data Spotify offers brands—not to mention the company’s penchant for allowing brands to combine their own data with Spotify’s.
In 2017, Spotify’s Brian Benedik told The Drum that Spotify’s access to listening habits and first-party data is “one of the reasons that some of these big multi-national brands like the Samsungs and the Heinekens and the Microsofts and Procter and Gambles of the world are working with us a lot closer than they ever have . . . they don’t see that or get that from any other platform out there.” And it appears that things will only get darker. Julie Clark, Spotify’s Global Head of Automation Sales, said earlier this year in an interview that its targeting capabilities are growing: “There’s deeper first-party data that’s going to become available as well.”

Mood-Boosterism

Recently, I tried out a mood-related experiment on Spotify. I created a new account and listened only to the “Coping with Loss” playlist on loop for a few days. I paid particular attention to the advertisements that I was served by Spotify. And while I do not know for sure whether listening to the “Coping with Loss” playlist caused me to receive an unusually nostalgic Home Depot ad about how your carpets contain memories, or an ad for a particularly angsty new album called Amidst the Chaos, the extent to which Spotify is matching moods and emotions with advertisements certainly makes it seem possible.

What was clearer: during my time spent listening exclusively to songs about grieving, Spotify was quick to recommend that I brighten my mood. Under the heading “More like Coping With Loss . . .” it recommended playlists themed for Father’s Day and Mother’s Day, and playlists called “Warm Fuzzy Feelings,” “Soundtrack Love Songs,” “90s Love Songs,” “Love Ballads,” and “Acoustic Hits.” Spotify evidently did not want me to sit with my sorrow; it wanted my mood to improve. It wanted me to be happy. This is because Spotify specifically wants to be seen as a mood-boosting platform.

In Spotify for Brands blog posts, the company routinely emphasizes how its own platform distinguishes itself from other streams of digital content, particularly because it gives marketers a chance to reach users through a medium that is widely seen as a “positive enhancer”: a medium they turn to for “music to help them get through the less desirable moments in their day, improve the more positive ones and even discover new things about their personality,” says Spotify.

“We’re quite unique in that we have people’s ears . . . combine that with the psycho-graphic data that we have and that becomes very powerful for brands,” said Jana Jakovljevic in 2015, then Head of Programmatic Solutions; she is now employed by AI ad-tech company Cognitiv, which claims to be “the first neural network technology that unearths patterns of consumer behavior” using “deep learning” to predict and target consumers. The fact that experience at Spotify could prepare someone for such a career shift is worth some reflection. But more interestingly, Jakovljevic added that Spotify was using this data in many ways, including to determine exactly what type of music to recommend, which is important to remember: the data that is used to sell advertisers on the platform is also the data driving recommendations. The platform can recommend music in ways that appease advertisers while promising them that mood-boosting ad space.
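Spotify's internal ad systems are not public, but the mood targeting these pitch decks describe reduces to a simple join: playlists carry mood tags, listeners inherit a mood segment from what they play, and campaigns buy segments. Below is a minimal sketch of that idea; the data structures and matching logic are invented for illustration and are not Spotify's Ad Studio API, though the playlist and campaign names echo the article:

```python
# Illustrative sketch of mood-segment ad matching: playlists are tagged with
# moods, a listener inherits the dominant mood of what they play, and
# campaigns buy specific moods. All mappings here are invented.
from collections import Counter

PLAYLIST_MOODS = {            # platform-curated mood tags (assumed)
    "Mood Booster": "happy",
    "Coping with Loss": "sad",
    "Rage Beats": "angry",
}

CAMPAIGNS = {                 # advertiser-purchased mood targets (invented)
    "open_happiness_soda": {"happy"},
    "carpet_memories_retail": {"sad"},
}

def listener_mood(recent_playlists: list) -> str:
    """Infer a listener's current mood segment from recent playlist choices."""
    moods = Counter(PLAYLIST_MOODS.get(p, "unknown") for p in recent_playlists)
    return moods.most_common(1)[0][0]

def eligible_campaigns(recent_playlists: list) -> list:
    """Return campaigns whose mood targets match the inferred segment."""
    mood = listener_mood(recent_playlists)
    return [name for name, targets in CAMPAIGNS.items() if mood in targets]

print(eligible_campaigns(["Coping with Loss", "Coping with Loss"]))
# ['carpet_memories_retail'] -- the grief listener gets the nostalgia-themed ad
```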
What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use its “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform. In appealing to advertisers, Spotify also celebrates its position as a background experience, and in particular how this benefits advertisers and brands. Jorge Espinel, who was Head of Global Business Development at Spotify for five years, once said in an interview: “We love to be a background experience. You’re competing for consumer attention. Everyone is fighting for the foreground. We have the ability to fight for the background. And really no one is there. You’re doing your email, you’re doing your social network, etcetera.” In other words, it is in advertisers’ best interests that Spotify stays a background experience.

When a platform like Spotify sells advertisers on its mood-boosting, background experience, and then bakes these aims into what it recommends to listeners, a twisted form of behavior manipulation is at play. It’s connected to what Shoshana Zuboff, in The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, calls the “behavioral futures market”—where “many companies are eager to lay their bets on our future behavior.” Indeed, Spotify seeks not just to monitor and mine our mood, but also to manipulate future behavior. “What we’d ultimately like to do is be able to predict people’s behavior through music,” Les Hollander, the Global Head of Audio and Podcast Monetization, said in 2017. “We know that if you’re listening to your chill playlist in the morning, you may be doing yoga, you may be meditating . . . so we’d serve a contextually relevant ad with information and tonality and pace to that particular moment.” Very Zen!

Spotify’s treatment of its mood and emotion data as a form of currency in the greater data marketplace should be considered more generally in the context of the tech industry’s rush to quantify our emotions. There is a burgeoning industry surrounding technology that alleges to mine our emotional states in order to feed AI projects; take, for example, car companies that claim they can use facial recognition to read your mood and keep you safer on the road. Or Facebook’s patents on facial recognition software. Or unnerving technologies like Affectiva, which claim to be developing an industry around “emotion AI” and “affective computing” processes that measure human emotions. It remains to be seen how Spotify could leverage such tech to maintain its reputation as a mood-boosting platform. And yet we should admit that it’s good for business for Spotify to manipulate people’s emotions on the platform toward feelings of chillness, contentment, and happiness.

This has immense consequences for music, of course, but what does it mean for news and politics and culture at large, as the platform is set to play a bigger role in mediating all of the above, especially as its podcasting efforts grow? On the Spotify for Brands blog, the streaming giant explains that its research shows millennials are weary of most social media and news platforms, feeling that these mediums affect them negatively. Spotify is a solution for brands, it explains, because it is a platform where people go to feel good. Of course, in this telling of things, Spotify conveniently ignores why those other forms of media feel so bad.
It’s because they are platforms that prioritize their own product and profit above all else. It’s because they are platforms governed by nothing more than surveillance technology and the mechanisms of advertising. Source
4. DUBLIN (Reuters) - The European Court of Justice (ECJ) will hear a landmark privacy case regarding the transfer of EU citizens’ data to the United States in July, after Facebook’s bid to stop its referral was blocked by Ireland’s Supreme Court on Friday.

The case, which was initially brought against Facebook by Austrian privacy activist Max Schrems, is the latest to question whether methods used by technology firms to transfer data outside the 28-nation European Union give EU consumers sufficient protection from U.S. surveillance. A ruling by Europe’s top court against the current legal arrangements would have major implications for thousands of companies, which make millions of such transfers every day, including human resources databases, credit card transactions and storage of internet browsing histories.

The Irish High Court, which heard Schrems’ case against Facebook last year, said there were well-founded concerns about an absence of an effective remedy in U.S. law compatible with EU legal requirements, which prohibit personal data being transferred to a country with inadequate privacy protections. The High Court ordered the case be referred to the ECJ to assess whether the methods used for data transfers - including standard contractual clauses and the so-called Privacy Shield agreement - were legal.

Facebook took the case to the Supreme Court when the High Court refused its request to appeal the referral, but in a unanimous decision on Friday, the Supreme Court said it would not overturn any aspect of the ruling. The High Court’s original five-page referral asks the ECJ if the Privacy Shield - under which companies certify they comply with EU privacy law when transferring data to the United States - does in fact mean that the United States “ensures an adequate level of protection”.

Facebook came under scrutiny last year after it emerged the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica. More generally, data privacy has been a growing public concern since revelations in 2013 by former U.S. intelligence contractor Edward Snowden of mass U.S. surveillance caused political outrage in Europe.

The Privacy Shield was hammered out between the EU and the United States after the ECJ struck down its predecessor, Safe Harbour, on the grounds that it did not afford Europeans’ data enough protection from U.S. surveillance. That case was also brought by Schrems via the Irish courts. “Facebook likely again invested millions to stop this case from progressing. It is good to see that the Supreme Court has not followed,” Schrems said in a statement. Source
5. T-Mobile, Sprint, and AT&T are selling access to their customers’ location data, and that data is ending up in the hands of bounty hunters and others not authorized to possess it, letting them track most phones in the country.

Nervously, I gave a bounty hunter a phone number. He had offered to geolocate a phone for me, using a shady, overlooked service intended not for the cops, but for private individuals and businesses. Armed with just the number and a few hundred dollars, he said he could find the current location of most phones in the United States. The bounty hunter sent the number to his own contact, who would track the phone. The contact responded with a screenshot of Google Maps, containing a blue circle indicating the phone’s current location, approximate to a few hundred metres. Queens, New York. More specifically, the screenshot showed a location in a particular neighborhood—just a couple of blocks from where the target was. The hunter had found the phone (the target gave their consent to Motherboard to be tracked via their T-Mobile phone).

The bounty hunter did this all without deploying a hacking tool or having any previous knowledge of the phone’s whereabouts. Instead, the tracking tool relies on real-time location data sold to bounty hunters that ultimately originated from the telcos themselves, including T-Mobile, AT&T, and Sprint, a Motherboard investigation has found. These surveillance capabilities are sometimes sold through word-of-mouth networks. Whereas it’s common knowledge that law enforcement agencies can track phones with a warrant to service providers, via IMSI catchers, or until recently via other companies that sell location data, such as one called Securus, at least one company, called Microbilt, is selling phone geolocation services with little oversight to a spread of different private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters, according to sources familiar with the company’s products and company documents obtained by Motherboard. Compounding that already highly questionable business practice, this spying capability is also being resold to others on the black market who are not licensed by the company to use it, including me, seemingly without Microbilt’s knowledge.

Motherboard’s investigation shows just how exposed mobile networks and the data they generate are, leaving them open to surveillance by ordinary citizens, stalkers, and criminals, and comes as media and policy makers are paying more attention than ever to how location and other sensitive data is collected and sold. The investigation also shows that a wide variety of companies can access cell phone location data, and that the information trickles down from cell phone providers to a wide array of smaller players, who don’t necessarily have the correct safeguards in place to protect that data. “People are reselling to the wrong people,” the bail industry source who flagged the company to Motherboard said. Motherboard granted the source and others in this story anonymity to talk more candidly about a controversial surveillance capability.

Your mobile phone is constantly communicating with nearby cell phone towers, so your telecom provider knows where to route calls and texts. From this, telecom companies also work out the phone’s approximate location based on its proximity to those towers.
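That one-paragraph explanation is the technical heart of the story: if the network knows which towers hear a phone and roughly how strongly, even a simple signal-weighted average of tower coordinates yields a position blob a few hundred metres wide, much like the Google Maps circle the contact sent back. A toy sketch under that assumption, with invented tower data:

```python
# Toy illustration of coarse network-side geolocation: a phone is heard by
# several nearby towers, and averaging the tower coordinates weighted by
# signal strength gives a rough position. Real carrier systems are more
# sophisticated; the tower positions and weights below are invented.

def estimate_position(towers):
    """Weighted centroid of (lat, lon, weight) tower observations."""
    total = sum(w for _, _, w in towers)
    lat = sum(la * w for la, _, w in towers) / total
    lon = sum(lo * w for _, lo, w in towers) / total
    return lat, lon

# Hypothetical towers around Queens, NY, with relative signal weights.
observations = [
    (40.742, -73.870, 0.5),
    (40.747, -73.862, 0.3),
    (40.738, -73.858, 0.2),
]

print(estimate_position(observations))
# ~(40.7427, -73.8652): a blob a few hundred metres wide, comparable to
# the blue circle the bounty hunter's contact returned
```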
Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low-level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at the click of a mouse.

There’s a complex supply chain that shares some of American cell phone users’ most sensitive data, with the telcos potentially being unaware of how the data is being used by the eventual end user, or even whose hands it lands in. Financial companies use phone location data to detect fraud; roadside assistance firms use it to locate stuck customers. But AT&T, for example, told Motherboard the use of its customers’ data by bounty hunters goes explicitly against the company’s policies, raising questions about how AT&T allowed the sale for this purpose in the first place. “The allegation here would violate our contract and Privacy Policy,” an AT&T spokesperson told Motherboard in an email.

In the case of the phone we tracked, six different entities had potential access to the phone’s data. T-Mobile shares location data with an aggregator called Zumigo, which shares information with Microbilt. Microbilt shared that data with a customer using its mobile phone tracking product. The bounty hunter then shared this information with a bail industry source, who shared it with Motherboard.

The CTIA, a telecom industry trade group of which AT&T, Sprint, and T-Mobile are members, has official guidelines for the use of so-called “location-based services” that “rely on two fundamental principles: user notice and consent,” the group wrote in those guidelines. Telecom companies and data aggregators that Motherboard spoke to said that they require their clients to get consent from the people they want to track, but it’s clear that this is not always happening. A second source who has tracked the geolocation industry told Motherboard, while talking about the industry generally, “If there is money to be made they will keep selling the data.” “Those third-level companies sell their services. That is where you see the issues with going to shady folks [and] for shady reasons,” the source added. Frederike Kaltheuner, data exploitation programme lead at campaign group Privacy International, told Motherboard in a phone call that “it’s part of a bigger problem; the US has a completely unregulated data ecosystem.”

Microbilt buys access to location data from an aggregator called Zumigo and then sells it to a dizzying number of sectors, including landlords scoping out potential renters, motor vehicle salesmen, and others conducting credit checks. Armed with just a phone number, Microbilt’s “Mobile Device Verify” product can return a target’s full name and address, geolocate a phone in an individual instance, or operate as a continuous tracking service. “You can set up monitoring with control over the weeks, days and even hours that location on a device is checked as well as the start and end dates of monitoring,” a company brochure Motherboard found online reads.
Posing as a potential customer, Motherboard explicitly asked a Microbilt customer support staffer whether the company offered phone geolocation for bail bondsmen. Shortly after, another staffer emailed with a price list—locating a phone can cost as little as $4.95 each if searching for a low number of devices. That price gets even cheaper as the customer buys the capability to track more phones. Getting real-time updates on a phone’s location can cost around $12.95. “Dirt cheap when you think about the data you can get,” the source familiar with the industry added.

It’s bad enough that access to highly sensitive phone geolocation data is already being sold to a wide range of industries and businesses. But there is also an underground market that Motherboard used to geolocate a phone—one where Microbilt customers resell their access at a profit, and with minimal oversight. “Blade Runner, the iconic sci-fi movie, is set in 2019. And here we are: there's an unregulated black market where bounty-hunters can buy information about where we are, in real time, over time, and come after us. You don't need to be a replicant to be scared of the consequences,” Thomas Rid, professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat.

The bail industry source said his middleman used Microbilt to find the phone. This middleman charged $300, a sizeable markup on the usual Microbilt price. The Google Maps screenshot provided to Motherboard of the target phone’s location also included its approximate longitude and latitude coordinates, and a range of how accurate the phone geolocation is: 0.3 miles, or just under 500 metres. It may not necessarily be enough to geolocate someone to a specific building in a populated area, but it can certainly pinpoint a particular borough, city, or neighborhood.

In other contexts, phone geolocation is typically done with the consent of the target, perhaps by sending a text message the user has to deliberately reply to, signalling that they accept their location being tracked. This may be done in the earlier roadside assistance example or when a company monitors its fleet of trucks. But when Motherboard tested the geolocation service, the target phone received no warning it was being tracked.

The bail source who originally alerted Motherboard to Microbilt said that bounty hunters have used phone geolocation services for non-work purposes, such as tracking their girlfriends. Motherboard was unable to identify a specific instance of this happening, but domestic stalkers have repeatedly used technology, such as mobile phone malware, to track spouses. As Motherboard was reporting this story, Microbilt removed documents related to its mobile phone location product from its website.

https://www.documentcloud.org/documents/5676919-Microbilt-Mobile-Device-Verify-2018.html

A Microbilt spokesperson told Motherboard in a statement that the company requires that anyone using its mobile device verification services for fraud prevention first obtain the consent of the consumer. Microbilt also confirmed it found an instance of abuse on its platform—our phone ping. “The request came through a licensed state agency that writes in approximately $100 million in bonds per year and passed all up front credentialing under the pretense that location was being verified to mitigate financial exposure related to a bond loan being considered for the submitted consumer,” Microbilt said in an emailed statement.
In this case, “licensed state agency” refers to a private bail bond company, Motherboard confirmed. “As a result, MicroBilt was unaware that its terms of use were being violated by the rogue individual that submitted the request under false pretenses, does not approve of such use cases, and has a clear policy that such violations will result in loss of access to all MicroBilt services and termination of the requesting party’s end-user agreement,” Microbilt added. “Upon investigating the alleged abuse and learning of the violation of our contract, we terminated the customer’s access to our products and they will not be eligible for reinstatement based on this violation.”

Zumigo confirmed it was the company that provided the phone location to Microbilt and defended its practices. In a statement, Zumigo did not seem to take issue with the practice of providing data that ultimately ended up with licensed bounty hunters, but wrote, “illegal access to data is an unfortunate occurrence across virtually every industry that deals in consumer or employee data, and it is impossible to detect a fraudster, or rogue customer, who requests location data of his or her own mobile devices when the required consent is provided. However, Zumigo takes steps to protect privacy by providing a measure of distance (approx. 0.5-1.0 mile) from an actual address.” Zumigo told Motherboard it has cut Microbilt’s data access.

In Motherboard’s case, the successfully geolocated phone was on T-Mobile. “We take the privacy and security of our customers’ information very seriously and will not tolerate any misuse of our customers’ data,” a T-Mobile spokesperson told Motherboard in an emailed statement. “While T-Mobile does not have a direct relationship with Microbilt, our vendor Zumigo was working with them and has confirmed with us that they have already shut down all transmission of T-Mobile data. T-Mobile has also blocked access to device location data for any request submitted by Zumigo on behalf of Microbilt as an additional precaution.”

Microbilt’s product documentation suggests the phone location service works on all mobile networks; however, the middleman was unable or unwilling to conduct a search for a Verizon device. Verizon did not respond to a request for comment. AT&T told Motherboard it has cut access to Microbilt as the company investigates. “We only permit the sharing of location when a customer gives permission for cases like fraud prevention or emergency roadside assistance, or when required by law,” the AT&T spokesperson said. Sprint told Motherboard in a statement that “protecting our customers’ privacy and security is a top priority, and we are transparent about that in our Privacy Policy [...] Sprint does not have a direct relationship with MicroBilt. If we determine that any of our customers do and have violated the terms of our contract, we will take appropriate action based on those findings.” Sprint would not clarify the contours of its relationship with Microbilt.

These statements sound very familiar. When The New York Times and Senator Ron Wyden published details of Securus last year, the firm that was offering geolocation to low-level law enforcement without a warrant, the telcos said they were taking extra measures to make sure their customers’ data would not be abused again. Verizon announced it was going to limit data access to companies not using it for legitimate purposes. T-Mobile, Sprint, and AT&T followed suit shortly after with similar promises.
After Wyden’s pressure, T-Mobile’s CEO John Legere tweeted in June last year, “I’ve personally evaluated this issue & have pledged that @tmobile will not sell customer location data to shady middlemen.” Months after the telcos said they were going to combat this problem, in the face of an arguably even worse case of abuse and data trading, they are saying much the same thing. Last year, Motherboard reported on a company that previously offered phone geolocation to bounty hunters; here Microbilt is operating even after a wave of outrage from policy makers. In its statement to Motherboard on Monday, T-Mobile said it has nearly finished the process of terminating its agreements with location aggregators.

“It would be bad if this was the first time we learned about it. It’s not. Every major wireless carrier pledged to end this kind of data sharing after I exposed this practice last year. Now it appears these promises were little more than worthless spam in their customers’ inboxes,” Wyden told Motherboard in a statement. Wyden is proposing legislation to safeguard personal data. Due to the ongoing government shutdown, the Federal Communications Commission (FCC) was unable to provide a statement.

“Wireless carriers’ continued sale of location data is a nightmare for national security and the personal safety of anyone with a phone,” Wyden added. “When stalkers, spies, and predators know when a woman is alone, or when a home is empty, or where a White House official stops after work, the possibilities for abuse are endless.” Source
6. from the 'intel-techniques,'-indeed dept

A little opsec goes a long way. The Massachusetts State Police -- one of the most secretive law enforcement agencies in the nation -- gave readers of its Twitter feed a free look at the First Amendment-protected activities it keeps tabs on… by uploading a screenshot showing its browser bookmarks. Alex Press of Jacobin Magazine was one of the Twitter users to catch the inadvertent exposure of MSP operations. If you can't read/see the tweet, it says:

the MA staties just unintentionally tweeted a photo that shows their bookmarks include a whole number of Boston’s left-wing orgs

The tweet was quickly scrubbed by the MSP, but not before other Twitter users had grabbed screenshots. Some of the activist groups bookmarked by the state police include Mass. Action Against Police Brutality, the Coalition to Organize and Mobilize Boston Against Trump, and Resistance Calendar. Here's a closer look at the bookmarks.

The MSP did not deny they keep (browser) tabs on protest organizations. Instead, it attempted to portray this screen of left-leaning bookmarks as some sort of non-partisan, non-cop-centric attempt to keep the community safe by being forewarned and forearmed. Ok. But mainly these groups? The ones against police brutality and the back-the-blue President? Seems a little one-sided for an "of any type and by any group" declaration.

The statement continues in the same defensive vein for a few more sentences, basically reiterating the false conceit that cops don't take sides when it comes to activist groups and the good people of Massachusetts are lucky to have such proactive public servants at their disposal. Whatever. If it wasn't a big deal, the MSP wouldn't have vanished the original tweet into the internet ether.

The screenshot came from a "fusion center" -- one of those DHS partnerships that results in far more rights violations and garbage "see something, say something" reports than "actionable intelligence". Fusion centers are supposed to be focused on terrorism, not on people who don't like police brutality or the current Commander in Chief. What this looks like is probably what it is: police keeping tabs on people they don't like or people who don't like them. That's not really what policing is about and it sure as hell doesn't keep the community any safer. Source
7. By Bruce Schneier

The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" where they claim their needs for surveillance outweigh everyone's needs for security and privacy.

...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution. Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority. The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake: we need to decide, as nations and as society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are too critical to our society and -- now that they can affect the world in a direct physical manner -- affect our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world. This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals.
Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses. Yes, increasing our security will make it harder for us to eavesdrop, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.

Cory Doctorow has a good reaction. Source
8. In the decade after the 9/11 attacks, the New York City Police Department moved to put millions of New Yorkers under constant watch. Warning of terrorism threats, the department created a plan to carpet Manhattan’s downtown streets with thousands of cameras and had, by 2008, centralized its video surveillance operations to a single command center. Two years later, the NYPD announced that the command center, known as the Lower Manhattan Security Coordination Center, had integrated cutting-edge video analytics software into select cameras across the city.

The video analytics software captured stills of individuals caught on closed-circuit TV footage and automatically labeled the images with physical tags, such as clothing color, allowing police to quickly search through hours of video for images of individuals matching a description of interest. At the time, the software was also starting to generate alerts for unattended packages, cars speeding up a street in the wrong direction, or people entering restricted areas.

Over the years, the NYPD has shared only occasional, small updates on the program’s progress. In a 2011 interview with Scientific American, for example, Inspector Salvatore DiPace, then commanding officer of the Lower Manhattan Security Initiative, said the police department was testing whether the software could box out images of people’s faces as they passed by subway cameras and subsequently cull through the images for various unspecified “facial features.”

While facial recognition technology, which measures individual faces at over 16,000 points for fine-grained comparisons with other facial images, has attracted significant legal scrutiny and media attention, this object identification software has largely evaded attention. How exactly this technology came to be developed and which particular features the software was built to catalog have never been revealed publicly by the NYPD.

Now, thanks to confidential corporate documents and interviews with many of the technologists involved in developing the software, The Intercept and the Investigative Fund have learned that IBM began developing this object identification technology using secret access to NYPD camera footage. With access to images of thousands of unknowing New Yorkers offered up by NYPD officials, as early as 2012, IBM was creating new search features that allow other police departments to search camera footage for images of people by hair color, facial hair, and skin tone.

IBM declined to comment on its use of NYPD footage to develop the software. However, in an email response to questions, the NYPD did tell The Intercept that “Video, from time to time, was provided to IBM to ensure that the product they were developing would work in the crowded urban NYC environment and help us protect the City. There is nothing in the NYPD’s agreement with IBM that prohibits sharing data with IBM for system development purposes. Further, all vendors who enter into contractual agreements with the NYPD have the absolute requirement to keep all data furnished by the NYPD confidential during the term of the agreement, after the completion of the agreement, and in the event that the agreement is terminated.”

In an email to The Intercept, the NYPD confirmed that select counterterrorism officials had access to a pre-released version of IBM’s program, which included skin tone search capabilities, as early as the summer of 2012.
NYPD spokesperson Peter Donald said the search characteristics were only used for evaluation purposes and that officers were instructed not to include the skin tone search feature in their assessment. The department eventually decided not to integrate the analytics program into its larger surveillance architecture, and phased out the IBM program in 2016.

After testing out these bodily search features with the NYPD, IBM released some of these capabilities in a 2013 product release. Later versions of IBM’s software retained and expanded these bodily search capabilities. (IBM did not respond to a question about the current availability of its video analytics programs.)

Asked about the secrecy of this collaboration, the NYPD said that “various elected leaders and stakeholders” were briefed on the department’s efforts “to keep this city safe,” adding that sharing camera access with IBM was necessary for the system to work. IBM did not respond to a question about why the company didn’t make this collaboration public.

Donald said IBM gave the department licenses to apply the system to 512 cameras, but said the analytics were tested on “fewer than fifty.” He added that IBM personnel had access to certain cameras for the sole purpose of configuring NYPD’s system, and that the department put safeguards in place to protect the data, including “non-disclosure agreements for each individual accessing the system; non-disclosure agreements for the companies the vendors worked for; and background checks.”

Civil liberties advocates contend that New Yorkers should have been made aware of the potential use of their physical data for a private company’s development of surveillance technology. The revelations come as a city council bill that would require NYPD transparency about surveillance acquisitions continues to languish, due, in part, to outspoken opposition from New York City Mayor Bill de Blasio and the NYPD.

Skin Tone Search Technology, Refined on New Yorkers

IBM’s initial breakthroughs in object recognition technology were envisioned for applications like self-driving cars or image recognition on the internet, said Rick Kjeldsen, a former IBM researcher. But after 9/11, Kjeldsen and several of his colleagues realized their program was well suited for counterterror surveillance. “After 9/11, the funding sources and the customer interest really got driven toward security,” said Kjeldsen, who said he worked on the NYPD program from roughly 2009 through 2013. “Even though that hadn’t been our focus up to that point, that’s where demand was.”

IBM’s first major urban video surveillance project was with the Chicago Police Department and began around 2005, according to Kjeldsen. The department let IBM experiment with the technology in downtown Chicago until 2013, but the collaboration wasn’t seen as a real business partnership. “Chicago was always known as, it’s not a real — these guys aren’t a real customer. This is kind of a development, a collaboration with Chicago,” Kjeldsen said. “Whereas New York, these guys were a customer. And they had expectations accordingly.”

The NYPD acquired IBM’s video analytics software as one part of the Domain Awareness System, a shared project of the police department and Microsoft that centralized a vast web of surveillance sensors in lower and midtown Manhattan — including cameras, license plate readers, and radiation detectors — into a unified dashboard.
IBM entered the picture as a subcontractor to Microsoft subsidiary Vexcel in 2007, as part of a project worth $60.7 million over six years, according to the internal IBM documents.

In New York, the terrorist threat “was an easy selling point,” recalled Jonathan Connell, an IBM researcher who worked on the initial NYPD video analytics installation. “You say, ‘Look what the terrorists did before, they could come back, so you give us some money and we’ll put a camera there.’”

A former NYPD technologist who helped design the Lower Manhattan Security Initiative, asking to speak on background citing fears of professional reprisal, confirmed IBM’s role as a “strategic vendor.” “In our review of video analytics vendors at that time, they were well ahead of everyone else in my personal estimation,” the technologist said.

According to internal IBM planning documents, the NYPD began integrating IBM’s surveillance product in March 2010 for the Lower Manhattan Security Coordination Center, a counterterrorism command center launched by Police Commissioner Ray Kelly in 2008.

In a “60 Minutes” tour of the command center in 2011, Jessica Tisch, then the NYPD’s director of policy and planning for counterterrorism, showed off the software on gleaming widescreen monitors, demonstrating how it could pull up images and video clips of people in red shirts. Tisch did not mention the partnership with IBM.

During Kelly’s tenure as police commissioner, the NYPD quietly worked with IBM as the company tested out its object recognition technology on a select number of NYPD and subway cameras, according to IBM documents. “We really needed to be able to test out the algorithm,” said Kjeldsen, who explained that the software would need to process massive quantities of diverse images in order to learn how to adjust to the differing lighting, shadows, and other environmental factors in its view. “We were almost using the video for both things at that time, taking it to the lab to resolve issues we were having or to experiment with new technology,” Kjeldsen said.

At the time, the department hoped that video analytics would improve analysts’ ability to identify suspicious objects and persons in real time in sensitive areas, according to Conor McCourt, a retired NYPD counterterrorism sergeant who said he used IBM’s program in its initial stages. “Say you have a suspicious bag left in downtown Manhattan, as a person working in the command center,” McCourt said. “It could be that the analytics saw the object sitting there for five minutes, and says, ‘Look, there’s an object sitting there.’” Operators could then rewind the video or look at other cameras nearby, he explained, to get a few possibilities as to who had left the object behind.

Over the years, IBM employees said, they started to become more concerned as they worked with the NYPD to allow the program to identify demographic characteristics. By 2012, according to the internal IBM documents, researchers were testing out the video analytics software on the bodies and faces of New Yorkers, capturing and archiving their physical data as they walked in public or passed through subway turnstiles. With these close-up images, IBM refined its ability to search for people on camera according to a variety of previously undisclosed features, such as age, gender, hair color (called “head color”), the presence of facial hair — and skin tone.
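To make the search capability described above concrete, here is a minimal sketch of how attribute-tag search over video-analytics metadata works in general. It is emphatically not IBM's code; the record fields and tag names are hypothetical, and a real system would query an indexed database rather than a Python list.

# Hypothetical sketch of attribute-tag search over video-analytics metadata.
# Field names and tag vocabulary are invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Detection:
    camera_id: str
    seen_at: datetime
    tags: dict = field(default_factory=dict)  # e.g. {"clothing_color": "red"}

def search(detections, query):
    """Return detections whose tags match every key/value pair in the query."""
    return [d for d in detections
            if all(d.tags.get(k) == v for k, v in query.items())]

log = [
    Detection("cam-014", datetime(2011, 5, 2, 9, 15), {"clothing_color": "red"}),
    Detection("cam-022", datetime(2011, 5, 2, 9, 40), {"clothing_color": "blue"}),
]
print(search(log, {"clothing_color": "red"}))  # only the cam-014 detection

The civil liberties concerns discussed below follow directly from how cheap such a query is: once footage has been tagged, searching every camera for a clothing color, or a skin tone, is a one-line filter.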
The documents reference meetings between NYPD personnel and IBM researchers to review the development of body identification searches conducted at subway turnstile cameras. “We were certainly worried about where the heck this was going,” recalled Kjeldsen. “There were a couple of us that were always talking about this, you know, ‘If this gets better, this could be an issue.’”

According to the NYPD, IBM’s bodily search features were accessed only for evaluation purposes, and only by a handful of counterterrorism personnel. “While tools that featured either racial or skin tone search capabilities were offered to the NYPD, they were explicitly declined by the NYPD,” Donald, the NYPD spokesperson, said. “Where such tools came with a test version of the product, the testers were instructed only to test other features (clothing, eyeglasses, etc.), but not to test or use the skin tone feature. That is not because there would have been anything illegal or even improper about testing or using these tools to search in the area of a crime for an image of a suspect that matched a description given by a victim or a witness. It was specifically to avoid even the suggestion or appearance of any kind of technological racial profiling.”

The NYPD ended its use of IBM’s video analytics program in 2016, Donald said. Donald acknowledged that, at some point in 2016 or early 2017, IBM approached the NYPD with an upgraded version of the video analytics program that could search for people by ethnicity. “The Department explicitly rejected that product,” he said, “based on the inclusion of that new search parameter.” In 2017, IBM released Intelligent Video Analytics 2.0, a product with a body camera surveillance capability that allows users to detect people captured on camera by “ethnicity” tags, such as “Asian,” “Black,” and “White.”

Kjeldsen, the former IBM researcher who helped develop the company’s skin tone analytics with NYPD camera access, said the department’s claim that the NYPD simply tested and rejected the bodily search features was misleading. “We would have not explored it had the NYPD told us, ‘We don’t want to do that,’” he said. “No company is going to spend money where there’s not customer interest.”

Kjeldsen added that the NYPD’s decision to allow IBM access to its cameras was crucial for the development of the skin tone search features, noting that during that period, New York City served as the company’s “primary testing area,” providing the company with considerable environmental diversity for software refinement. “The more different situations you can use to develop your software, the better it’s going to be,” Kjeldsen said. “That obviously pertains to people, skin tones, whatever it is you might be able to classify individuals as, and it also goes for clothing.”

The NYPD’s cooperation with IBM has since served as a selling point for the product at California State University, Northridge. There, campus police chief Anne Glavin said the technology firm IXP helped sell her on IBM’s object identification product by citing the NYPD’s work with the company. “They talked about what it’s done for New York City. IBM was very much behind that, so this was obviously of great interest to us,” Glavin said.

Day-to-Day Policing, Civil Liberties Concerns

The NYPD-IBM video analytics program was initially envisioned as a counterterrorism tool for use in midtown and lower Manhattan, according to Kjeldsen.
However, the program was integrated during its testing phase into dozens of cameras across the city. According to the former NYPD technologist, it could have been integrated into everyday criminal investigations. “All bureaus of the department could make use of it,” said the former technologist, potentially helping detectives investigate everything from sex crimes to fraud cases. Kjeldsen spoke of cameras being placed at building entrances and near parking entrances to monitor for suspicious loiterers and abandoned bags.

Donald, the NYPD spokesperson, said the program’s access was limited to a small number of counterterrorism officials, adding, “We are not aware of any case where video analytics was a factor in an arrest or prosecution.”

Campus police at California State University, Northridge, who adopted IBM’s software, said the bodily search features have been helpful in criminal investigations. Asked whether officers have deployed the software’s ability to filter through footage for suspects’ clothing color, hair color, and skin tone, Captain Scott VanScoy at California State University, Northridge, responded affirmatively, relaying a story about how university detectives were able to use such features to quickly filter through their cameras and find two suspects in a sexual assault case. “We were able to pick up where they were at different locations from earlier that evening and put a story together, so it saves us a ton of time,” VanScoy said. “By the time we did the interviews, we already knew the story and they didn’t know we had known.”

Glavin, the chief of the campus police, added that surveillance cameras using IBM’s software had been placed strategically across the campus to capture potential security threats, such as car robberies or student protests. “So we mapped out some CCTV in that area and a path of travel to our main administration building, which is sometimes where people will walk to make their concerns known and they like to stand outside that building,” Glavin said. “Not that we’re a big protest campus, we’re certainly not a Berkeley, but it made sense to start to build the exterior camera system there.”

Civil liberties advocates say they are alarmed by the NYPD’s secrecy in helping to develop a program with the potential capacity for mass racial profiling. The identification technology IBM built could be easily misused after a major terrorist attack, argued Rachel Levinson-Waldman, senior counsel in the Brennan Center’s Liberty and National Security Program. “Whether or not the perpetrator is Muslim, the presumption is often that he or she is,” she said. “It’s easy to imagine law enforcement jumping to a conclusion about the ethnic and religious identity of a suspect, hastily going to the database of stored videos and combing through it for anyone who meets that physical description, and then calling people in for questioning on that basis.”

IBM did not comment on questions about the potential use of its software for racial profiling.
However, the company did send a comment to The Intercept pointing out that it was “one of the first companies anywhere to adopt a set of principles for trust and transparency for new technologies, including AI systems.” The statement went on to explain that IBM is “making publicly available to other companies a dataset of annotations for more than a million images to help solve one of the biggest issues in facial analysis — the lack of diverse data to train AI systems.”

Few laws clearly govern object recognition or the other forms of artificial intelligence incorporated into video surveillance, according to Clare Garvie, a law fellow at Georgetown Law’s Center on Privacy and Technology. “Any form of real-time location tracking may raise a Fourth Amendment inquiry,” Garvie said, citing a 2012 Supreme Court case, United States v. Jones, that involved police monitoring a car’s path without a warrant and resulted in five justices suggesting that individuals could have a reasonable expectation of privacy in their public movements. In addition, she said, any form of “identity-based surveillance” may compromise people’s right to anonymous public speech and association.

Garvie noted that while facial recognition technology has been heavily criticized for the risk of false matches, that risk is even higher for an analytics system “tracking a person by other characteristics, like the color of their clothing and their height,” characteristics that are not unique to an individual.

The former NYPD technologist acknowledged that video analytics systems can make mistakes, and noted a study in which the software had trouble characterizing people of color: “It’s never 100 percent.” But the program’s identification of potential suspects was, he noted, only the first step in a chain of events that heavily relies on human expertise. “The technology operators hand the data off to the detective,” said the technologist. “You use all your databases to look for potential suspects and you give it to a witness to look at. … This is all about finding a way to shorten the time to catch the bad people.”

Object identification programs could also unfairly drag people into police suspicion just because of generic physical characteristics, according to Jerome Greco, a digital forensics staff attorney at the Legal Aid Society, New York’s largest public defenders organization. “I imagine a scenario where a vague description, like young black male in a hoodie, is fed into the system, and the software’s undisclosed algorithm identifies a person in a video walking a few blocks away from the scene of an incident,” Greco said. “The police find an excuse to stop him, and, after the stop, an officer says the individual matches a description from the earlier incident.” All of a sudden, Greco continued, “a man who was just walking in his own neighborhood” could be charged with a serious crime without him or his attorney ever knowing “that it all stemmed from a secret program which he cannot challenge.”

While the technology could be used for appropriate law enforcement work, Kjeldsen said that what bothered him most about his project was the secrecy he and his colleagues had to maintain. “We certainly couldn’t talk about what cameras we were using, what capabilities we were putting on cameras,” Kjeldsen said.
“They wanted to control public perception and awareness of LMSI” — the Lower Manhattan Security Initiative — “so we always had to be cautious about even that part of it, that we’re involved, and who we were involved with, and what we were doing.” (IBM did not respond to a question about instructing its employees not to speak publicly about its work with the NYPD.) The way the NYPD helped IBM develop this technology without the public’s consent sets a dangerous precedent, Kjeldsen argued. “Are there certain activities that are nobody’s business no matter what?” he asked. “Are there certain places on the boundaries of public spaces that have an expectation of privacy? And then, how do we build tools to enforce that? That’s where we need the conversation. That’s exactly why knowledge of this should become more widely available — so that we can figure that out.” This article was reported in partnership with the Investigative Fund at the Nation Institute. Source
  9. from the result-of-asking-'why-not?'-rather-than-'why?' dept

Reuters has a long, detailed examination of the Chinese surveillance state. China's intrusion into the lives of its citizens has never been minimal, but advances in technology have allowed the government to keep tabs on pretty much every aspect of citizens' lives.

Facial recognition has been deployed at scale, and it's not limited to finding criminals. It's used to identify regular citizens as they go about their daily lives. This is paired with license plate readers and a wealth of information gathered from online activity to provide the government dozens of data points for every citizen that wanders into the path of its cameras. Other biometric information is gathered and analyzed to help the security and law enforcement agencies better pin down exactly who it is they're looking at.

But it goes further than that. The Chinese version of stop-and-frisk involves "patting down" cellphones for illegal content or evidence of illegal activities. China is home to several companies offering phone cracking services and forensic software. It's not only Cellebrite and Grayshift, although these two are best known for selling tech to US law enforcement. Not that phone cracking is really a necessity in China. Most citizens hand over passwords when asked, considering the alternative isn't going to be a detainment while a warrant is sought. The alternative is far more likely to be something like a trip to a modern dungeon for a little conversational beating.

What's notable about this isn't the tech. This tech is everywhere. US law enforcement has access to much of this, minus the full-blown facial recognition and other biometric tracking. (That's on its way, though.) Plate readers, forensic devices, numerous law enforcement databases, social media tracking software… these are all in use already.

Much of what China has deployed is being done in the name of security. That's the same justification for the massive surveillance apparatus erected after the 2001 attacks. The framework for a totalitarian state is already in place. The only thing separating us from China is our Constitutional rights. Whenever you hear a US government official lamenting perps walking on technicalities or encryption making it tough to lock criminals up, keep in mind the alternative is China: a full-blown police state stocked to the teeth with surveillance tech.

Source
  10. Researchers believe a new encryption technique may be key to maintaining a balance between user privacy and government demands.

For governments worldwide, encryption is a thorn in the side of the quest for surveillance, cracking suspects' phones, and monitoring communication. Officials are pressuring technology firms and app developers that provide end-to-end encryption services to give police forces a way to break that encryption. However, the moment you build a backdoor into such services, you create a weak point that threat actors can exploit just as readily as law enforcement and governments -- assuming that tunneling into a handset and monitoring it is even within legal bounds -- undermining the security of encryption as a whole.

As the mass surveillance and data collection activities of the US National Security Agency hit the headlines, faith in governments and their ability to restrain such spying to genuine cases of criminality began to weaken. Now the use of encryption and secure communication channels is ever more popular, technology firms are resisting efforts to implant deliberate weaknesses in encryption protocols, and neither side wants to budge. What can be done? Something has got to give.

Researchers from Boston University believe they may have come up with a solution. On Monday, the team said they have developed a new encryption technique which gives authorities some access to communication, but without providing unlimited access in practice. In other words, a middle ground: a way to break encryption enough to placate law enforcement, but not to the extent that mass surveillance of the general public becomes possible.

Mayank Varia, Research Associate Professor at Boston University and cryptography expert, has developed the new technique, known as cryptographic "crumpling." In a paper documenting the research, lead author Varia says the new cryptographic methods could be used for "exceptional access" to encrypted data for government purposes while keeping user privacy at a broadly reasonable level.

"Our approach places most of the responsibility for achieving exceptional access on the government, rather than on the users or developers of cryptographic tools," the paper notes. "As a result, our constructions are very simple and lightweight, and they can be easily retrofitted onto existing applications and protocols."

The crumpling technique uses two mechanisms: the first is a Diffie-Hellman key exchange over modular arithmetic groups that leads to an "extremely expensive" puzzle which must be solved to break the protocol; the second is a "hash-based proof of work to impose a linear cost on the adversary for each message" to be recovered.

Crumpling requires strong, modern cryptography as a precondition, because it relies on per-message encryption keys and fine-grained key management. The system requires this infrastructure so that a small number of messages can be targeted without full-scale exposure. The team says this condition also means only "passive" decryption attempts are permitted, rather than man-in-the-middle (MiTM) attacks. By introducing cryptographic puzzles into the generation of per-message keys, the keys remain possible to decrypt, but only with vast resources. In addition, each puzzle is chosen independently for each key, which means "the government must expend effort to solve each one."
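As a rough illustration of the second mechanism, the hash-based proof of work, here is a toy sketch in Python. It is not the paper's construction: the key size, the commitment scheme, and the way bits are withheld are simplified assumptions made for the demo, and PUZZLE_BITS is kept far below the paper's cost targets so the example runs in seconds.

import hashlib
import secrets

PUZZLE_BITS = 20  # real proposals discuss ~60-70 bits; 20 keeps this demo fast

def crumpled_key():
    """Return (full_key, hint). The hint withholds PUZZLE_BITS bits of the key
    and includes a hash commitment, so recovering the key costs roughly
    2**PUZZLE_BITS hash evaluations -- a linear price paid per message."""
    key = secrets.randbits(64)
    commitment = hashlib.sha256(key.to_bytes(8, "big")).hexdigest()
    known_prefix = key >> PUZZLE_BITS  # the part the solver is given for free
    return key, (known_prefix, commitment)

def solve(hint):
    """Brute-force the withheld bits; cost scales as 2**PUZZLE_BITS."""
    known_prefix, commitment = hint
    for guess in range(2 ** PUZZLE_BITS):
        candidate = (known_prefix << PUZZLE_BITS) | guess
        if hashlib.sha256(candidate.to_bytes(8, "big")).hexdigest() == commitment:
            return candidate
    return None

key, hint = crumpled_key()
assert solve(hint) == key  # recovery is feasible, but never free

Because every message gets its own independently generated key, solving one puzzle reveals exactly one message; there is no master key whose theft would expose everyone at once.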
"Like a crumple zone in automotive engineering, in an emergency situation the construction should break a little bit in order to protect the integrity of the system as a whole and the safety of its human users," the paper notes. "We design a portion of our puzzles to match Bitcoin's proof of work computation so that we can predict their real-world marginal cost with reasonable confidence." To prevent unauthorized attempts to break encryption an "abrasion puzzle" serves as a gatekeeper which is more expensive to solve than individual key puzzles. While this would not necessarily deter state-sponsored threat actors, it may at least deter individual cyberattackers as the cost would not be worth the result. The new technique would allow governments to recover the plaintext for targeted messages, however, it would also be prohibitively expensive. A key length of 70 bits, for example -- with today's hardware -- would cost millions and force government agencies to choose their targets carefully and the expense would potentially prevent misuse. The research team estimates that the government could recover less than 70 keys per year with a budget of close to $70 million dollars upfront -- one million dollars per message and the full amount set out in the US' expanded federal budget to break encryption. However, there could also be additional costs of $1,000 to $1 million per message, and these kind of figures are difficult to conceal, especially as one message from a suspected criminal in a conversation without contextual data is unlikely to ever be enough to secure conviction. The research team says that crumpling can be adapted for use in common encryption services including PGP, Signal, as well as full-disk and file-based encryption. "We view this work as a catalyst that can inspire both the research community and the public at large to explore this space further," the researchers say. "Whether such a system will ever be (or should ever be) adopted depends less on technology and more on questions for society to answer collectively: whether to entrust the government with the power of targeted access and whether to accept the limitations on law enforcement possible with only targeted access." The research was funded by the National Science Foundation. Source
  11. part 1

(YET ANOTHER) WARNING .... Your online activities are now being tracked and recorded by various government and corporate entities around the world. This information can be used against you at any time and there is no real way to “opt out”.

In the past decade, we have seen the systematic advancement of the surveillance apparatus throughout the world. The United States, United Kingdom, Australia, and Canada have all passed laws allowing, and in some cases forcing, telecom companies to bulk-collect your data:

United States – In March 2017 the US Congress passed legislation that allows internet service providers to collect, store, and sell your private browsing history, app usage data, location information and more – without your consent. This essentially allows Comcast, Verizon, AT&T and other providers to monetize and sell their customers to the highest bidders (usually for targeted advertising).

United Kingdom – In November 2016 the UK Parliament passed the infamous Snoopers Charter (Investigatory Powers Act) which forces internet providers and phone companies to bulk-collect customer data. This includes private browsing history, social media posts, phone calls, text messages, and more. This information is stored for 12 months in a giant database that is accessible to 48 different government agencies. The erosion of free speech is also rapidly underway as various laws allow UK authorities to lock up anyone they deem to be “offensive” (1984 is already here).

Australia – In April 2017 the Australian government passed a massive data retention law that forces telecoms to collect and store text messages, phone calls, location information, and internet connection data for a full two years, with the data being accessible to authorities without a warrant.

Canada, Europe, and other parts of the world have similar laws and policies already in place. What you are witnessing is the rapid expansion of the global surveillance state, whereby corporate and government entities work together to monitor and record everything you do.

What the hell is going on here?

Perhaps you are wondering why all this is happening. There is a simple answer to that question.

Control

Just like we have seen throughout history, government surveillance is simply a tool used for control. This could be for maintaining control of power, controlling a population, or controlling the flow of information in a society. You will notice that the violation of your right to privacy will always be justified by various excuses – from “terrorism” to tax evasion – but never forget, it’s really about control.

Along the same lines, corporate surveillance is also about control. Collecting your data helps private entities control your buying decisions, habits, and desires. The tools for doing this are all around you: apps on your devices, social networks, tracking ads, and many free products which simply bulk-collect your data (when something is free, you are the product). This is why the biggest collectors of private data – Google and Facebook – are also the two businesses that completely dominate the online advertising industry. So to sum this up, advertising today is all about the buying and selling of individuals.

But it gets even worse… Now we have the full-scale cooperation between government and corporate entities to monitor your every move. In other words, governments are now enlisting private corporations to carry out bulk data collection on entire populations.
Your internet service provider is your adversary, working on behalf of the surveillance state. This basic trend is happening in much of the world, but it has been well documented in the United States with the PRISM program.

So why should you care? Everything that’s being collected could be used against you today, or at any time in the future, in ways you may not be able to imagine. In many parts of the world, particularly in the UK, thought crime laws are already in place. If you do something that is deemed to be “offensive”, you could end up rotting away in a jail cell for years. Again, we have seen this tactic used throughout history for locking up dissidents – and it is alive and well in the Western world today. From a commercial standpoint, corporate surveillance is already being used to steal your data and hit you with targeted ads, thereby monetizing your private life.

Reality check

Many talking heads in the media will attempt to confuse you by pretending this is a problem with a certain politician or perhaps a political party. But that’s a bunch of garbage to distract you from the bigger truth. For decades, politicians from all sides (left and right) have worked hard to advance the surveillance agenda around the world. Again, it’s all about control, regardless of which puppet is in office. So contrary to what various groups are saying, you are not going to solve this problem by writing a letter to another politician or signing some online petition. Forget about it.

Instead, you can take concrete steps right now to secure your data and protect your privacy. Restore Privacy is all about giving you the tools and information to do that. If you feel overwhelmed by all this, just relax. The privacy tools you need are easy to use no matter what level of experience you have.

Arguably the most important privacy tool is a good VPN (virtual private network). A VPN will encrypt and anonymize your online activity by creating a secure tunnel between your computer and a VPN server. This makes your data and online activities unreadable to government surveillance, your internet provider, hackers, and other third-party snoopers. A VPN will also let you spoof your location, hide your real IP address, and access blocked content from anywhere in the world. Check out the best VPN guide to get started.

Stay safe!

SOURCE
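For readers who want intuition for what the tunnel actually does, here is a tiny conceptual sketch using Python and the third-party cryptography library. Real VPN protocols such as WireGuard or OpenVPN negotiate keys with a handshake and wrap entire IP packets; the pre-shared key and HTTP string here are simplifications for illustration only.

# Conceptual only: models the encrypt-then-forward idea behind a VPN tunnel.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # a real VPN derives this via a key exchange
tunnel = Fernet(key)

request = b"GET http://example.com/secret-page HTTP/1.1"
wrapped = tunnel.encrypt(request)

print(wrapped)                  # all the ISP sees: opaque bytes to a VPN server
print(tunnel.decrypt(wrapped))  # only the VPN endpoint recovers the request

The point the article is making falls out of the sketch: everything between you and the VPN server is ciphertext, so the ISP can log that you talked to the VPN, but not what you asked for or where the traffic ultimately went.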
  12. MOSCOW - Edward Snowden, who exposed extensive U.S. surveillance programs in 2013, warned this week that Japan may be moving closer to sweeping surveillance of ordinary citizens as the government eyes a legal change to enhance police powers in the name of counterterrorism.

"This is the beginning of a new wave of mass surveillance in Japan," the 33-year-old American said in an exclusive interview with Kyodo News while in exile in Russia, referring to a so-called anti-conspiracy bill that has stirred controversy in and outside Japan as having the potential to undermine civil liberties.

The consequences could be even graver when combined with the use of a wide-reaching online data collection tool called XKEYSCORE, the former contractor for the U.S. National Security Agency said. He also gave credence to the authenticity of new NSA papers exposed through The Intercept, a U.S. online media outlet, earlier this year that showed the agency's surveillance tool has already been shared with Japan.

Edward Snowden: Exclusive interview with Kyodo News

The remarks by the intelligence expert are the latest warning over the Japanese government's push to pass the controversial bill through parliament, which criminalizes the planning and preparatory actions of 277 serious crimes.

In an open letter addressed to Prime Minister Shinzo Abe in mid-May, a U.N. special rapporteur on the right to privacy stated that the bill could lead to undue restrictions of privacy and freedom of expression due to its potentially broad application -- a claim the Japanese government has strongly protested.

Snowden said he agrees with the U.N.-appointed expert Joseph Cannataci, arguing the bill is "not well explained" and raises concerns that the government may have intentions other than its stated goal of cracking down on terrorism and organized crime ahead of the 2020 Tokyo Olympics.

The anti-conspiracy law proposed by the government "focuses on terrorism and everything else that's not related to terrorism -- things like taking plants from the forestry reserve," he said. "And the only real understandable answer (to the government's desire to pass the bill)...is that this is a bill that authorizes the use of surveillance in new ways because now everyone can be a criminal."

Based on his experience of using XKEYSCORE himself, Snowden said authorities could become able to intercept everyone's communications, including people organizing political movements or protests, and put them "in a bucket." The records would be simply "pulled out of the bucket" whenever necessary, and the public would not be able to know whether such activities are done legally or secretly by the government because there are no sufficient legal safeguards in the bill, Snowden said.

Snowden finds the current situation in Japan reminiscent of what he went through in the United States following the terror attacks on Sept. 11, 2001. In passing the Patriot Act, which strengthened the U.S. government's investigative powers in the wake of the attacks, the government said similar things to what the Japanese government is saying now, such as "these powers are not going to be targeted against ordinary citizens" and "we're only interested in finding al-Qaida and terrorists," according to Snowden. But within a few short years of the enactment of the Patriot Act, the U.S.
government was using the law secretly to "collect the phone records of everyone in the United States, and everyone around the world who they could access" through the largest phone companies in the United States, Snowden said, referring to the revelations made in 2013 through top-secret documents he leaked. Even though it sacrifices civil liberties, mass surveillance is not effective, Snowden said. The U.S. government's privacy watchdog concluded in its report in 2014 that the NSA's massive telephone records program showed "minimal value" in safeguarding the nation from terrorism and that it must be ended. On Japan's anti-conspiracy bill, Snowden said it should include strong guarantees of human rights and privacy and ensure that those guarantees are "not enforced through the words of politicians but through the actions of courts." "This means in advance of surveillance, in all cases the government should seek an individualized warrant, and individualized authorization that this surveillance is lawful and appropriate in relationship to the threat that's presented by the police," he said. He also said allowing a government to get into the habit of collecting the communications of everyone through powerful surveillance tools could dangerously change the power relationship between the public and government to something closer to "subject and ruler" instead of partners, which is how it should be in a democracy. Arguably, people in Japan may not make much of what Snowden sees as the rise of new untargeted and indiscriminate mass surveillance, thinking that they have nothing to hide or fear. But he insists that privacy is not about something to "hide" but about "protecting" an open and free society where people can be different and can have their own ideas. Freedom of speech would not mean much if people do not have the space to figure out what they want to say, or share their views with others they trust, to develop them before introducing them into the context of the world, he said. "When you say 'I don't care about privacy, because I've nothing to hide,' that's no different than saying you don't care about freedom of speech, because you've nothing to say," he added. Snowden, who was dressed in a black suit, said toward the end of his more than 100-minute interview at a hotel in Moscow that living in exile is not "a lifestyle that anyone chooses voluntarily." He hopes to return home while continuing active exchanges online with people in various countries. "The beautiful thing about today is that I can be in every corner of the world every night. I speak at U.S. universities every month. It's important to understand that I don't really live in Moscow. I live on the internet," he said. Snowden showed no regrets over taking the risk of becoming a whistleblower and being painted by his home country as a "criminal" or "traitor," facing espionage charges at home for his historic document leak. "It's scary as hell, but it's worth it. Because if we don't do it, if we see the truth of crimes or corruption in government, and we don't say something about it, we're not just making the world worse for our children, we're making the world worse for us, and we're making ourselves worse," he said. Article source
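Snowden's "bucket" metaphor above describes a concrete architecture: collect everything passively, then query it retroactively once someone becomes interesting. The toy model below shows why safeguards applied only at query time come too late to limit collection; the selectors and messages are invented, and this is obviously an illustration, not XKEYSCORE.

from collections import defaultdict

bucket = defaultdict(list)  # selector -> every message ever intercepted for it

def intercept(selector, message):
    """Runs on all traffic, all the time, regardless of suspicion."""
    bucket[selector].append(message)

def pull_from_bucket(selector):
    """Runs only when someone becomes 'interesting' -- possibly years later."""
    return bucket[selector]

intercept("organizer@example.com", "protest planning call at 7pm")
intercept("neighbor@example.com", "grocery list")

# No new collection is needed at query time; the past is already stored.
print(pull_from_bucket("organizer@example.com"))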
  13. Facebook Bans Devs From Creating Surveillance Tools With User Data

Without a hint of irony, Facebook has told developers that they may not use data from Instagram and Facebook in surveillance tools. The social network says that the practice has long been a contravention of its policies, but it is now tidying up and clarifying the wording of its developer policies.

The American Civil Liberties Union, Color of Change and the Center for Media Justice put pressure on Facebook after it transpired that data from users' feeds was being gathered and sold on to law enforcement agencies. The re-written developer policy now explicitly states that developers are not allowed to "use data obtained from us to provide tools that are used for surveillance."

It remains to be seen just how much of a difference this will make to the gathering and use of data, and there is nothing to say that Facebook's own developers will not continue to engage in the same practices. Rob Sherman, deputy chief privacy officer at Facebook, commented on the change.

Transparency reports published by Facebook show that the company has complied with government requests for data. The secrecy such requests and dealings are shrouded in means that there is no way of knowing whether Facebook is engaged in precisely the sort of activity it is banning others from performing.

Source
  14. Legislation introduced today by New York City council members Dan Garodnick and Vanessa Gibson would finally compel the NYPD — one of the most technology-laden police forces in the country — to make public its rulebook for deploying its controversial surveillance arsenal. The bill, named the Public Oversight of Surveillance Technology (POST) act, would require the NYPD to detail how, when, and with what authority it uses technologies like Stingray devices, which can monitor and interfere with the cellular communications of an entire crowd at once. Specifically, the department would have to publicize the “rules, processes and guidelines issued by the department regulating access to or use of such surveillance technology as well as any prohibitions or restrictions on use, including whether the department obtains a court authorization for each use of a surveillance technology, and what specific type of court authorization is sought.” The NYPD would also have to say how it protects the gathered surveillance data itself (for example, X-ray imagery, or individuals captured in a facial recognition scan), and whether or not this data is shared with other governmental organizations. A period of public comment would follow these disclosures. In a press release, the New York Civil Liberties Union, which has been instrumental in fighting to reveal the mere fact that the NYPD possesses devices like the Stingray, hailed the bill: Public awareness of how the NYPD conducts intrusive surveillance, especially the impacts on vulnerable New Yorkers, is critical to democracy. For too long the NYPD has been using technology that spies on cellphones, sees through buildings and follows your car under a shroud of secrecy, and the bill is a significant step out of the dark ages. It’s unclear whether the bill would apply to products that have both powerful surveillance and non-surveillance functionality, a la Palantir, but the legislation’s definition of “surveillance technology” is sufficiently broad: The term “surveillance technology” means equipment, software, or system capable of, or used or designed for, collecting, retaining, processing, or sharing audio, video, location, thermal, biometric, or similar information, that is operated by or at the direction of the department. Though the bill might do little to curb the use of such technologies, it would at least give those on the sidewalk a better idea of how and when they’re being watched, if not why. The NYPD did not immediately return a request for comment. By Sam Biddle https://theintercept.com/2017/03/01/new-bill-would-force-nypd-to-disclose-its-surveillance-tech-playbook/
  15. The Tor Project, responsible for software that enables anonymous Internet use and communication, is launching a new mobile app to detect internet censorship and surveillance around the world.

The app, called “OONIProbe,” alerts users to the blocking of websites, censorship and surveillance systems, and the speed of networks. Slowing internet speeds down to a crawl is one way governments censor internet content they deem illegal. The app also spells out how users might be able to circumvent the blockage.

Operating under the Tor Project umbrella, the Open Observatory of Network Interference (OONI) is a global observation network that has been watching online censorship since 2012. Data from OONI has detected censorship in countries including Iran, Saudi Arabia, Turkey, South Korea, Greece, China, Russia, India, Indonesia and Sudan. The project watches over 100 countries and serves as a resource to journalists, lawyers, activists, researchers and people on the ground in countries where censorship is prevalent.

In 2016, internet censorship was used in countries like the African nation of Gabon during highly contested elections and subsequent protests. To stop citizens from sharing videos of election irregularities, the country’s internet was down for four days. Earlier in 2016, Uganda engaged in similar widespread censorship. Both countries at times denied their actions, making tools like OONI ever more valuable.

“What Signal did for end-to-end encryption, OONI did for unmasking censorship,” Moses Karanja, a Kenyan researcher on the politics of information controls at Strathmore University’s CIPIT, said in a statement. “Most Africans rely on mobile phones as their primary means of accessing the internet and OONI’s mobile app allows for decentralized efforts in unmasking the nature of censorship and internet performance. The possibilities are exciting for researchers, business and the human rights community around the world. We look forward to interesting days ahead.”

Internet freedom declined for the sixth year in a row in 2016, according to a report from Freedom House, making censorship and surveillance transparency a high priority for activists looking to turn back that momentum. Twenty-four governments blocked access to social media sites and communication services in 2016, compared with 15 governments doing so the previous year, according to Freedom House. Internet freedom fell most precipitously in Uganda, Bangladesh, Cambodia, Ecuador and Libya.

Several countries, including Egypt and the United Arab Emirates, reportedly tried to block Signal, the increasingly popular encrypted messenger developed in the United States. That’s part of a global trend that’s seen governments go after apps like WhatsApp and Telegram in an effort to stymie secure communications.

“Never before has it been so easy to uncover evidence of internet censorship,” Arturo Filastò, OONI’s project lead and core developer, said in a statement. “By simply owning a smartphone (and running ooniprobe), you can now play an active role in increasing transparency around internet controls.”

The app will be available on the Google Play and iOS app stores this week, according to Tor Project spokeswoman Kate Krauss.

Article source
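As a rough illustration of the kind of measurement ooniprobe automates, the sketch below fetches a URL and compares the result against a control observation. This is a drastic simplification of OONI's web-connectivity test, which also compares DNS answers, TLS behavior, and more; the control value here is simply assumed, and the third-party requests library must be installed.

import requests

def check(url, control_status=200, timeout=10):
    """Compare a local fetch against what an unfiltered vantage point saw."""
    try:
        r = requests.get(url, timeout=timeout)
    except requests.RequestException as e:
        return f"anomaly: request failed ({e.__class__.__name__})"
    if r.status_code != control_status:
        return f"anomaly: got HTTP {r.status_code}, control saw {control_status}"
    return "ok: reachable and consistent with control"

print(check("https://example.com/"))

The interesting signal is the disagreement between vantage points: a timeout, a DNS answer pointing at a block page, or an unexpected status code in one country but not another is what turns a single failed request into evidence of censorship.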
  16. Four in Five Britons Fearful Trump Will Abuse their Data

More than three-quarters of Britons believe incoming US President Donald Trump will use his surveillance powers for personal gain, and a similar number want reassurances from the government that data collected by GCHQ will be safeguarded against such misuse.

These are the headline findings from a new Privacy International poll of over 1,600 Brits on the day Trump is inaugurated as the 45th President of the most powerful nation on earth. With that role comes sweeping surveillance powers – the extent of which was only revealed after NSA whistleblower Edward Snowden went public in 2013.

There are many now concerned that Trump, an eccentric reality TV star and gregarious property mogul, could abuse such powers for personal gain. That’s what 78% of UK adults polled by Privacy International believe, and 54% said they had no trust that Trump would use surveillance for legitimate purposes.

Perhaps more important for those living in the United Kingdom is the extent of the information sharing partnership between the US and the UK. Some 73% of respondents said they wanted the government to explain what safeguards exist to ensure any data swept up by their domestic secret services doesn’t end up being abused by the new US administration.

That fear has become even more marked since the passage of the Investigatory Powers Act or 'Snoopers’ Charter', which granted the British authorities unprecedented mass surveillance and hacking powers, as well as forcing ISPs to retain all web records for up to 12 months.

Privacy International claimed that although it has privately been presented with documents detailing the info sharing partnership between the two nations, Downing Street has so far refused to make the information public. The rights group and nine others are currently appealing to the European Court of Human Rights to overturn a decision by the Investigatory Powers Tribunal (IPT) not to release information about the rules governing the US-UK agreement.

“UK and the US spies have enjoyed a cosy secret relationship for a long time, sharing sensitive intelligence data with each other, without parliament knowing anything about it, and without any public consent. Slowly, we’re learning more about the staggering scale of this cooperation and a dangerous lack of sufficient oversight,” argued Privacy International research officer, Edin Omanovic.

“Today, a new President will take charge of US intelligence agencies – a President whose appetite for surveillance powers and how they’re used put him at odds with British values, security, and its people… Given that our intelligence agencies are giving him unfettered access to massive troves of personal data, including potentially about British people, it is essential that the details behind all this are taken out of the shadows.”

Source
  17. Mozilla: The Internet Is Unhealthy And Urgently Needs Your Help

Mozilla argues that the internet's decentralized design is under threat from a few key players, including Google, Facebook, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce, and search. Can the internet as we know it survive the many efforts to dominate and control it, asks Firefox maker Mozilla.

Much of the internet is in a perilous state, and we, its citizens, all need to help save it, says Mark Surman, executive director of Firefox maker the Mozilla Foundation. We may be in awe of the web's rise over the past 30 years, but Surman highlights numerous signs that the internet is dangerously unhealthy, from last year's Mirai botnet attacks, to market concentration, government surveillance and censorship, data breaches, and policies that smother innovation.

"I wonder whether this precious public resource can remain safe, secure and dependable. Can it survive?" Surman asks. "These questions are even more critical now that we move into an age where the internet starts to wrap around us, quite literally," he adds, pointing to the Internet of Things, autonomous systems, and artificial intelligence. In this world, we don't use a computer, "we live inside it", he adds. "How [the internet] works -- and whether it's healthy -- has a direct impact on our happiness, our privacy, our pocketbooks, our economies and democracies."

Surman's call to action coincides with nonprofit Mozilla's first 'prototype' of the Internet Health Report, which looks at healthy and unhealthy trends that are shaping the internet. Its five key areas are open innovation, digital inclusion, decentralization, privacy and security, and web literacy. Mozilla will launch the first report after October, once it has incorporated feedback on the prototype.

That there are over 1.1 billion websites today, running on mostly open-source software, is a positive sign for open innovation. However, Mozilla says the internet is "constantly dodging bullets" from bad policy, such as outdated copyright laws, secretly negotiated trade agreements, and restrictive digital-rights management. Similarly, while mobile has helped put more than three billion people online today, there were 56 internet shutdowns last year, up from 15 shutdowns in 2015, it notes.

Mozilla fears the internet's decentralized design, while flourishing and protected by laws, is under threat from a few key players, including Facebook, Google, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce and search. "While these companies provide hugely valuable services to billions of people, they are also consolidating control over human communication and wealth at a level never before seen in history," it says.

Mozilla approves of the wider adoption of encryption today on the web and in communications but highlights the emergence of new surveillance laws, such as the UK's so-called Snooper's Charter. It also cites as a concern the Mirai malware behind last year's DDoS attacks, which abused unsecured webcams and other IoT devices, and is calling for safety standards, rules and accountability measures.

The report also draws attention to the policy focus on web literacy in the context of learning how to code or use a computer, which ignores other literacy skills, such as the ability to spot fake news, and to separate ads from search results.
Source Alternate Source - 1: Mozilla’s First Internet Health Report Tackles Security, Privacy Alternate Source - 2: Mozilla Wants Infosec Activism To Be The Next Green Movement
  18. Chinese Citizens Can Be Tracked In Real Time

A group of researchers has revealed that the Chinese government is collecting data on its citizens to an extent where their movements can even be tracked in real time using their mobile devices. This discovery was made by The Citizen Lab at the University of Toronto's Munk School of Global Affairs, which specializes in studying the ways in which information technology affects both personal and human rights worldwide.

It has been known for some time that the Chinese government employs a number of invasive tactics to stay fully aware of the lives of its citizens. Citizen Lab, though, was able to discover that the government has begun to monitor its populace using apps and services designed and run by the private sector.

The discovery was made when the researchers began exploring Tencent's popular chat app WeChat, which is installed on the devices of almost every Chinese citizen, with 800 million active users each month. Citizen Lab found that not only does the app help the government censor chats between users, but it is also being used as a state surveillance tool. WeChat's restrictions even remain active for Chinese students studying abroad.

Ronald Deibert, a researcher at Citizen Lab, offered further insight on the team's discovery, saying: "What the government has managed to do, I think quite successfully, is download the controls to the private sector, to make it incumbent upon them to police their own networks".

To make matters worse, the data collected by WeChat and other Chinese apps and services is currently being sold online. The Guangzhou Southern Metropolis Daily led an investigation that found that large amounts of personal data on nearly anyone could be purchased online for a little over a hundred US dollars. The newspaper also found another service that offered the ability to track users in real time via their mobile devices.

Users traveling to China anytime soon should be extra cautious about their activities online and should think twice before installing WeChat during their stay.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

Source
  19. After Spying Webcams, Welcome the Spy Toys “My Friend Cayla and I-Que”

Privacy advocates claim both toys pose a security and privacy threat to children and parents.

Internet-connected toys are currently all the rage among parents and kids alike, but what many of us are not aware of are the associated security dangers of using smart toys. The Center for Digital Democracy has acknowledged that smart toys pose grave privacy, security and similar risks to children, and a pair of smart toys designed to engage with kids exhibits exactly these kinds of privacy and security flaws.

Last year, we reported how the “Hello Barbie” toy spies on kids by talking to them, recording their conversations and sending them to the company’s servers, where they are analyzed and then stored in another cloud server. Now, the dolls My Friend Cayla and i-Que Intelligent Robot, which are being marketed to both boys and girls, are the objects of security concern. In fact, child advocacy, consumer and privacy groups have filed a complaint [PDF] with the Federal Trade Commission against these dolls.

It is suspected that these dolls violate the Children’s Online Privacy Protection Act (COPPA) as well as FTC rules, because they collect and use personal data while communicating with kids, a practice the complaint terms deceptive. The FTC has been asked in the complaint to investigate the matter and take action against the manufacturer of the dolls, Genesis Toys, as well as Nuance Communications, the provider of the third-party voice recognition software for My Friend Cayla and i-Que. The complaints have been filed by these groups: the Campaign for a Commercial-Free Childhood (CCFC), Consumers Union, Center for Digital Democracy (CDD) and the Electronic Privacy Information Center (EPIC).

According to the complainants, these dolls are already creepy looking, and the fact that they gather information makes them even creepier. Both toys use voice recognition technology coupled with internet connectivity and Bluetooth to engage with kids by answering questions and making up conversations. However, according to the CDD, this is done in a very insecure and invasive manner.

Genesis Toys claims on its website that “most of Cayla’s conversational features can be accessed offline,” but searching for information requires internet connectivity. The promotional video for the Cayla doll also focuses on the toy’s ability to communicate with the kid, stating: “ask Cayla almost anything.”

To work, these dolls require mobile apps, though some questions can be asked directly. The toys keep a Bluetooth connection enabled constantly so that the dolls can react to actions in the app and identify objects when the kid taps the screen. Some of the questions asked are recorded and sent to Nuance’s servers for parsing, but it is still unclear how much of the information is kept private. The toys’ manufacturer maintains that complete anonymity is observed.

The toys were released in late 2015 but are still selling like hot cakes. As the researchers stated in the FTC complaint, “by connecting one phone to the doll through the insecure Bluetooth connection and calling that phone with a second phone, they were able to both converse with and covertly listen to conversations collected through the My Friend Cayla and i-Que toys.” This means anyone can use their smartphone to communicate with the child, using the doll as the gateway.
Watch this ad to see how Cayla works. Watch this video to understand how anyone can spy on your child with Cayla and i-Que.

If you own a smart toy, keep an eye on the conversations it has with your kid. Courtesy: CDD Source
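The attack the researchers describe hinges on the dolls accepting any Bluetooth connection without pairing or a PIN. Below is a minimal sketch of what discovering such a device could look like, using the PyBluez library; the advertised device names and the RFCOMM channel are assumptions made for illustration, not details taken from the complaint:

```python
import bluetooth  # PyBluez: pip install pybluez

# Scan for nearby classic Bluetooth devices and look up their names.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)

for addr, name in nearby:
    # Assumption: the toy advertises itself under its product name.
    if "cayla" in name.lower() or "i-que" in name.lower():
        print(f"Found toy: {name} ({addr})")
        sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        try:
            # No PIN or pairing step protects the doll: any phone in
            # radio range can connect. RFCOMM channel 1 is assumed here.
            sock.connect((addr, 1))
            print("Connected without pairing or authentication")
        finally:
            sock.close()
```

A properly secured toy would require a pairing step bound to a parent's device, which is exactly what the complaint says is missing.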
  20. Snowden Leaks Reveal NSA Snooped On In-Flight Mobile Calls

NSA, GCHQ intercepted signals as they were sent from satellites to ground stations.

GCHQ and the NSA have spied on air passengers using in-flight GSM mobile services for years, newly published documents originally obtained by Edward Snowden reveal. Technology from the UK companies AeroMobile and SitaOnAir is used by dozens of airlines to provide in-flight connectivity, including British Airways, Virgin Atlantic, Lufthansa, and many Arab and Asian carriers. Passengers connect to on-board GSM servers, which then communicate with satellites operated by the British firm Inmarsat.

"The use of GSM in-flight analysis can help identify the travel of a target—not to mention the other mobile devices (and potentially individuals) onboard the same plane with them," says a 2010 NSA newsletter.

A presentation, made available by The Intercept, contains details of GCHQ's so-called "Thieving Magpie" programme. GCHQ and the NSA intercepted the signals as they were sent from the satellites to the ground stations that hooked into the terrestrial GSM network. Initially, coverage was restricted to flights in Europe, the Middle East, and Africa, but the surveillance programme was expected to go global at the time the presentation was made. GCHQ's Thieving Magpie presentation explains how in-flight mobile works.

Ars asked all three companies to comment on the extent to which they were aware of the spying, and whether they are able to improve security for their users to mitigate its effects, but had yet to receive replies from Inmarsat or AeroMobile at the time of publication. A SitaOnAir spokesperson did reply to Ars by e-mail.

The Thieving Magpie presentation explains that it is not necessary for calls to be made, or data to be sent, for surveillance to take place. If a phone is switched on and registers with the in-flight GSM service, it can be tracked, provided the plane is flying high enough that ground stations are out of reach. The data, we're told, was collected in "near real time," enabling "surveillance or arrest teams to be put in place in advance" to meet the plane when it lands. Using this system, aircraft can be tracked every two minutes while in flight (the sketch after this article illustrates the idea). If data is sent via the GSM network, GCHQ's presentation says that e-mail addresses, Facebook IDs, and Skype addresses can all be gathered. Online services observed by GCHQ through its airborne surveillance include Twitter, Google Maps, VoIP, and BitTorrent.

Meanwhile, Le Monde reported that "GCHQ could even, remotely, interfere with the working of the phone; as a result the user was forced to redial using his or her access codes." No source is given for that information, which presumably comes from other Snowden documents not yet published. As the French newspaper also points out, the NSA seems, judging by the information provided by Snowden, to have had something of a fixation with Air France flights, apparently because "the CIA considered that Air France and Air Mexico flights were potential targets for terrorists." GCHQ shared that focus: the Thieving Magpie presentation uses aircraft bearing Air France livery to illustrate how in-flight GSM services work.

Ars asked the UK's spies to comment on the latest revelations, and received the usual boilerplate from a GCHQ spokesperson: "It is longstanding policy that we do not comment on intelligence matters." So that's OK, then. Source
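To make the tracking logic concrete: a phone that merely registers with an aircraft's on-board GSM cell betrays which plane it is on, and repeated registrations yield a fix every couple of minutes. The following is a hypothetical sketch of that correlation; the record format, names, and cell-to-flight mapping are invented for illustration and are not drawn from the GCHQ documents:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Registration:
    """One intercepted satellite-to-ground record (format invented)."""
    imsi: str        # SIM identity seen by the on-board GSM cell
    cell_id: str     # identifies the aircraft's pico-cell
    seen_at: datetime

# Mapping of on-board cell IDs to flights, assumed known to the analyst.
CELL_TO_FLIGHT = {"cell-AF0342": "AF342 CDG->YUL"}

def track(target_imsi: str, records: list[Registration]) -> None:
    """Print a position fix for every registration of the target's SIM."""
    for r in sorted(records, key=lambda rec: rec.seen_at):
        if r.imsi == target_imsi:
            flight = CELL_TO_FLIGHT.get(r.cell_id, "unknown flight")
            print(f"{r.seen_at:%H:%M} target registered on {flight}")

track("208011234567890", [
    Registration("208011234567890", "cell-AF0342", datetime(2016, 2, 1, 9, 2)),
    Registration("208011234567890", "cell-AF0342", datetime(2016, 2, 1, 9, 4)),
])
```

Note that nothing in this flow requires the target to place a call or send data; registration alone is enough, which is precisely the point the presentation makes.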
  21. Uber Knows Where You Go, Even After Ride Is Over

Image: Uber's iOS popup asking for new surveillance permissions.

As promised, Uber is now tracking you even when your ride is over. The ride-hailing service says the surveillance, which continues even when riders close the app, will improve its service. The company now tracks customers from the moment they request a ride until five minutes after the ride has ended (a sketch of this collection window follows the article). According to Uber, the move will help drivers locate riders without having to call them, and it will allow Uber to analyze whether people are being dropped off and picked up properly, for example on the correct side of the street. "We do this to improve pickups, drop-offs, customer service, and to enhance safety," Uber said in a statement.

Uber announced last year that it would make the change to allow surveillance in the app's background, prompting a Federal Trade Commission complaint (PDF). The Electronic Privacy Information Center said at the time that "this collection of user's information far exceeds what customers expect from the transportation service. Users would not expect the company to collect location information when customers are not actively using the app." The complaint went nowhere.

However, users must consent to the new surveillance. A popup, like the one shown at the top of this story, asks users to approve the tracking. Uber says on its site that riders "can disable location services through your device settings" and manually enter a pickup address.

Uber and the New York Attorney General's office entered into an agreement in January to help protect users' location data. The deal requires Uber to encrypt location data and to protect it with multi-factor authentication. Source
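The collection window Uber describes reduces to a simple rule: gather location from the moment a ride is requested until five minutes after drop-off. Here is a minimal sketch of that rule; the function and parameter names are hypothetical, and this is in no way Uber's actual code:

```python
from datetime import datetime, timedelta

POST_TRIP_WINDOW = timedelta(minutes=5)  # per Uber's stated policy

def should_collect_location(requested_at: datetime,
                            dropped_off_at: datetime | None,
                            now: datetime) -> bool:
    """True while a trip is active or within five minutes of drop-off."""
    if now < requested_at:
        return False              # no ride requested yet
    if dropped_off_at is None:
        return True               # ride in progress
    return now - dropped_off_at <= POST_TRIP_WINDOW

start = datetime(2016, 12, 1, 18, 0)
end = datetime(2016, 12, 1, 18, 30)
assert should_collect_location(start, end, datetime(2016, 12, 1, 18, 33))
assert not should_collect_location(start, end, datetime(2016, 12, 1, 18, 40))
```

The privacy objection is not to the rule itself but to the fact that collection continues while the app is backgrounded or closed, when users have no visible cue that it is happening.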
  22. Encrypted Email Sign-Ups Instantly Double In Wake of Trump Victory

ProtonMail suggests fear of the Donald is prompting the lockdown.

"ProtonMail follows the Swiss policy of neutrality. We do not take any position for or against Trump," the Swiss company's CEO stated on Monday, before revealing that new user sign-ups immediately doubled following Trump's election victory. ProtonMail has published figures showing that as soon as the election results rolled in, the public began to seek out privacy-focused services such as its own.

CEO Andy Yen said that, in communicating with these new users, the company found people apprehensive about the decisions President Trump might take and what they would mean given the surveillance activities of the National Security Agency. "Given Trump's campaign rhetoric against journalists, political enemies, immigrants, and Muslims, there is concern that Trump could use the new tools at his disposal to target certain groups," Yen said. "As the NSA currently operates completely out of the public eye with very little legal oversight, all of this could be done in secret."

ProtonMail was launched in May 2014 by scientists who had met at CERN and MIT. In response to the Snowden revelations of collusion between the NSA and email providers such as Google, they created a government-resistant, end-to-end encrypted email service (a minimal sketch of the end-to-end principle follows this article). The service was so popular that it was "forced to institute a waiting list for new accounts after signups exceeded 10,000 per day" within the first three days of opening, the CEO previously told The Register when ProtonMail reopened free registration to all earlier this year.

Image: ProtonMail new user sign-ups doubled immediately after Trump's election victory.

Yen said his service is now "seeing an influx of liberal users" despite its popularity on both sides of the political spectrum. "ProtonMail has also long been popular with the political right, who were truly worried about big government spying, and the Obama administration having access to their communications. Now the tables have turned," Yen noted. "One of the problems with having a technological infrastructure that can be abused for mass surveillance purposes is that governments can and do change, quite regularly in fact.

"The only way to protect our freedom is to build technologies, such as end-to-end encryption, which cannot be abused for mass surveillance," Yen added. "Governments can change, but the laws of mathematics upon which encryption is based are much harder to change." Source
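The principle Yen appeals to can be shown in a few lines. ProtonMail itself is built on OpenPGP, so the sketch below, which uses the PyNaCl library instead, only illustrates the general idea: encryption and decryption happen on the endpoints, so a server (or agency) in the middle only ever sees ciphertext:

```python
# pip install pynacl
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # sender's keypair, held on her device
bob_key = PrivateKey.generate()     # recipient's keypair, held on his device

# Alice encrypts on her own device using Bob's public key...
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# ...the mail server only ever stores or relays `ciphertext`...

# ...and only Bob, holding his private key, can recover the message.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Because the private keys never leave the endpoints, a change of government, or a subpoena served on the provider, cannot yield readable mail; that is the "laws of mathematics" point Yen is making.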
  23. Web of Trust Browser Add-On Caught Selling Users' Browsing Histories

Journalists in Germany have uncovered that the browser add-on Web of Trust (WOT) saves users' surfing history in order to sell the data. While the company claims that the data it sells is anonymized, the journalists were able to identify several users, among them journalists, judges, police officers, and politicians of the German government. The politicians reacted with shock when confronted with the findings. The data contained every website the users had visited, for instance travel information or porn sites. In one case the journalists could even access banking details and a copy of an identity card, all stored in an unencrypted online storage service.

This opens the door to blackmail and identity theft. The German politician Valerie Wilms (member of the Bundestag) was shocked when confronted with the data. It contained information such as journey routes and tax data, as well as ideas about her political work. She said that this kind of data "can be very harmful. It can open the door for blackmail", and that she felt "naked". Other politicians called for laws against such data mining if the companies doing it cannot be trusted.

How does it work? The journalists explained that the data they received had been collected by the browser plugin Web of Trust, which rates whether each website a person visits can be trusted. To do so, the plugin sends information about every visited website to its server, where the data is stored and a profile of the user is built. While the company claims that it only sells the data in anonymized form, the journalists said it was rather easy to figure out who the person in question was: the data contained details such as email addresses and login names that made it easy to deduce the user's identity (a toy example of this kind of re-identification follows this article).

Mass surveillance should be illegal. The politicians were shocked when confronted with data showing which websites they had visited. Their statements proved one thing: the politicians being monitored did not feel secure. And they all agreed that such surveillance should be illegal. We at Tutanota agree completely. This is why we encrypt all user data end-to-end. We want to thank the investigative journalists at NDR for their great research. We hope that journalists - and politicians! - will increasingly understand the consequences of all-round surveillance. Wherever there is surveillance, the data can - and will - find its way into the wrong hands. We have to stop any form of monitoring in the first place.

We can win the battle for privacy. When politicians start fighting alongside us, we can win this battle and take back what belongs to us: our personal data. No one should be allowed to accumulate our data and sell it. For now, we can be smarter than the data miners when using the internet: encrypt as much information as possible, use only a few browser plugins and make sure they do not collect your data, use privacy-friendly services that do not collect and sell your data, and pay for your online services instead of paying with your data! Article source
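A toy example of the re-identification the journalists describe: "anonymized" URL histories routinely embed email addresses or login names in query strings, which is enough to put a name to a profile. The sample URLs and the "user" parameter below are invented for illustration:

```python
import re
from urllib.parse import urlsplit, parse_qs

# Crude pattern for email addresses embedded anywhere in a URL.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

# A fabricated slice of one user's "anonymized" browsing history.
history = [
    "https://shop.example/checkout?email=v.wilms@example.de",
    "https://webmail.example/inbox?user=vwilms",
    "https://travel.example/booking/12345",
]

for url in history:
    hits = EMAIL.findall(url)                          # emails in the URL
    query = parse_qs(urlsplit(url).query)
    hits += query.get("user", [])                      # login names in params
    if hits:
        print(f"{url} -> possible identity: {hits}")
```

One such hit is enough to attach a real name to the entire history, which is why stripping a user ID from the records does not make them anonymous.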