Showing results for tags 'surveillance'.
Found 42 results

  1. Russia's slowly building its own Great Firewall model, centralizing internet traffic through government servers. Today, a new "internet sovereignty" law entered into effect in Russia, a law that grants the government the ability to disconnect the entire country from the global internet. The law was formally approved by President Putin back in May. The Kremlin cited the need to have the ability to disconnect Russia's cyberspace from the rest of the world in the event of a national emergency or foreign threat, such as a cyberattack. To achieve these goals, the law mandates that all local ISPs route traffic through special servers managed by Roskomnadzor, the country's telecoms regulator. These servers would act as kill-switches and disconnect Russia from external connections while re-routing internet traffic inside Russia's own internet space, akin to a country-wide intranet -- which the government is calling RuNet.

The Kremlin's recent law didn't come out of the blue. Russian officials have been working on establishing RuNet for more than half a decade. Past efforts included passing laws that force foreign companies to keep the data of Russian citizens on servers located in Russia. However, internet infrastructure experts have called Russia's "disconnect plan" both impractical and idealistic, pointing to the global DNS system as the plan's Achilles' heel. Even US officials doubt that Russia would be able to pull it off. Speaking on stage at the RSA 2019 security conference in March, NSA director General Paul Nakasone said he didn't expect Russia to succeed in disconnecting from the global internet. The technicalities of disconnecting an entire country are simply too complex not to cripple Russia's entire economy, plunging modern services like healthcare or banking back into a dark age.

IT'S A LAW ABOUT SURVEILLANCE, NOT SOVEREIGNTY

The reality is that experts in Russian politics, human rights, and internet privacy have come up with a much more accurate explanation of what's really going on. Russia's new law is just a ruse, a feint, a gimmick. The law's true purpose is to create a legal basis to force ISPs to install deep-packet inspection equipment on their networks and force them to re-route all internet traffic through strategic Roskomnadzor chokepoints. These Roskomnadzor servers are where Russian authorities will be able to intercept and filter traffic at their discretion and with no judicial oversight, similar to China's Great Firewall. The law is believed to be an upgrade to Russia's SORM (System for Operative Investigative Activities). But while SORM provides passive reconnaissance capabilities, allowing Russian law enforcement to retrieve traffic metadata from ISPs, the new "internet sovereignty" law provides a more hands-on approach, including active traffic-shaping capabilities.

Experts say the law was never about internet sovereignty, but about legalizing and disguising mass surveillance without triggering protests from Russia's younger population, which has grown accustomed to the freedom the modern internet provides. Experts at Human Rights Watch have seen through the law's true purpose ever since it was first proposed in the Russian Parliament. Earlier this year, they called the law "very broad, overly vague, and [that it vests] in the government unlimited and opaque discretion to define threats." This vagueness in the law's text allows the government to use it whenever it wishes, for any circumstance.
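At its core, the chokepoint architecture described above is a filtering decision applied to every flow that passes through the regulator's equipment. The following is a purely illustrative toy sketch in Python of what blocklist-based filtering at such a chokepoint reduces to; the hostnames, blocklist, and routing categories are invented for illustration and are not drawn from any published Roskomnadzor specification.

# Illustrative toy model of blocklist-based filtering at a network chokepoint.
# Hostnames, the blocklist, and the routing categories are hypothetical; real
# DPI equipment operates on raw packet streams, not neat Python objects.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_host: str   # e.g. taken from the TLS SNI field or the HTTP Host header
    dst_port: int

BLOCKED_HOSTS = {"blocked.example.org"}        # hypothetical blocklist
DOMESTIC_SUFFIXES = (".ru", ".xn--p1ai")       # .ru and .рф (punycode)

def classify(flow: Flow) -> str:
    """Return a routing decision for one flow: 'drop', 'domestic', or 'forward'."""
    if flow.dst_host in BLOCKED_HOSTS:
        return "drop"        # filtered at the chokepoint, no judicial review involved
    if flow.dst_host.endswith(DOMESTIC_SUFFIXES):
        return "domestic"    # kept inside the national network segment
    return "forward"         # allowed out to the global internet (until a kill-switch is thrown)

if __name__ == "__main__":
    for f in (Flow("10.0.0.5", "blocked.example.org", 443),
              Flow("10.0.0.5", "news.example.ru", 443),
              Flow("10.0.0.5", "example.com", 443)):
        print(f.dst_host, "->", classify(f))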
Many have pointed out that Russia is doing nothing more than copying the Beijing regime, which also approved a similarly vague law in 2016, granting its government the ability to take any actions it sees fit within the country's cyberspace. The two countries have formally cooperated, with China providing help to Russia in implementing a similar Great Firewall technology.

PLANNED DISCONNECT TEST

But while Russia's new law entered into effect today, officials still have to carry out a ton of tests. Last week, the Russian government published a document detailing a scheduled test to take place this month. No exact date was provided. Sources at three Russian ISPs have told ZDNet this week that they haven't been notified of any such tests; however, if they take place, they don't expect the "disconnect" to last more than a few minutes. Tens of thousands protested this new law earlier this year across Russia; however, the government hasn't relented, choosing to arrest protesters and go forward with its plans. Source: Russia's new 'disconnect from the internet' law is actually about surveillance (via ZDNet)
  2. Allowing facial recognition technology to spread without understanding its impact could have serious consequences. In the last few years, facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful; thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it's becoming a standard part of your journey through the airport. And facial recognition is still getting smarter. Increasingly it's not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won't be too hard for a camera to not only recognise who you are, but also to make a pretty good guess at how you are feeling. But one of the biggest potential applications of facial recognition on the near horizon is, of course, for law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries police have been testing facial recognition in a number of situations, with varying results. There's a bigger issue here, as the UK's Information Commissioner Elizabeth Denham notes: "How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?" She warns that when it comes to live facial recognition "never before have we seen technologies with the potential for such widespread invasiveness," and has called for police, government and tech companies to work together to eliminate bias in the algorithms used; particularly that associated with ethnicity. She is not the only one to be raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so. There is always a trade-off between privacy and security. Deciding where to draw the line between the two is key. But we also have to make the decision clearly and explicitly. At the moment there is a great risk that as the use of facial recognition technology by government and business spreads, the decision will be taken away from us. In the UK we've already built up plenty of the infrastructure that you'd need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK; a number that is rising rapidly as we add our own smart doorbells or other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams. I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back. For sure, these technologies can have many benefits, from making it quicker to unlock your phone to recognising criminals in the street.
But allowing these technologies to become pervasive without rigorous debate about the need for them, the effectiveness of them and their broader impact on society is deeply unwise and could leave us facing much bigger problems ahead. Source: We must stop smiling our way towards a surveillance state (via ZDNet)
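To make the technical point above concrete: matching faces from a camera feed against a watchlist takes only a few lines of code with off-the-shelf tools. The sketch below uses the open-source Python face_recognition library; the file names, the watchlist, and the tolerance value are assumptions for illustration, and this is not code from any deployed police or retail system.

# Illustrative sketch: match faces in one video frame against a small watchlist
# using the open-source face_recognition library. File names, the watchlist,
# and the tolerance value are hypothetical.
import face_recognition

watchlist_names = ["person_a", "person_b"]          # hypothetical reference photos on disk
watchlist_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in watchlist_names
]

frame = face_recognition.load_image_file("frame.jpg")   # one frame grabbed from a camera feed

for encoding in face_recognition.face_encodings(frame):
    # Lower tolerance means a stricter match; 0.6 is the library's default.
    matches = face_recognition.compare_faces(watchlist_encodings, encoding, tolerance=0.6)
    for name, matched in zip(watchlist_names, matches):
        if matched:
            print(f"Possible watchlist match: {name}")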
  3. Those who know about us have power over us. Obfuscation may be our best digital weapon. Consider a day in the life of a fairly ordinary person in a large city in a stable, democratically governed country. She is not in prison or institutionalized, nor is she a dissident or an enemy of the state, yet she lives in a condition of permanent and total surveillance unprecedented in its precision and intimacy. As soon as she leaves her apartment, she is on camera: while in the hallway and the elevator of her building, when using the ATM outside her bank, while passing shops and waiting at crosswalks, while in the subway station and on the train — and all that before lunch. A montage of nearly every move of her life in the city outside her apartment could be assembled, and each step accounted for. But that montage would hardly be necessary: Her mobile phone, in the course of its ordinary operation of seeking base stations and antennas to keep her connected as she walks, provides a constant log of her position and movements. Her apps are keeping tabs, too. Any time she spends in “dead zones” without phone reception can also be accounted for: Her subway pass logs her entry into the subway, and her radio-frequency identification badge produces a record of her entry into the building in which she works. (If she drives a car, her electronic toll-collection pass serves a similar purpose, as does automatic license-plate imaging.) If her apartment is part of a smart-grid program, spikes in her electricity usage can reveal exactly when she is up and around, turning on lights and ventilation fans and using the microwave oven and the coffee maker. Surely some of the fault must lie with this individual for using services or engaging with institutions that offer unfavorable terms of service and are known to misbehave. Isn’t putting all the blame on government institutions and private services unfair, when they are trying to maintain security and capture some of the valuable data produced by their users? Can’t we users just opt out of systems with which we disagree? Before we return to the question of opting out, consider how thoroughly the systems mentioned are embedded in our hypothetical ordinary person’s everyday life, far more invasively than mere logs of her daily comings and goings. Someone observing her could assemble in forensic detail her social and familial connections, her struggles and interests, and her beliefs and commitments. From Amazon purchases and Kindle highlights, from purchase records linked with her loyalty cards at the drugstore and the supermarket, from Gmail metadata and chat logs, from search history and checkout records from the public library, from Netflix-streamed movies, and from activity on Facebook and Twitter, dating sites, and other social networks, a very specific and personal narrative is clear. If the apparatus of total surveillance that we have described here were deliberate, centralized, and explicit, a Big Brother machine toggling between cameras, it would demand revolt, and we could conceive of a life outside the totalitarian microscope. But if we are nearly as observed and documented as any person in history, our situation is a prison that, although it has no walls, bars, or wardens, is difficult to escape. Which brings us back to the problem of “opting out.” For all the dramatic language about prisons and panopticons, the sorts of data collection we describe here are, in democratic countries, still theoretically voluntary. 
But the costs of refusal are high and getting higher: A life lived in social isolation means living far from centers of business and commerce, without access to many forms of credit, insurance, or other significant financial instruments, not to mention the minor inconveniences and disadvantages — long waits at road toll cash lines, higher prices at grocery stores, inferior seating on airline flights. It isn’t possible for everyone to live on principle; as a practical matter, many of us must make compromises in asymmetrical relationships, without the control or consent for which we might wish. In those situations — everyday 21st-century life — there are still ways to carve out spaces of resistance, counterargument, and autonomy. We are surrounded by examples of obfuscation that we do not yet think of under that name. Lawyers engage in overdisclosure, sending mountains of vaguely related client documents in hopes of burying a pertinent detail. Teenagers on social media — surveilled by their parents — will conceal a meaningful communication to a friend in a throwaway line or a song title surrounded by banal chatter. Literature and history provide many instances of “collective names,” where a population took a single identifier to make attributing any action or identity to a particular person impossible, from the fictional “I am Spartacus” to the real “Poor Conrad” and “Captain Swing” in prior centuries — and “Anonymous,” of course, in ours. We can apply obfuscation in our own lives by using practices and technologies that make use of it, including:

  • The secure browser Tor, which (among other anti-surveillance technologies) muddles our Internet activity with that of other Tor users, concealing our trail in that of many others.
  • The browser plugins TrackMeNot and AdNauseam, which explore obfuscation techniques by issuing many fake search requests and loading and clicking every ad, respectively.
  • The browser extension Go Rando, which randomly chooses your emotional “reactions” on Facebook, interfering with their emotional profiling and analysis.
  • Playful experiments like Adam Harvey’s “HyperFace” project, finding patterns on textiles that fool facial recognition systems – not by hiding your face, but by creating the illusion of many faces.

If obfuscation has an emblematic animal, it is the family of orb-weaving spiders, Cyclosa mulmeinensis, which fill their webs with decoys of themselves. The decoys are far from perfect copies, but when a wasp strikes they work well enough to give the orb-weaver a second or two to scramble to safety. At its most abstract, obfuscation is the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable. Obfuscation assumes that the signal can be spotted in some way and adds a plethora of related, similar, and pertinent signals — a crowd in which an individual can mix, mingle, and, if only for a short time, hide. There is real utility in an obfuscation approach, whether that utility lies in bolstering an existing strong privacy system, in covering up some specific action, in making things marginally harder for an adversary, or even in the “mere gesture” of registering our discontent and refusal. After all, those who know about us have power over us.
They can deny us employment, deprive us of credit, restrict our movements, refuse us shelter, membership, or education, manipulate our thinking, suppress our autonomy, and limit our access to the good life. There is no simple solution to the problem of privacy, because privacy itself is a solution to societal challenges that are in constant flux. Some are natural and beyond our control; others are technological and should be within our control but are shaped by a panoply of complex social and material forces with indeterminate effects. Privacy does not mean stopping the flow of data; it means channeling it wisely and justly to serve societal ends and values and the individuals who are its subjects, particularly the vulnerable and the disadvantaged. Innumerable customs, concepts, tools, laws, mechanisms, and protocols have evolved to achieve privacy, so conceived, and it is to that collection that we add obfuscation to sustain it — as an active conversation, a struggle, and a choice. By Finn Brunton, assistant professor in the Department of Media, Culture, and Communication at New York University, and Helen Nissenbaum, professor of information science at Cornell Tech. Source
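As a concrete illustration of the decoy-query idea behind tools like TrackMeNot described above: the one real query is buried among randomized decoys sent to the same endpoint, so an observer logging the traffic cannot tell which request reflects genuine intent. This is a minimal sketch with an invented search endpoint and decoy vocabulary, not a reimplementation of TrackMeNot itself.

# Illustrative sketch of decoy-query obfuscation: the real query is hidden
# among randomized fake queries sent to the same (hypothetical) search endpoint.
import random
import time
import urllib.parse
import urllib.request

SEARCH_URL = "https://search.example.com/?q="   # hypothetical endpoint
DECOY_TERMS = ["weather", "bus schedule", "recipe", "news", "football scores",
               "phone review", "museum hours", "translation", "lyrics"]

def send_query(term: str) -> None:
    url = SEARCH_URL + urllib.parse.quote(term)
    try:
        urllib.request.urlopen(url, timeout=5).read()
    except OSError:
        pass  # ignore network errors in this sketch
    print("sent:", term)

def obfuscated_search(real_query: str, decoys: int = 5) -> None:
    queries = random.sample(DECOY_TERMS, decoys) + [real_query]
    random.shuffle(queries)                       # hide the real query's position
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(0.5, 3.0))      # jitter so the burst has no obvious pattern

if __name__ == "__main__":
    obfuscated_search("flight to berlin")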
  4. Facial recognition and algorithm searches of social media are only part of it. An Israeli technician climbs up a pole to install a surveillance camera on a street in the east Jerusalem neighbourhood of Ras al-Amud. An Israeli startup invested in heavily by American companies, including Microsoft, produces facial recognition software used to conduct biometric surveillance on Palestinians, investigations by NBC and Haaretz revealed. In June, Microsoft — which has touted its framework for ethical use of facial recognition — joined a $78 million group investment in AnyVision, an international tech company based in Israel. One of AnyVision’s flagship products is Better Tomorrow, a program that allows the tracking of objects and people on live video feeds, even tracking between independent camera feeds. AnyVision’s facial recognition software is at the heart of a military mass surveillance project in the West Bank, according to the NBC and Haaretz reporting. An Israeli Defense Forces statement in February acknowledged the addition of facial recognition verification technology to at least 27 checkpoints between Israel and the West Bank to “upgrade the crossings” and, in an effort to “deter terror attacks,” rapidly installed a network of over 1,700 cameras across the occupied territories. The combination of tools gives Israel the ability to watch Palestinians all over the West Bank. This is not the first time Israel has engaged in mass surveillance. In the late 2000s, Israeli intelligence services were monitoring how Israeli citizens, mostly Arabs, and Palestinians used Facebook and other social media platforms, looking for specific keywords. China has ramped up its surveillance of its Uighur minority population using artificial intelligence and facial recognition technology. But Microsoft has positioned itself against this kind of use of facial recognition, even releasing guiding ethical principles for this line of work. Shankar Narayan, director of the Technology and Liberty Project at the American Civil Liberties Union, said in an interview with Forbes that he met with Microsoft last year and they had reciprocated his interest in slowing international access to facial recognition technology. Still, he said, he wasn’t surprised. “This particular investment is not a big surprise to me—there’s a demonstrable gap between action and rhetoric in the case of most big tech companies and Microsoft in particular,” Narayan told Forbes’s Thomas Brewster.

How Israel uses technology to conduct surveillance on Palestinians

A 2018 privacy law updated Israeli citizens’ protections, enshrined in a constitutional right to privacy: it required that databases collecting personal information be registered with the government, and that information on Israeli citizens be collected only with their consent. With exceptions for national security, this move brought Israeli privacy law to a higher bar than that of EU regulations. But Palestinians living in the West Bank don’t hold Israeli citizenship and, therefore, are not protected by Israeli privacy laws. Israeli lawyer Jonathan Klinger attributed the surveillance to permissive legal gaps. “What you have to understand is that Israel has three separate legal systems” — one for Israelis inside Israel, one for Israeli settlers in the West Bank, and one for Palestinians living in the territories — “which cause a lot of the actual legal problems we face,” Klinger said. Israel’s monitoring of Palestinians goes far beyond facial recognition.
The country also monitors Palestinians’ social media, looking for cause to arrest them on charges of incitement or intent to carry out a terror attack. The artificial intelligence provided by AnyVision is virtually inescapable. Yael Berda, a Hebrew University professor who served as a lawyer representing Palestinians denied entry permits in Israeli courts, writes in her book Living Emergency that in order to have a permit to cross into Israel proper, Palestinians must consent to the collection of their biometric data. “The intelligence services are seen as omnipotent,” said Berda in an interview with Vox. “It creates a powerful vortex of control.” Searching social media, fielding tips, and considering demographics results in organized tracking of Palestinians for use by the Israeli civil administration, Berda said. Palestinians determined to be a threat or danger end up on a list that bars them from moving through checkpoints. She estimates that upward of 250,000 people are on this list, and that number is only growing. Berda’s experience with the way that Israel collects information on Palestinians makes her skeptical of the country refraining from using facial recognition. “I don’t see a reason, legally, not to use [facial recognition] in checkpoints against terrorists, but I don’t like it,” Klinger said. “It’s a huge violation of [Palestinians’] privacy.” Source
  5. Over 30 civil rights groups are calling on local, state, and federal officials to end police partnerships with Ring, Amazon’s home surveillance company. Thirty-six civil rights organizations signed an open letter demanding that local, state, and federal officials end partnerships between Ring, Amazon’s home surveillance company, and over 405 law enforcement agencies around the country. The open letter, which was published yesterday by digital rights advocacy group Fight for the Future, also demands that municipalities pass surveillance oversight ordinances in order to "deter" police from partnering with companies like Ring in the future. The letter also calls on Congress to investigate Ring's practices. The open letter escalates the petition campaign that Fight for the Future launched in August, which called on civilians to petition their local officials to stop, halt, or ban any police partnerships with Ring. The signatories of the open letter include privacy advocacy groups like Media Justice, the Tor Project, and Media Mobilizing Project, and racial justice coalitions like The Black Alliance for Just Immigration, Mijente, and the American-Arab Anti-Discrimination Committee. The open letter points out that the proliferation of privatized surveillance cameras like Ring, which make it easy for law enforcement to request footage without a warrant, disproportionately puts marginalized and over-policed communities of color at risk. “There’s been a lot of reporting and general discussion about these surveillance partnerships and the potential risks to privacy and civil liberties,” Evan Greer, deputy director of Fight for the Future, said in a phone call. “But this is the first time that a major coalition of significant organizations are explicitly calling on elected officials to do something about it.” When Ring partners with police, the company provides police with a tool called the Law Enforcement Neighborhoods Portal. This tool is an interactive map that allows police to request footage directly from residents, streamlining the process of voluntary evidence sharing. The map previously showed the approximate location of Ring users in a heat map, but the heat map interface was removed in August, according to an email from Ring. As reported by Motherboard, police then have to make an exchange. Some police have to promote Ring implicitly, by only speaking about Ring in company-approved statements and providing download links to Ring’s “neighborhood watch” app, Neighbors. Others must promote it explicitly, by signing agreements stipulating that police must “encourage adoption” of Ring cameras and Neighbors. The open letter points out that some cities subsidize discounts on Ring cameras. As reported by Motherboard, some cities have paid up to $100,000 of taxpayer money in order to fund these discount programs. Greer said that the letter acts on an urgent need for local lawmakers to ask questions about Ring, and for federal lawmakers to formally investigate the company’s practices. “Our elected officials in Congress, at the very least, have to be asking what the implications for our country are if a company like Amazon is able to exponentially increase the number of cameras in our neighborhoods, and at the same time, enter into these cozy partnerships with law enforcement,” Greer said. “There’s no oversight for what [police] can do with [data] once they collect it.” A Ring spokesperson said in an email statement that the company's mission is to "help make neighborhoods safer."
"We work towards this mission in a number of ways, including providing a free tool that can help build stronger relationships between communities and their local law enforcement agencies," Ring said. "We have taken care to design these features in a way that keeps users in control and protects their privacy." Source
  6. Tech investor John Borthwick doesn’t mince words when it comes to how he perceives smart speakers from the likes of Google and Amazon. The founder of venture capital firm Betaworks and former Time Warner and AOL executive believes the information-gathering performed by such devices is tantamount to surveillance.
Image: John Borthwick
“I would say that there's two or three layers sort of problematic layers with these new smart speakers, smart earphones that are in market now,” Borthwick told Yahoo Finance Editor-in-Chief Andy Serwer during an interview for his series “Influencers.” “And so the first is, from a consumer standpoint, user standpoint, is that these, these devices are being used for what's — it's hard to call it anything but surveillance,” Borthwick said. The way forward? Some form of regulation that gives users more control over their own data. “I personally believe that you, as a user and as somebody who likes technology, who wants to use technology, that you should have far more rights about your data usage than we have today,” Borthwick said.

Smart assistants face a reckoning

The venture capitalist’s comments follow a string of controversies surrounding smart assistants including Google’s Assistant, Amazon’s Alexa, and Apple’s (AAPL) Siri, in which each company admitted that human workers listen to users’ queries as a means of improving their digital assistants’ voice recognition capabilities. “They've gone to those devices and they've said, ‘Give us data when people passively act upon the device.’ So in other words, I walk over to that light switch,” Borthwick said. “I turn it off, turn it on, it's now giving data back to the smart speaker.” It’s important to note that smart speakers from every major company are only activated when you use their appropriate wake word. To activate your Echo speaker, for instance, you need to say “Alexa” followed by your command. The same thing goes for Google’s Assistant and Apple’s Siri. The uproar surrounding smart speakers and their assistants began when Bloomberg reported in April that Amazon used a global team of employees and contractors to listen to users’ voice commands to Alexa to improve its speech recognition.
Image: Amazon Echo
That was followed by a similar report by Belgian-based VRT News about Google employees listening to users’ voice commands for Google Assistant. The Guardian then published a third piece about Apple employees listening to users’ Siri commands. Facebook was also pulled into the controversy when Bloomberg reported it had employees listen to users’ voice commands made through its Messenger app. Google and Apple have since apologized, with Google halting the practice, and Apple announcing that it will automatically opt users out of voice sample collection. Users instead will have to opt in if they want to provide voice samples to improve Siri’s voice recognition. Amazon, for its part, has allowed users to opt out of having their voice samples shared with employees, while Facebook said it has paused the practice.
Image: Google Home Mini smart speaker
The use of voice samples has helped improve the voice recognition features of digital assistants, ensuring they are better able to understand what users say and the context in which they say it. At issue is whether users were aware that employees of these companies were listening. There’s also the matter of accidental activations, which can result in employees hearing conversations or snippets of conversations they might otherwise not have been meant to hear.
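The wake-word gating described above, and the accidental activations that follow from it, can be pictured with a toy model: audio is discarded unless a detector's score for the wake word crosses a threshold, and anything that sounds close enough trips the recording. The scoring function, threshold, and phrases below are invented stand-ins; real assistants use on-device neural detectors, not string matching.

# Toy model of wake-word gating and accidental activation. The similarity
# score is a stand-in for a real on-device detector; the threshold and the
# example phrases are hypothetical.
import difflib

WAKE_WORD = "alexa"
THRESHOLD = 0.7   # below this, audio is discarded; at or above it, recording starts

def wake_score(heard: str) -> float:
    """Crude string similarity between what was heard and the wake word."""
    return difflib.SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()

def handle_audio(heard: str) -> None:
    score = wake_score(heard)
    if score >= THRESHOLD:
        # From this point the following audio is captured and may later be
        # reviewed by humans -- including when the trigger was a false alarm.
        print(f"ACTIVATED on '{heard}' (score {score:.2f}); recording command...")
    else:
        print(f"ignored '{heard}' (score {score:.2f})")

if __name__ == "__main__":
    for phrase in ["alexa", "alexei", "a letter", "play some music"]:
        handle_audio(phrase)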
As for how such issues can be dealt with in the future, Borthwick points to some form of tech regulation. Though he doesn’t offer specifics, the VC says that users need to be able to understand how their data is being used, and be able to take control of it. “I think generally, it's about giving the users a lot more power over the decisions that are being made. I think that's one piece of it,” he said. Source
  7. As millions of security cameras become equipped with “video analytics” and other AI-infused technologies that allow computers not only to record but to “understand” the objects they’re capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, “The Dawn of Robot Surveillance.” As the technology becomes more advanced, camera use is shifting from simply capturing and storing video “just in case” to actively evaluating video with real-time analytics and for surveillance. While cameras remain mostly under decentralized ownership and control, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact this could have. The report also listed specific features that could allow for intrusive surveillance and recommendations to curtail potential abuse. Source
  8. Spotify pursues emotional surveillance for global profit

Music is emotional, and so our listening often signals something deeply personal and private. Today, this means music streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy. When a user chooses, for example, a “private listening” session on Spotify, the effect is to make them feel that it’s a one-way relation between person and machine. Of course, that personalization process is Spotify’s way of selling users on its product. But, as it turns out, in a move that should not surprise anyone at this point, Spotify has been selling access to that listening data to multinational corporations. Where other platforms might need to invest more to piece together emotional user profiles, Spotify streamlines the process by providing boxes that users click on to indicate their moods: Happy Hits, Mood Booster, Rage Beats, Life Sucks. All of these are examples of what can now be found on Spotify’s Browse page under the “mood” category, which currently contains eighty-five playlists. If you need a lift in the morning, there’s Wake Up Happy, A Perfect Day, or Ready for the Day. If you’re feeling down, there’s Feeling Down, Sad Vibe, Down in the Dumps, Drifting Apart, Sad Beats, Sad Indie, and Devastating. If you’re grieving, there’s even Coping with Loss, with the tagline: “When someone you love becomes a memory, find solace in these songs.” Over the years, streaming services have pushed a narrative about these mood playlists, suggesting, through aggressive marketing, that the rise of listening by way of moods and activities was a service to listeners and artists alike—a way to help users navigate infinite choice, to find their way through a vast library of forty million songs. It’s a powerful arm of the industry-crafted mythology of the so-called streaming revolution: platforms celebrating this grand recontextualization of music into mood playlists as an engine of discovery. Spotify is currently running a campaign centered on moods—the company’s Twitter tagline is currently “Music for every mood”—complete with its own influencer campaign. But a more careful look into Spotify’s history shows that the decision to define audiences by their moods was part of a strategic push to grow Spotify’s advertising business in the years leading up to its IPO—and today, Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.

Streaming Intelligence Surveillance

In 2015, Spotify began selling advertisers on the idea of marketing to moods, moments, and activities instead of genres. This was one year after Spotify acquired the “music intelligence” firm Echo Nest. Together they began looking at the 1.5 billion user-generated playlists at Spotify’s disposal. Studying these playlists allowed Spotify to more deeply analyze the contexts in which listening was happening on its platform.
And so, right around the time that Spotify realized it had 400,000 user-generated barbecue playlists, Brian Benedik, then Spotify’s North American Vice President of Advertising and Partnerships, noted in an Ad Age interview that the company would focus on moods as a way to grow its advertising mechanism: “This is not something that’s just randomly thrown out there,” Benedik said. “It’s a strategic evolution of the Spotify ads business.” As of May 1, 2015, advertisers would be able to target ads to users of the free ad-supported service based on activities and moods: “Mood categories like happy, chill, and sad will let a brand like Coca-Cola play on its ‘Open Happiness’ campaign when people are listening to mood-boosting music,” the Ad Age article explained. Four years later, Spotify is the world’s biggest streaming subscription service, with 207 million users in seventy-nine different countries. And as Spotify has grown, its advertising machine has exploded. Of those 207 million users, it claims 96 million are subscribers, meaning that 111 million users rely on the ad-supported version. Spotify’s top marketing execs have said that the company’s ambition is “absolutely” to be the third-largest player in the digital ad market behind Google and Facebook. In turn, since 2015, Spotify’s strategic use of mood and emotion-based targeting has only become even more entrenched in its business model. “At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.” Another brand-facing pitch proclaims: “The most exciting part? This new research is starting to reveal the streaming generation’s offline behaviors through their streaming habits.” Today, Spotify Ad Studio, a self-service portal automating the ad-purchase process, promises access to “rich and textured datasets,” allowing brands to instantly target their ads by mood and activity categories like “Chill,” “Commute,” “Dinner,” “Focus/Study,” “Girls Night Out,” and more. And across the Spotify for Brands website are a number of “studies” and “insights reports” regarding research that Spotify has undertaken about streaming habits: “You are what you stream,” they reiterate over and over. In a 2017 package titled Understanding People Through Music—Millennial Edition, Spotify (with help from “youth marketing and millennial research firm” Ypulse) set out to help advertisers better target millennial users by mood, emotion, and activity specifically. Spotify explains that “unlike generations past, millennials aren’t loyal to any specific music genre.” They conflate this with a greater reluctance toward labels and binaries, pointing out the rising number of individuals who identify as gender fluid and the growing demographic of millennials who do not have traditional jobs—and chalk these up to consumer preferences. “This throws a wrench in marketers’ neat audience segmentations,” Spotify commiserates.
For the study, they also gathered six hundred in-depth “day in the life” interviews recorded as “behavioral diaries.” All participants were surveilled by demographics, platform usage, playlist behavior, feature usage, and music tastes—and in the United States (where privacy is taken less seriously), Spotify and Ypulse were able to pair Spotify’s own streaming data with additional third-party information on “broader interests, lifestyle, and shopping behaviors.” The result is an interactive hub on the Spotify for Brands website detailing seven distinct “key audio streaming moments for marketers to tap into,” including Working, Chilling, Cooking, Chores, Gaming, Workout, Partying, and Driving. Spotify also dutifully outlines recommendations for how to use this information to sell shit, alongside success stories from Dunkin’ Donuts, Snickers, Gatorade, Wild Turkey, and BMW. More startlingly, for each of these “moments” there is an animated trajectory of a typical “emotional journey” claiming to predict the various emotional states users will experience while listening to particular playlists. Listeners who are “working,” for instance, are likely to start out feeling pressured and stressed, before they become more energized and focused and end up feeling fine and accomplished at the end of the playlist queue. If they listen while doing chores, the study claims to know that they start out feeling stressed and lazy, then grow motivated and entertained, and end by feeling similarly good and accomplished. In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.

A Leviathan of Ads

The potential of music to provide mood-related data useful to marketers has long been studied. In 1990, the Journal of Marketing published an article dubbed “Music, Mood and Marketing” that surveyed some of this history while bemoaning how “despite being a prominent promotional tool, music is not well understood or controlled by marketers.” The text outlines how “marketers are precariously dependent on musicians for their insight into the selection or composition of the ‘right’ music for particular situations.” This view of music as a burdensome means to a marketer’s end is absurd, but it’s also the logic that rules the current era of algorithmic music platforms. Unsurprisingly, this 1990 article aimed to overcome challenges for marketers by figuring out new ways to extract value from music that would be beyond the control of musicians themselves: studying the “behavioral effects” of music with a “special emphasis on music’s emotional expressionism and role as mood influencer” in order to create new forms of power and control. Today, marketers want mood-related data more than ever, at least in part to fuel automated, personalized ad targeting. In 2016, the world’s largest holding company for advertising and PR agencies, WPP, announced that it had struck a multi-year partnership with Spotify, giving the conglomerate unprecedented access to Spotify’s mood data specifically.
The partnership with the WPP, it turns out, was part of Spotify’s plan to ramp up its advertising business in advance of its IPO. WPP is the parent company to several of the world’s largest and oldest advertising, PR, and brand consulting firms, including Ogilvy, Grey Global Group, and at least eighteen others. Across their portfolio, WPP owns companies that work with numerous mega-corporations and household brands, helping shill everything from cars, Coca-Cola, and KFC to booze, banks, and Burger King. Over the decades, these companies have worked on campaigns spanning from Merrill Lynch and Lay’s potato chips to Colgate-Palmolive and Ford. Additionally, WPP properties also include tech-focused companies that claim proficiency in automation- and personalization-driven ad sales, all of which would now benefit from Spotify’s mood data. The 2016 announcement of WPP and Spotify’s global partnership in “data, insights, creative, technology, innovation, programmatic solution, and new growth markets” speaks for itself: WPP now has unique listening preferences and behaviors of Spotify’s 100 million users in 60 countries. The multi-year deal provides differentiating value to WPP and its clients by harnessing insights from the connection between music and audiences’ moods and activities. Music attributes such as tempo and energy are proven to be highly relevant in predicting mood, which enables advertisers to understand their audiences in a new emotional dimension. What’s more, WPP-owned advertising agencies could now access the “Wunderman Zipline™ Data Management Platform” to gain direct access to Spotify users’ “mood, listening and playlist behavior, activity and location.” They’d also potentially make use of “Spotify’s data on connected device usage” while the WPP-owned company GroupM specifically would retain access to “an exclusive infusion of Spotify data” into its own platform made for corporate ad targeting. Per the announcement, WPP companies would also serve as launch partners for new types of advertising and new markets unveiled by Spotify, while procuring “visibility into Spotify’s product roadmap and access to beta testing.” At the time the partnership was announced, Anas Ghazi, then Managing Director of Global Partnerships at WPP’s Data Alliance, noted that all WPP agencies would be able to “grab these insights. . . . If you think about how music shapes your activity and thoughts, this is a new, unique play for us to find audiences. Mood and moments are the next pieces of understanding audiences.” And Harvey Goldhersz, then CEO of GroupM Data & Analytics, salivated: “The insights we’ll develop from Spotify’s behavioral data will help our clients realize a material marketplace advantage, aiding delivery of ads that are appropriate to the consumer’s mood and the device used.”

Ongoing Synergies

While this deal was announced via the WPP Data Alliance, visiting that particular organization’s website now auto-directs back to the main WPP website, likely a result of corporate restructuring that WPP underwent over the past year. Currently, the only public-facing evidence of the relationship between WPP and Spotify is listed online under the WPP-owned data and insights company Kantar, which WPP describes as “the world’s leading marketing data, insight and consultancy company.” What might Kantar be doing with this user data?
The current splash video deck on its website is useful: it claims to be the first agency to use “facial recognition in advertising testing,” and it professes to be exploring new technologies “from biodata and biometrics and healthcare, to capturing human sentiment and even voting intentions by analyzing social media.” And, finally, it admits to “exploiting big data, artificial intelligence and analytics . . . to predict attitudes and behavior.” When we reached out to see if the relationship between Kantar and Spotify had changed since the initial 2016 announcement, Kantar sent The Baffler this statement: The 2016 Spotify collaboration was the first chapter of many-a collaboration and has continued to evolve to meet the dynamic needs of our clients and marketplace. Spotify continues to be a valued partner of larger enterprise and Kantar with on-going synergies. One year after the announcement of the partnership, in 2017, Spotify further confirmed its desire to establish direct relationships with the world’s biggest advertising agencies when it hired two executives from WPP: Angela Solk, now Global Head of Agency Partnerships, whose job at Spotify includes teaching WPP and other ad agencies how to best make use of Spotify’s first-party data. (In Solk’s first year at Spotify, she helped create the Smirnoff Equalizer; in a 2018 interview, she reflected on the “beauty” of that branded content campaign and Spotify’s ability to extract listener insight and make it “part of Smirnoff’s DNA.”) Spotify also hired Craig Weingarten as its Head of Industry, Automotive, who now leads Spotify’s Detroit-based auto ad sales team. According to its own media narrative, Spotify offers data access to brands that competitor platforms do not, and it has gained a reputation for its eagerness to share its first-party data. At advertising conferences and in the ad press, Spotify’s top ad exec Marco Bertozzi has emphasized how Spotify hopes to widely share first-party data, going so far as to confess, “When other walled gardens say no to data questions . . . we say yes.” (Bertozzi was also the mind behind an internal Spotify campaign adorably branded “#LoveAds” to combat growing societal disgust with digital advertising. #LoveAds started as a mantra of the advertising team, but as Bertozzi proudly explained in late 2018, “#LoveAds became a movement within the company.”) Spotify has spent tremendous energy on its ad team’s proficiency with cross-device advertising options (likely due to the imminent ubiquity of Spotify in the car and the so-called “smart home”), as well as “programmatic advertising,” otherwise understood as the targeted advertising sold through an automated process, often in milliseconds—Spotify seeks to be the most dominant seller of such advertising in the audio space. And there’s also the self-serve platform, direct inventory sales, Spotify’s private marketplace (an invite-only inventory for select advertisers), programmatic guaranteed deals (a guaranteed volume of impressions at a fixed price)—the jargon ad-speak lists could go on and on. Trying to keep tabs on Spotify’s advertising products and partnerships is dizzying. But what is clear is that the hype surrounding these partnerships has often focused on “moods and moments”-related data Spotify offers brands—not to mention the company’s penchant for allowing brands to combine their own data with Spotify’s.
In 2017, Spotify’s Brian Benedik told The Drum that Spotify’s access to listening habits and first-party data is “one of the reasons that some of these big multi-national brands like the Samsungs and the Heinekens and the Microsofts and Procter and Gambles of the world are working with us a lot closer than they ever have . . . they don’t see that or get that from any other platform out there.” And it appears that things will only get darker. Julie Clark, Spotify’s Global Head of Automation Sales, said earlier this year in an interview that its targeting capabilities are growing: “There’s deeper first party-data that’s going to become available as well.”

Mood-Boosterism

Recently, I tried out a mood-related experiment on Spotify. I created a new account and only listened to the “Coping with Loss” playlist on loop for a few days. I paid particular attention to the advertisements that I was served by Spotify. And while I do not know for sure whether listening to the “Coping with Loss” playlist caused me to receive an unusually nostalgic Home Depot ad about how your carpets contain memories, or an ad for a particularly angsty new album called Amidst the Chaos, the extent to which Spotify is matching moods and emotions with advertisements certainly makes it seem possible. What was clearer: during my time spent listening exclusively to songs about grieving, Spotify was quick to recommend that I brighten my mood. Under the heading “More like Coping With Loss . . .” it recommended playlists themed for Father’s Day and Mother’s Day, and playlists called “Warm Fuzzy Feelings,” “Soundtrack Love Songs,” “90s Love Songs,” “Love Ballads,” and “Acoustic Hits.” Spotify evidently did not want me to sit with my sorrow; it wanted my mood to improve. It wanted me to be happy. This is because Spotify specifically wants to be seen as a mood-boosting platform. In Spotify for Brands blog posts, the company routinely emphasizes how its own platform distinguishes itself from other streams of digital content, particularly because it gives marketers a chance to reach users through a medium that is widely seen as a “positive enhancer”: a medium they turn to for “music to help them get through the less desirable moments in their day, improve the more positive ones and even discover new things about their personality,” says Spotify. “We’re quite unique in that we have people’s ears . . . combine that with the psycho-graphic data that we have and that becomes very powerful for brands,” said Jana Jakovljevic in 2015, then Head of Programmatic Solutions; she is now employed by AI ad-tech company Cognitiv, which claims to be “the first neural network technology that unearths patterns of consumer behavior” using “deep learning” to predict and target consumers. The fact that experience at Spotify could prepare someone for such a career shift is worth some reflection. But more interestingly, Jakovljevic added that Spotify was using this data in many ways, including to determine exactly what type of music to recommend, which is important to remember: the data that is used to sell advertisers on the platform is also the data driving recommendations. The platform can recommend music in ways that appease advertisers while promising them that mood-boosting ad space.
What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use their “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform. In appealing to advertisers, Spotify also celebrates its position as a background experience and in particular how this benefits advertisers and brands. Jorge Espinel, who was Head of Global Business Development at Spotify for five years, once said in an interview: “We love to be a background experience. You’re competing for consumer attention. Everyone is fighting for the foreground. We have the ability to fight for the background. And really no one is there. You’re doing your email, you’re doing your social network, etcetera.” In other words, it is in advertisers’ best interests that Spotify stays a background experience. When a platform like Spotify sells advertisers on its mood-boosting, background experience, and then bakes these aims into what it recommends to listeners, a twisted form of behavior manipulation is at play. It’s connected to what Shoshana Zuboff, in The Age of Surveillance Capitalism: The Fight for A Human Future at the New Frontier of Power, calls the “behavioral futures market”—where “many companies are eager to lay their bets on our future behavior.” Indeed, Spotify seeks not just to monitor and mine our mood, but also to manipulate future behavior. “What we’d ultimately like to do is be able to predict people’s behavior through music,” Les Hollander, the Global Head of Audio and Podcast Monetization, said in 2017. “We know that if you’re listening to your chill playlist in the morning, you may be doing yoga, you may be meditating . . . so we’d serve a contextually relevant ad with information and tonality and pace to that particular moment.” Very Zen! Spotify’s treatment of its mood and emotion data as a form of currency in the greater data marketplace should be considered more generally in the context of the tech industry’s rush to quantify our emotions. There is a burgeoning industry surrounding technology that alleges to mine our emotional states in order to feed AI projects; take, for example, car companies that claim they can use facial recognition to read your mood and keep you safer on the road. Or Facebook’s patents on facial recognition software. Or unnerving technologies like Affectiva, which claim to be developing an industry around “emotion AI” and “affective computing” processes that measure human emotions. It remains to be seen how Spotify could leverage such tech to maintain its reputation as a mood-boosting platform. And yet we should admit that it’s good for business for Spotify to manipulate people’s emotions on the platform toward feelings of chillness, contentment, and happiness. This has immense consequences for music, of course, but what does it mean for news and politics and culture at large, as the platform is set to play a bigger role in mediating all of the above, especially as its podcasting efforts grow? On the Spotify for Brands blog, the streaming giant explains that its research shows millennials are weary of most social media and news platforms, feeling that these mediums affect them negatively. Spotify is a solution for brands, it explains, because it is a platform where people go to feel good. Of course, in this telling of things, Spotify conveniently ignores why those other forms of media feel so bad. 
It’s because they are platforms that prioritize their own product and profit above all else. It’s because they are platforms governed by nothing more than surveillance technology and the mechanisms of advertising. Source
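On the buyer's side, the mood-keyed targeting described in the article above ultimately reduces to a lookup: the mood category of the playlist currently playing is matched against whichever campaigns bid on that mood. The sketch below is purely hypothetical; the category names echo the playlists mentioned above, but the campaigns, creatives, and matching logic are invented and bear no relation to Spotify's actual ad systems.

# Purely hypothetical sketch of mood-keyed ad selection: map the mood category
# of the playlist currently playing to a campaign that targeted that mood.
# Campaign names and creatives are invented; categories echo the article.
from dataclasses import dataclass

@dataclass
class Campaign:
    advertiser: str
    creative: str
    target_moods: frozenset

CAMPAIGNS = [
    Campaign("SodaCo", "open_happiness.mp3", frozenset({"happy hits", "mood booster"})),
    Campaign("CoffeeCo", "monday_fuel.mp3", frozenset({"wake up happy", "commute", "focus/study"})),
    Campaign("HomeGoodsCo", "carpets_hold_memories.mp3", frozenset({"coping with loss", "sad indie"})),
]

def select_ad(current_playlist_mood: str) -> str:
    mood = current_playlist_mood.lower()
    for campaign in CAMPAIGNS:
        if mood in campaign.target_moods:
            return f"{campaign.advertiser}: {campaign.creative}"
    return "house ad (no mood match)"

if __name__ == "__main__":
    for mood in ["Mood Booster", "Coping with Loss", "Rage Beats"]:
        print(mood, "->", select_ad(mood))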
  9. DUBLIN (Reuters) - The European Court of Justice (ECJ) will hear a landmark privacy case regarding the transfer of EU citizens’ data to the United States in July, after Facebook’s bid to stop its referral was blocked by Ireland’s Supreme Court on Friday. The case, which was initially brought against Facebook by Austrian privacy activist Max Schrems, is the latest to question whether methods used by technology firms to transfer data outside the 28-nation European Union give EU consumers sufficient protection from U.S. surveillance. A ruling by Europe’s top court against the current legal arrangements would have major implications for thousands of companies, which make millions of such transfers every day, including human resources databases, credit card transactions and storage of internet browsing histories. The Irish High Court, which heard Schrems’ case against Facebook last year, said there were well-founded concerns about an absence of an effective remedy in U.S. law compatible with EU legal requirements, which prohibit personal data being transferred to a country with inadequate privacy protections. The High Court ordered the case be referred to the ECJ to assess whether the methods used for data transfers - including standard contractual clauses and the so-called Privacy Shield agreement - were legal. Facebook took the case to the Supreme Court when the High Court refused its request to appeal the referral, but in a unanimous decision on Friday, the Supreme Court said it would not overturn any aspect of the ruling. The High Court’s original five-page referral asks the ECJ if the Privacy Shield - under which companies certify they comply with EU privacy law when transferring data to the United States - does in fact mean that the United States “ensures an adequate level of protection”. Facebook came under scrutiny last year after it emerged the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica. More generally, data privacy has been a growing public concern since revelations in 2013 by former U.S. intelligence contractor Edward Snowden of mass U.S. surveillance caused political outrage in Europe. The Privacy Shield was hammered out between the EU and the United States after the ECJ struck down its predecessor, Safe Harbour, on the grounds that it did not afford Europeans’ data enough protection from U.S. surveillance. That case was also brought by Schrems via the Irish courts. “Facebook likely again invested millions to stop this case from progressing. It is good to see that the Supreme Court has not followed,” Schrems said in a statement. Source
  10. T-Mobile, Sprint, and AT&T are selling access to their customers’ location data, and that data is ending up in the hands of bounty hunters and others not authorized to possess it, letting them track most phones in the country. Nervously, I gave a bounty hunter a phone number. He had offered to geolocate a phone for me, using a shady, overlooked service intended not for the cops, but for private individuals and businesses. Armed with just the number and a few hundred dollars, he said he could find the current location of most phones in the United States. The bounty hunter sent the number to his own contact, who would track the phone. The contact responded with a screenshot of Google Maps, containing a blue circle indicating the phone’s current location, approximate to a few hundred metres. Queens, New York. More specifically, the screenshot showed a location in a particular neighborhood—just a couple of blocks from where the target was. The hunter had found the phone (the target gave their consent to Motherboard to be tracked via their T-Mobile phone.) The bounty hunter did this all without deploying a hacking tool or having any previous knowledge of the phone’s whereabouts. Instead, the tracking tool relies on real-time location data sold to bounty hunters that ultimately originated from the telcos themselves, including T-Mobile, AT&T, and Sprint, a Motherboard investigation has found. These surveillance capabilities are sometimes sold through word-of-mouth networks. Whereas it’s common knowledge that law enforcement agencies can track phones with a warrant to service providers, IMSI catchers, or until recently via other companies that sell location data such as one called Securus, at least one company, called Microbilt, is selling phone geolocation services with little oversight to a spread of different private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters, according to sources familiar with the company’s products and company documents obtained by Motherboard. Compounding that already highly questionable business practice, this spying capability is also being resold to others on the black market who are not licensed by the company to use it, including me, seemingly without Microbilt’s knowledge. Motherboard’s investigation shows just how exposed mobile networks and the data they generate are, leaving them open to surveillance by ordinary citizens, stalkers, and criminals, and comes as media and policy makers are paying more attention than ever to how location and other sensitive data is collected and sold. The investigation also shows that a wide variety of companies can access cell phone location data, and that the information trickles down from cell phone providers to a wide array of smaller players, who don’t necessarily have the correct safeguards in place to protect that data. “People are reselling to the wrong people,” the bail industry source who flagged the company to Motherboard said. Motherboard granted the source and others in this story anonymity to talk more candidly about a controversial surveillance capability. Your mobile phone is constantly communicating with nearby cell phone towers, so your telecom provider knows where to route calls and texts. From this, telecom companies also work out the phone’s approximate location based on its proximity to those towers. 
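To make that mechanism concrete, here is a minimal, hypothetical Python sketch of how a coarse position estimate can be derived from nearby towers and relative signal strength. It is a toy weighted-centroid calculation; real carrier methods differ, and the tower coordinates, field names, and helper functions below are invented for illustration only.

```python
# Toy illustration of tower-proximity positioning (hypothetical weighted centroid).
# Real carriers use more sophisticated methods; this only shows why proximity to
# a handful of towers is enough for a rough, neighborhood-level fix.
from dataclasses import dataclass


@dataclass
class Tower:
    lat: float
    lon: float
    signal_dbm: float  # stronger (less negative) signal implies the phone is closer


def estimate_position(towers):
    """Weight each tower by signal strength and average the coordinates."""
    # Convert dBm readings (e.g. -60 strong, -110 weak) into positive weights.
    weights = [max(t.signal_dbm + 120.0, 1.0) for t in towers]
    total = sum(weights)
    lat = sum(t.lat * w for t, w in zip(towers, weights)) / total
    lon = sum(t.lon * w for t, w in zip(towers, weights)) / total
    return lat, lon


if __name__ == "__main__":
    nearby = [
        Tower(40.7282, -73.7949, -60),  # hypothetical towers around Queens, NY
        Tower(40.7350, -73.8000, -85),
        Tower(40.7200, -73.7900, -95),
    ]
    print(estimate_position(nearby))  # a rough fix, good to a few hundred metres at best
```

Even this crude averaging is enough to place a handset within a particular neighborhood, which is roughly the level of accuracy the bounty hunter's screenshot showed.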
Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low-level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States with a click of a mouse. There’s a complex supply chain that shares some of American cell phone users’ most sensitive data, with the telcos potentially being unaware of how the data is being used by the eventual end user, or even whose hands it lands in. Financial companies use phone location data to detect fraud; roadside assistance firms use it to locate stuck customers. But AT&T, for example, told Motherboard the use of its customers’ data by bounty hunters goes explicitly against the company’s policies, raising questions about how AT&T allowed the sale for this purpose in the first place. “The allegation here would violate our contract and Privacy Policy,” an AT&T spokesperson told Motherboard in an email. In the case of the phone we tracked, six different entities had potential access to the phone’s data. T-Mobile shares location data with an aggregator called Zumigo, which shares information with Microbilt. Microbilt shared that data with a customer using its mobile phone tracking product. The bounty hunter then shared this information with a bail industry source, who shared it with Motherboard. The CTIA, a telecom industry trade group of which AT&T, Sprint, and T-Mobile are members, has official guidelines for the use of so-called “location-based services” that “rely on two fundamental principles: user notice and consent,” the group wrote in those guidelines. Telecom companies and data aggregators that Motherboard spoke to said that they require their clients to get consent from the people they want to track, but it’s clear that this is not always happening. A second source who has tracked the geolocation industry told Motherboard, while talking about the industry generally, “If there is money to be made they will keep selling the data.” “Those third-level companies sell their services. That is where you see the issues with going to shady folks [and] for shady reasons,” the source added. Frederike Kaltheuner, data exploitation programme lead at campaign group Privacy International, told Motherboard in a phone call that “it’s part of a bigger problem; the US has a completely unregulated data ecosystem.” Microbilt buys access to location data from an aggregator called Zumigo and then sells it to a dizzying number of sectors, including landlords scoping out potential renters, motor vehicle salesmen, and others conducting credit checks. Armed with just a phone number, Microbilt’s “Mobile Device Verify” product can return a target’s full name and address, geolocate a phone in an individual instance, or operate as a continuous tracking service. “You can set up monitoring with control over the weeks, days and even hours that location on a device is checked as well as the start and end dates of monitoring,” a company brochure Motherboard found online reads.
Posing as a potential customer, Motherboard explicitly asked a Microbilt customer support staffer whether the company offered phone geolocation for bail bondsmen. Shortly after, another staffer emailed with a price list—locating a phone can cost as little as $4.95 each if searching for a low number of devices. That price gets even cheaper as the customer buys the capability to track more phones. Getting real-time updates on a phone’s location can cost around $12.95. “Dirt cheap when you think about the data you can get,” the source familiar with the industry added. It’s bad enough that access to highly sensitive phone geolocation data is already being sold to a wide range of industries and businesses. But there is also an underground market that Motherboard used to geolocate a phone—one where Microbilt customers resell their access at a profit, and with minimal oversight. “Blade Runner, the iconic sci-fi movie, is set in 2019. And here we are: there's an unregulated black market where bounty-hunters can buy information about where we are, in real time, over time, and come after us. You don't need to be a replicant to be scared of the consequences,” Thomas Rid, professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat. The bail industry source said his middleman used Microbilt to find the phone. This middleman charged $300, a sizeable markup on the usual Microbilt price. The Google Maps screenshot provided to Motherboard of the target phone’s location also included its approximate longitude and latitude coordinates, and a range of how accurate the phone geolocation is: 0.3 miles, or just under 500 metres. It may not necessarily be enough to geolocate someone to a specific building in a populated area, but it can certainly pinpoint a particular borough, city, or neighborhood. In other cases, phone geolocation is typically done with the consent of the target, perhaps by sending a text message the user has to deliberately reply to, signalling they accept their location being tracked. This may be done in the earlier roadside assistance example or when a company monitors its fleet of trucks. But when Motherboard tested the geolocation service, the target phone received no warning it was being tracked. The bail source who originally alerted Motherboard to Microbilt said that bounty hunters have used phone geolocation services for non-work purposes, such as tracking their girlfriends. Motherboard was unable to identify a specific instance of this happening, but domestic stalkers have repeatedly used technology, such as mobile phone malware, to track spouses. As Motherboard was reporting this story, Microbilt removed documents related to its mobile phone location product from its website. https://www.documentcloud.org/documents/5676919-Microbilt-Mobile-Device-Verify-2018.html A Microbilt spokesperson told Motherboard in a statement that the company requires that anyone using its mobile device verification services for fraud prevention first obtain the consent of the consumer. Microbilt also confirmed it found an instance of abuse on its platform—our phone ping. “The request came through a licensed state agency that writes in approximately $100 million in bonds per year and passed all up front credentialing under the pretense that location was being verified to mitigate financial exposure related to a bond loan being considered for the submitted consumer,” Microbilt said in an emailed statement.
In this case, “licensed state agency” is referring to a private bail bond company, Motherboard confirmed. “As a result, MicroBilt was unaware that its terms of use were being violated by the rogue individual that submitted the request under false pretenses, does not approve of such use cases, and has a clear policy that such violations will result in loss of access to all MicroBilt services and termination of the requesting party’s end-user agreement,” Microbilt added. “Upon investigating the alleged abuse and learning of the violation of our contract, we terminated the customer’s access to our products and they will not be eligible for reinstatement based on this violation.” Zumigo confirmed it was the company that provided the phone location to Microbilt and defended its practices. In a statement, Zumigo did not seem to take issue with the practice of providing data that ultimately ended up with licensed bounty hunters, but wrote, “illegal access to data is an unfortunate occurrence across virtually every industry that deals in consumer or employee data, and it is impossible to detect a fraudster, or rogue customer, who requests location data of his or her own mobile devices when the required consent is provided. However, Zumigo takes steps to protect privacy by providing a measure of distance (approx. 0.5-1.0 mile) from an actual address.” Zumigo told Motherboard it has cut Microbilt’s data access. In Motherboard’s case, the successfully geolocated phone was on T-Mobile. “We take the privacy and security of our customers’ information very seriously and will not tolerate any misuse of our customers’ data,” a T-Mobile spokesperson told Motherboard in an emailed statement. “While T-Mobile does not have a direct relationship with Microbilt, our vendor Zumigo was working with them and has confirmed with us that they have already shut down all transmission of T-Mobile data. T-Mobile has also blocked access to device location data for any request submitted by Zumigo on behalf of Microbilt as an additional precaution.” Microbilt’s product documentation suggests the phone location service works on all mobile networks; however, the middleman was unable or unwilling to conduct a search for a Verizon device. Verizon did not respond to a request for comment. AT&T told Motherboard it has cut access to Microbilt as the company investigates. “We only permit the sharing of location when a customer gives permission for cases like fraud prevention or emergency roadside assistance, or when required by law,” the AT&T spokesperson said. Sprint told Motherboard in a statement that “protecting our customers’ privacy and security is a top priority, and we are transparent about that in our Privacy Policy [...] Sprint does not have a direct relationship with MicroBilt. If we determine that any of our customers do and have violated the terms of our contract, we will take appropriate action based on those findings.” Sprint would not clarify the contours of its relationship with Microbilt. These statements sound very familiar. When The New York Times and Senator Ron Wyden published details of Securus last year, the firm that was offering geolocation to low-level law enforcement without a warrant, the telcos said they were taking extra measures to make sure their customers’ data would not be abused again. Verizon announced it was going to limit data access to companies not using it for legitimate purposes. T-Mobile, Sprint, and AT&T followed suit shortly after with similar promises.
After Wyden’s pressure, T-Mobile’s CEO John Legere tweeted in June last year “I’ve personally evaluated this issue & have pledged that @tmobile will not sell customer location data to shady middlemen.” Months after the telcos said they were going to combat this problem, in the face of an arguably even worse case of abuse and data trading, they are saying much the same thing. Last year, Motherboard reported on a company that previously offered phone geolocation to bounty hunters; here Microbilt is operating even after a wave of outrage from policy makers. In its statement to Motherboard on Monday, T-Mobile said it has nearly finished the process of terminating its agreements with location aggregators. “It would be bad if this was the first time we learned about it. It’s not. Every major wireless carrier pledged to end this kind of data sharing after I exposed this practice last year. Now it appears these promises were little more than worthless spam in their customers’ inboxes,” Wyden told Motherboard in a statement. Wyden is proposing legislation to safeguard personal data. Due to the ongoing government shutdown, the Federal Communications Commission (FCC) was unable to provide a statement. “Wireless carriers’ continued sale of location data is a nightmare for national security and the personal safety of anyone with a phone,” Wyden added. “When stalkers, spies, and predators know when a woman is alone, or when a home is empty, or where a White House official stops after work, the possibilities for abuse are endless.” Source
  11. from the 'intel-techniques,'-indeed dept A little opsec goes a long way. The Massachusetts State Police -- one of the most secretive law enforcement agencies in the nation -- gave readers of its Twitter feed a free look at the First Amendment-protected activities it keeps tabs on… by uploading a screenshot showing its browser bookmarks. Alex Press of Jacobin Magazine was one of the Twitter users to catch the inadvertent exposure of MSP operations. If you can't read/see the tweet, it says: the MA staties just unintentionally tweeted a photo that shows their bookmarks include a whole number of Boston’s left-wing orgs The tweet was quickly scrubbed by the MSP, but not before other Twitter users had grabbed screenshots. Some of the activist groups bookmarked by the state police include Mass. Action Against Police Brutality, the Coalition to Organize and Mobilize Boston Against Trump, and Resistance Calendar. Here's a closer look at the bookmarks. The MSP did not deny they keep (browser) tabs on protest organizations. Instead, it attempted to portray this screen of left-leaning bookmarks as some sort of non-partisan, non-cop-centric attempt to keep the community safe by being forewarned and forearmed. Ok. But mainly these groups? The ones against police brutality and the back-the-blue President? Seems a little one-sided for an "of any type and by any group" declaration. The statement continues in the same defensive vein for a few more sentences, basically reiterating the false conceit that cops don't take sides when it comes to activist groups and the good people of Massachusetts are lucky to have such proactive public servants at their disposal. Whatever. If it wasn't a big deal, the MSP wouldn't have vanished the original tweet into the internet ether. The screenshot came from a "fusion center" -- one of those DHS partnerships that results in far more rights violations and garbage "see something, say something" reports than "actionable intelligence". Fusion centers are supposed to be focused on terrorism, not on people who don't like police brutality or the current Commander in Chief. What this looks like is probably what it is: police keeping tabs on people they don't like or people who don't like them. That's not really what policing is about and it sure as hell doesn't keep the community any safer. Source
  12. By Bruce Schneier The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" where they claim their needs for surveillance outweigh everyone's needs for security and privacy. ...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution. Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority. The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations. To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake here: we need to decide, as nations and as a society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it. This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are too critical to our society and -- now that they can affect the world in a direct physical manner -- to our lives and property as well. This is what I just wrote, in Click Here to Kill Everybody: There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world. This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals.
Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses. Yes, increasing our security will make it harder for us to eavesdrop, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one. We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions. Cory Doctorow has a good reaction. Source
  13. In the decade after the 9/11 attacks, the New York City Police Department moved to put millions of New Yorkers under constant watch. Warning of terrorism threats, the department created a plan to carpet Manhattan’s downtown streets with thousands of cameras and had, by 2008, centralized its video surveillance operations to a single command center. Two years later, the NYPD announced that the command center, known as the Lower Manhattan Security Coordination Center, had integrated cutting-edge video analytics software into select cameras across the city. The video analytics software captured stills of individuals caught on closed-circuit TV footage and automatically labeled the images with physical tags, such as clothing color, allowing police to quickly search through hours of video for images of individuals matching a description of interest. At the time, the software was also starting to generate alerts for unattended packages, cars speeding up a street in the wrong direction, or people entering restricted areas. Over the years, the NYPD has shared only occasional, small updates on the program’s progress. In a 2011 interview with Scientific American, for example, Inspector Salvatore DiPace, then commanding officer of the Lower Manhattan Security Initiative, said the police department was testing whether the software could box out images of people’s faces as they passed by subway cameras and subsequently cull through the images for various unspecified “facial features.” While facial recognition technology, which measures individual faces at over 16,000 points for fine-grained comparisons with other facial images, has attracted significant legal scrutiny and media attention, this object identification software has largely evaded attention. How exactly this technology came to be developed and which particular features the software was built to catalog have never been revealed publicly by the NYPD. Now, thanks to confidential corporate documents and interviews with many of the technologists involved in developing the software, The Intercept and the Investigative Fund have learned that IBM began developing this object identification technology using secret access to NYPD camera footage. With access to images of thousands of unknowing New Yorkers offered up by NYPD officials, as early as 2012, IBM was creating new search features that allow other police departments to search camera footage for images of people by hair color, facial hair, and skin tone. IBM declined to comment on its use of NYPD footage to develop the software. However, in an email response to questions, the NYPD did tell The Intercept that “Video, from time to time, was provided to IBM to ensure that the product they were developing would work in the crowded urban NYC environment and help us protect the City. There is nothing in the NYPD’s agreement with IBM that prohibits sharing data with IBM for system development purposes. Further, all vendors who enter into contractual agreements with the NYPD have the absolute requirement to keep all data furnished by the NYPD confidential during the term of the agreement, after the completion of the agreement, and in the event that the agreement is terminated.” In an email to The Intercept, the NYPD confirmed that select counterterrorism officials had access to a pre-released version of IBM’s program, which included skin tone search capabilities, as early as the summer of 2012. 
NYPD spokesperson Peter Donald said the search characteristics were only used for evaluation purposes and that officers were instructed not to include the skin tone search feature in their assessment. The department eventually decided not to integrate the analytics program into its larger surveillance architecture, and phased out the IBM program in 2016. After testing out these bodily search features with the NYPD, IBM released some of these capabilities in a 2013 product release. Later versions of IBM’s software retained and expanded these bodily search capabilities. (IBM did not respond to a question about the current availability of its video analytics programs.) Asked about the secrecy of this collaboration, the NYPD said that “various elected leaders and stakeholders” were briefed on the department’s efforts “to keep this city safe,” adding that sharing camera access with IBM was necessary for the system to work. IBM did not respond to a question about why the company didn’t make this collaboration public. Donald said IBM gave the department licenses to apply the system to 512 cameras, but said the analytics were tested on “fewer than fifty.” He added that IBM personnel had access to certain cameras for the sole purpose of configuring NYPD’s system, and that the department put safeguards in place to protect the data, including “non-disclosure agreements for each individual accessing the system; non-disclosure agreements for the companies the vendors worked for; and background checks.” Civil liberties advocates contend that New Yorkers should have been made aware of the potential use of their physical data for a private company’s development of surveillance technology. The revelations come as a city council bill that would require NYPD transparency about surveillance acquisitions continues to languish, due, in part, to outspoken opposition from New York City Mayor Bill de Blasio and the NYPD. Skin Tone Search Technology, Refined on New Yorkers IBM’s initial breakthroughs in object recognition technology were envisioned for technologies like self-driving cars or image recognition on the internet, said Rick Kjeldsen, a former IBM researcher. But after 9/11, Kjeldsen and several of his colleagues realized their program was well suited for counterterror surveillance. “After 9/11, the funding sources and the customer interest really got driven toward security,” said Kjeldsen, who said he worked on the NYPD program from roughly 2009 through 2013. “Even though that hadn’t been our focus up to that point, that’s where demand was.” IBM’s first major urban video surveillance project was with the Chicago Police Department and began around 2005, according to Kjeldsen. The department let IBM experiment with the technology in downtown Chicago until 2013, but the collaboration wasn’t seen as a real business partnership. “Chicago was always known as, it’s not a real — these guys aren’t a real customer. This is kind of a development, a collaboration with Chicago,” Kjeldsen said. “Whereas New York, these guys were a customer. And they had expectations accordingly.” The NYPD acquired IBM’s video analytics software as one part of the Domain Awareness System, a shared project of the police department and Microsoft that centralized a vast web of surveillance sensors in lower and midtown Manhattan — including cameras, license plate readers, and radiation detectors — into a unified dashboard. 
IBM entered the picture as a subcontractor to Microsoft subsidiary Vexcel in 2007, as part of a project worth $60.7 million over six years, according to the internal IBM documents. In New York, the terrorist threat “was an easy selling point,” recalled Jonathan Connell, an IBM researcher who worked on the initial NYPD video analytics installation. “You say, ‘Look what the terrorists did before, they could come back, so you give us some money and we’ll put a camera there.” A former NYPD technologist who helped design the Lower Manhattan Security Initiative, asking to speak on background citing fears of professional reprisal, confirmed IBM’s role as a “strategic vendor.” “In our review of video analytics vendors at that time, they were well ahead of everyone else in my personal estimation,” the technologist said. According to internal IBM planning documents, the NYPD began integrating IBM’s surveillance product in March 2010 for the Lower Manhattan Security Coordination Center, a counterterrorism command center launched by Police Commissioner Ray Kelly in 2008. In a “60 Minutes” tour of the command center in 2011, Jessica Tisch, then the NYPD’s director of policy and planning for counterterrorism, showed off the software on gleaming widescreen monitors, demonstrating how it could pull up images and video clips of people in red shirts. Tisch did not mention the partnership with IBM. During Kelly’s tenure as police commissioner, the NYPD quietly worked with IBM as the company tested out its object recognition technology on a select number of NYPD and subway cameras, according to IBM documents. “We really needed to be able to test out the algorithm,” said Kjeldsen, who explained that the software would need to process massive quantities of diverse images in order to learn how to adjust to the differing lighting, shadows, and other environmental factors in its view. “We were almost using the video for both things at that time, taking it to the lab to resolve issues we were having or to experiment with new technology,” Kjeldsen said. At the time, the department hoped that video analytics would improve analysts’ ability to identify suspicious objects and persons in real time in sensitive areas, according to Conor McCourt, a retired NYPD counterterrorism sergeant who said he used IBM’s program in its initial stages. “Say you have a suspicious bag left in downtown Manhattan, as a person working in the command center,” McCourt said. “It could be that the analytics saw the object sitting there for five minutes, and says, ‘Look, there’s an object sitting there.’” Operators could then rewind the video or look at other cameras nearby, he explained, to get a few possibilities as to who had left the object behind. Over the years, IBM employees said, they started to become more concerned as they worked with the NYPD to allow the program to identify demographic characteristics. By 2012, according to the internal IBM documents, researchers were testing out the video analytics software on the bodies and faces of New Yorkers, capturing and archiving their physical data as they walked in public or passed through subway turnstiles. With these close-up images, IBM refined its ability to search for people on camera according to a variety of previously undisclosed features, such as age, gender, hair color (called “head color”), the presence of facial hair — and skin tone. 
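To illustrate the kind of attribute-tag search described here, the following is a short, hypothetical Python sketch: detections are stored as dictionaries of physical tags and filtered against a description of interest. The schema and field names are invented for illustration and are not IBM's actual product interface.

```python
# Hypothetical sketch of attribute-tag search over video detections.
# Field names and tags are illustrative, not any vendor's real schema.
from dataclasses import dataclass, field


@dataclass
class Detection:
    camera_id: str
    timestamp: str  # e.g. "2012-07-14T09:32:10"
    tags: dict = field(default_factory=dict)  # e.g. {"clothing_color": "red"}


def search(detections, **wanted_tags):
    """Return detections whose tags match every requested attribute."""
    return [
        d for d in detections
        if all(d.tags.get(key) == value for key, value in wanted_tags.items())
    ]


if __name__ == "__main__":
    log = [
        Detection("cam-12", "2012-07-14T09:32:10", {"clothing_color": "red", "bag": True}),
        Detection("cam-07", "2012-07-14T09:35:41", {"clothing_color": "blue", "bag": False}),
    ]
    for hit in search(log, clothing_color="red"):
        print(hit.camera_id, hit.timestamp)
```

The filtering itself is trivial; what the article's sources worry about is which attributes get extracted and stored in the first place.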
The documents reference meetings between NYPD personnel and IBM researchers to review the development of body identification searches conducted at subway turnstile cameras. “We were certainly worried about where the heck this was going,” recalled Kjeldsen. “There were a couple of us that were always talking about this, you know, ‘If this gets better, this could be an issue.’” According to the NYPD, counterterrorism personnel accessed IBM’s bodily search feature capabilities only for evaluation purposes, and they were accessible only to a handful of counterterrorism personnel. “While tools that featured either racial or skin tone search capabilities were offered to the NYPD, they were explicitly declined by the NYPD,” Donald, the NYPD spokesperson, said. “Where such tools came with a test version of the product, the testers were instructed only to test other features (clothing, eyeglasses, etc.), but not to test or use the skin tone feature. That is not because there would have been anything illegal or even improper about testing or using these tools to search in the area of a crime for an image of a suspect that matched a description given by a victim or a witness. It was specifically to avoid even the suggestion or appearance of any kind of technological racial profiling.” The NYPD ended its use of IBM’s video analytics program in 2016, Donald said. Donald acknowledged that, at some point in 2016 or early 2017, IBM approached the NYPD with an upgraded version of the video analytics program that could search for people by ethnicity. “The Department explicitly rejected that product,” he said, “based on the inclusion of that new search parameter.” In 2017, IBM released Intelligent Video Analytics 2.0, a product with a body camera surveillance capability that allows users to detect people captured on camera by “ethnicity” tags, such as “Asian,” “Black,” and “White.” Kjeldsen, the former IBM researcher who helped develop the company’s skin tone analytics with NYPD camera access, said the department’s claim that the NYPD simply tested and rejected the bodily search features was misleading. “We would have not explored it had the NYPD told us, ‘We don’t want to do that,’” he said. “No company is going to spend money where there’s not customer interest.” Kjeldsen also added that the NYPD’s decision to allow IBM access to their cameras was crucial for the development of the skin tone search features, noting that during that period, New York City served as the company’s “primary testing area,” providing the company with considerable environmental diversity for software refinement. “The more different situations you can use to develop your software, the better it’s going be,” Kjeldsen said. “That obviously pertains to people, skin tones, whatever it is you might be able to classify individuals as, and it also goes for clothing.” The NYPD’s cooperation with IBM has since served as a selling point for the product at California State University, Northridge. There, campus police chief Anne Glavin said the technology firm IXP helped sell her on IBM’s object identification product by citing the NYPD’s work with the company. “They talked about what it’s done for New York City. IBM was very much behind that, so this was obviously of great interest to us,” Glavin said. Day-to-Day Policing, Civil Liberties Concerns The NYPD-IBM video analytics program was initially envisioned as a counterterrorism tool for use in midtown and lower Manhattan, according to Kjeldsen. 
However, the program was integrated during its testing phase into dozens of cameras across the city. According to the former NYPD technologist, it could have been integrated into everyday criminal investigations. “All bureaus of the department could make use of it,” said the former technologist, potentially helping detectives investigate everything from sex crimes to fraud cases. Kjeldsen spoke of cameras being placed at building entrances and near parking entrances to monitor for suspicious loiterers and abandoned bags. Donald, the NYPD spokesperson, said the program’s access was limited to a small number of counterterrorism officials, adding, “We are not aware of any case where video analytics was a factor in an arrest or prosecution.” Campus police at California State University, Northridge, who adopted IBM’s software, said the bodily search features have been helpful in criminal investigations. Asked about whether officers have deployed the software’s ability to filter through footage for suspects’ clothing color, hair color, and skin tone, Captain Scott VanScoy at California State University, Northridge, responded affirmatively, relaying a story about how university detectives were able to use such features to quickly filter through their cameras and find two suspects in a sexual assault case. “We were able to pick up where they were at different locations from earlier that evening and put a story together, so it saves us a ton of time,” Vanscoy said. “By the time we did the interviews, we already knew the story and they didn’t know we had known.” Glavin, the chief of the campus police, added that surveillance cameras using IBM’s software had been placed strategically across the campus to capture potential security threats, such as car robberies or student protests. “So we mapped out some CCTV in that area and a path of travel to our main administration building, which is sometimes where people will walk to make their concerns known and they like to stand outside that building,” Glavin said. “Not that we’re a big protest campus, we’re certainly not a Berkeley, but it made sense to start to build the exterior camera system there.” Civil liberties advocates say they are alarmed by the NYPD’s secrecy in helping to develop a program with the potential capacity for mass racial profiling. The identification technology IBM built could be easily misused after a major terrorist attack, argued Rachel Levinson-Waldman, senior counsel in the Brennan Center’s Liberty and National Security Program. “Whether or not the perpetrator is Muslim, the presumption is often that he or she is,” she said. “It’s easy to imagine law enforcement jumping to a conclusion about the ethnic and religious identity of a suspect, hastily going to the database of stored videos and combing through it for anyone who meets that physical description, and then calling people in for questioning on that basis.” IBM did not comment on questions about the potential use of its software for racial profiling. 
However, the company did send a comment to The Intercept pointing out that it was “one of the first companies anywhere to adopt a set of principles for trust and transparency for new technologies, including AI systems.” The statement continued on to explain that IBM is “making publicly available to other companies a dataset of annotations for more than a million images to help solve one of the biggest issues in facial analysis — the lack of diverse data to train AI systems.” Few laws clearly govern object recognition or the other forms of artificial intelligence incorporated into video surveillance, according to Clare Garvie, a law fellow at Georgetown Law’s Center on Privacy and Technology. “Any form of real-time location tracking may raise a Fourth Amendment inquiry,” Garvie said, citing a 2012 Supreme Court case, United States v. Jones, that involved police monitoring a car’s path without a warrant and resulted in five justices suggesting that individuals could have a reasonable expectation of privacy in their public movements. In addition, she said, any form of “identity-based surveillance” may compromise people’s right to anonymous public speech and association. Garvie noted that while facial recognition technology has been heavily criticized for the risk of false matches, that risk is even higher for an analytics system “tracking a person by other characteristics, like the color of their clothing and their height,” that are not unique characteristics. The former NYPD technologist acknowledged that video analytics systems can make mistakes, and noted a study where the software had trouble characterizing people of color: “It’s never 100 percent.” But the program’s identification of potential suspects was, he noted, only the first step in a chain of events that heavily relies on human expertise. “The technology operators hand the data off to the detective,” said the technologist. “You use all your databases to look for potential suspects and you give it to a witness to look at. … This is all about finding a way to shorten the time to catch the bad people.” Object identification programs could also unfairly drag people into police suspicion just because of generic physical characteristics, according to Jerome Greco, a digital forensics staff attorney at the Legal Aid Society, New York’s largest public defenders organization. “I imagine a scenario where a vague description, like young black male in a hoodie, is fed into the system, and the software’s undisclosed algorithm identifies a person in a video walking a few blocks away from the scene of an incident,” Greco said. “The police find an excuse to stop him, and, after the stop, an officer says the individual matches a description from the earlier incident.” All of a sudden, Greco continued, “a man who was just walking in his own neighborhood” could be charged with a serious crime without him or his attorney ever knowing “that it all stemmed from a secret program which he cannot challenge.” While the technology could be used for appropriate law enforcement work, Kjeldsen said that what bothered him most about his project was the secrecy he and his colleagues had to maintain. “We certainly couldn’t talk about what cameras we were using, what capabilities we were putting on cameras,” Kjeldsen said. 
“They wanted to control public perception and awareness of LMSI” — the Lower Manhattan Security Initiative — “so we always had to be cautious about even that part of it, that we’re involved, and who we were involved with, and what we were doing.” (IBM did not respond to a question about instructing its employees not to speak publicly about its work with the NYPD.) The way the NYPD helped IBM develop this technology without the public’s consent sets a dangerous precedent, Kjeldsen argued. “Are there certain activities that are nobody’s business no matter what?” he asked. “Are there certain places on the boundaries of public spaces that have an expectation of privacy? And then, how do we build tools to enforce that? That’s where we need the conversation. That’s exactly why knowledge of this should become more widely available — so that we can figure that out.” This article was reported in partnership with the Investigative Fund at the Nation Institute. Source
  14. from the result-of-asking-'why-not?'-rather-than-'why?' dept Reuters has a long, detailed examination of the Chinese surveillance state. China's intrusion into the lives of its citizens has never been minimal, but advances in technology have allowed the government to keep tabs on pretty much every aspect of citizens' lives. Facial recognition has been deployed at scale and it's not limited to finding criminals. It's used to identify regular citizens as they go about their daily lives. This is paired with license plate readers and a wealth of information gathered from online activity to provide the government dozens of data points for every citizen that wanders into the path of its cameras. Other biometric information is gathered and analyzed to help the security and law enforcement agencies better pin down exactly who it is they're looking at. But it goes further than that. The Chinese version of stop-and-frisk involves "patting down" cellphones for illegal content or evidence of illegal activities. China is home to several companies offering phone cracking services and forensic software. It's not only Cellebrite and Grayshift, although these two are best known for selling tech to US law enforcement. Not that phone cracking is really a necessity in China. Most citizens hand over passwords when asked, considering the alternative isn't going to be a detainment while a warrant is sought. The option is far more likely to be something like a trip to a modern dungeon for a little conversational beating. What's notable about this isn't the tech. This tech is everywhere. US law enforcement has access to much of this, minus the full-blown facial recognition and other biometric tracking. (That's on its way, though.) Plate readers, forensic devices, numerous law enforcement databases, social media tracking software… these are all in use already. Much of what China has deployed is being done in the name of security. That's the same justification for the massive surveillance apparatus erected after the 2001 attacks. The framework for a totalitarian state is already in place. The only thing separating us from China is our Constitutional rights. Whenever you hear a US government official lamenting perps walking on technicalities or encryption making it tough to lock criminals up, keep in mind the alternative is China: a full-blown police state stocked to the teeth with surveillance tech. Source
  15. Researchers believe a new encryption technique may be key to maintaining a balance between user privacy and government demands. For governments worldwide, encryption is a thorn in the side in the quest for surveillance, for cracking suspected criminals' phones, and for monitoring communication. Officials are applying pressure on technology firms and app developers which provide end-to-end encryption services to provide a way for police forces to break that encryption. However, the moment you provide a backdoor into such services, you are creating a weak point that not only law enforcement and governments can use -- assuming that tunneling into a handset and monitoring it is even within legal bounds -- but also threat actors, undermining the security of encryption as a whole. As the mass surveillance and data collection activities of the US National Security Agency hit the headlines, faith in governments and their ability to restrain such spying to genuine cases of criminality began to weaken. Now, the use of encryption and secure communication channels is ever-more popular, technology firms are resisting efforts to implant deliberate weaknesses in encryption protocols, and neither side wants to budge. What can be done? From the outset, something has got to give. However, researchers from Boston University believe they may have come up with a solution. On Monday, the team said they have developed a new encryption technique which will give authorities some access to communication, but without providing unlimited access in practice. In other words, a middle ground -- a way to break encryption to placate law enforcement, but not to the extent that mass surveillance on the general public is possible. Mayank Varia, Research Associate Professor at Boston University and cryptography expert, has developed the new technique, known as cryptographic "crumpling." In a paper documenting the research, lead author Varia says that the new cryptography methods could be used for "exceptional access" to encrypted data for government purposes while keeping user privacy at large at a reasonable level. "Our approach places most of the responsibility for achieving exceptional access on the government, rather than on the users or developers of cryptographic tools," the paper notes. "As a result, our constructions are very simple and lightweight, and they can be easily retrofitted onto existing applications and protocols." The crumpling techniques use two approaches -- the first being a Diffie-Hellman key exchange over modular arithmetic groups which leads to an "extremely expensive" puzzle which must be solved to break the protocol, and the second a "hash-based proof of work to impose a linear cost on the adversary for each message" to recover. Crumpling requires strong, modern cryptography as a precondition, since such cryptography allows per-message encryption keys and detailed key management. The system requires this infrastructure so a small number of messages can be targeted without full-scale exposure. The team says that this condition will also only permit "passive" decryption attempts, rather than man-in-the-middle (MiTM) attacks. By introducing cryptographic puzzles into the generation of per-message cryptographic keys, the keys remain possible to decrypt, but doing so will require vast resources. In addition, each puzzle will be chosen independently for each key, which means "the government must expend effort to solve each one."
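To give a feel for that cost structure, here is a minimal, hypothetical Python sketch of a hash-based key puzzle in the same spirit. It is not the construction from the Boston University paper: it simply derives a per-message key from a public salt plus a short withheld value, so recovering the key requires a brute-force search whose expense is tuned by the puzzle length.

```python
# Toy "crumpled" per-message key: recoverable, but only by paying for a brute-force search.
# Illustrative only; not the scheme from the paper, and far weaker than real parameters.
import hashlib
import os
import secrets

PUZZLE_BITS = 18  # a few seconds of laptop time; real proposals aim for costs near $1M per message


def make_crumpled_key():
    salt = os.urandom(16)                    # public
    puzzle = secrets.randbits(PUZZLE_BITS)   # withheld; this is what a solver must brute-force
    key = hashlib.sha256(salt + puzzle.to_bytes(8, "big")).digest()
    check = hashlib.sha256(b"check" + key).digest()  # lets a solver recognize the right key
    return key, salt, check


def recover_key(salt, check):
    """The per-message 'work': try every possible puzzle value until the check matches."""
    for guess in range(2 ** PUZZLE_BITS):
        candidate = hashlib.sha256(salt + guess.to_bytes(8, "big")).digest()
        if hashlib.sha256(b"check" + candidate).digest() == check:
            return candidate, guess + 1
    raise ValueError("puzzle not solved")


if __name__ == "__main__":
    key, salt, check = make_crumpled_key()
    recovered, attempts = recover_key(salt, check)
    assert recovered == key
    print(f"key recovered after {attempts} hash evaluations")
```

Scaling the withheld portion up is what turns a few seconds of laptop time into the per-message costs the researchers describe, which is the point of the design: access is possible, but never free and never in bulk.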
"Like a crumple zone in automotive engineering, in an emergency situation the construction should break a little bit in order to protect the integrity of the system as a whole and the safety of its human users," the paper notes. "We design a portion of our puzzles to match Bitcoin's proof of work computation so that we can predict their real-world marginal cost with reasonable confidence." To prevent unauthorized attempts to break encryption an "abrasion puzzle" serves as a gatekeeper which is more expensive to solve than individual key puzzles. While this would not necessarily deter state-sponsored threat actors, it may at least deter individual cyberattackers as the cost would not be worth the result. The new technique would allow governments to recover the plaintext for targeted messages, however, it would also be prohibitively expensive. A key length of 70 bits, for example -- with today's hardware -- would cost millions and force government agencies to choose their targets carefully and the expense would potentially prevent misuse. The research team estimates that the government could recover less than 70 keys per year with a budget of close to $70 million dollars upfront -- one million dollars per message and the full amount set out in the US' expanded federal budget to break encryption. However, there could also be additional costs of $1,000 to $1 million per message, and these kind of figures are difficult to conceal, especially as one message from a suspected criminal in a conversation without contextual data is unlikely to ever be enough to secure conviction. The research team says that crumpling can be adapted for use in common encryption services including PGP, Signal, as well as full-disk and file-based encryption. "We view this work as a catalyst that can inspire both the research community and the public at large to explore this space further," the researchers say. "Whether such a system will ever be (or should ever be) adopted depends less on technology and more on questions for society to answer collectively: whether to entrust the government with the power of targeted access and whether to accept the limitations on law enforcement possible with only targeted access." The research was funded by the National Science Foundation. Source
  16. part 1 (YET ANOTHER) WARNING .... Your online activities are now being tracked and recorded by various government and corporate entities around the world. This information can be used against you at any time and there is no real way to “opt out”. In the past decade, we have seen the systematic advancement of the surveillance apparatus throughout the world. The United States, United Kingdom, Australia, and Canada have all passed laws allowing, and in some cases forcing, telecom companies to bulk-collect your data: United States – In March 2017 the US Congress passed legislation that allows internet service providers to collect, store, and sell your private browsing history, app usage data, location information and more – without your consent. This essentially allows Comcast, Verizon, AT&T and other providers to monetize and sell their customers to the highest bidders (usually for targeted advertising). United Kingdom – In November 2016 the UK Parliament passed the infamous Snoopers Charter (Investigatory Powers Act) which forces internet providers and phone companies to bulk-collect customer data. This includes private browsing history, social media posts, phone calls, text messages, and more. This information is stored for 12 months in a giant database that is accessible to 48 different government agencies. The erosion of free speech is also rapidly underway as various laws allow UK authorities to lock up anyone they deem to be “offensive” (1984 is already here). Australia – In April 2017 the Australian government passed a massive data retention law that forces telecoms to collect and store text messages, phone calls, location information, and internet connection data for a full two years, with the data being accessible to authorities without a warrant. Canada, Europe, and other parts of the world have similar laws and policies already in place. What you are witnessing is the rapid expansion of the global surveillance state, whereby corporate and government entities work together to monitor and record everything you do. What the hell is going on here? Perhaps you are wondering why all this is happening. There is a simple answer to that question. Control Just like we have seen throughout history, government surveillance is simply a tool used for control. This could be for maintaining control of power, controlling a population, or controlling the flow of information in a society. You will notice that the violation of your right to privacy will always be justified by various excuses – from “terrorism” to tax evasion – but never forget, it’s really about control. Along the same lines, corporate surveillance is also about control. Collecting your data helps private entities control your buying decisions, habits, and desires. The tools for doing this are all around you: apps on your devices, social networks, tracking ads, and many free products which simply bulk-collect your data (when something is free, you are the product). This is why the biggest collectors of private data – Google and Facebook – are also the two businesses that completely dominate the online advertising industry. So to sum this up, advertising today is all about the buying and selling of individuals. But it gets even worse… Now we have the full-scale cooperation between government and corporate entities to monitor your every move. In other words, governments are now enlisting private corporations to carry out bulk data collection on entire populations. 
Your internet service provider is your adversary working on behalf of the surveillance state. This basic trend is happening in much of the world, but it has been well documented in the United States with the PRISM Program. So why should you care? Everything that’s being collected could be used against you today, or at any time in the future, in ways you may not be able to imagine. In many parts of the world, particularly in the UK, thought crime laws are already in place. If you do something that is deemed to be “offensive”, you could end up rotting away in a jail cell for years. Again, we have seen this tactic used throughout history for locking up dissidents – and it is alive and well in the Western world today. From a commercial standpoint, corporate surveillance is already being used to steal your data and hit you with targeted ads, thereby monetizing your private life. Reality check Many talking heads in the media will attempt to confuse you by pretending this is a problem with a certain politician or perhaps a political party. But that’s a bunch of garbage to distract you from the bigger truth. For decades, politicians from all sides (left and right) have worked hard to advance the surveillance agenda around the world. Again, it’s all about control, regardless of which puppet is in office. So contrary to what various groups are saying, you are not going to solve this problem by writing a letter to another politician or signing some online petition. Forget about it. Instead, you can take concrete steps right now to secure your data and protect your privacy. Restore Privacy is all about giving you the tools and information to do that. If you feel overwhelmed by all this, just relax. The privacy tools you need are easy to use no matter what level of experience you have. Arguably the most important privacy tool is a good VPN (virtual private network). A VPN will encrypt and anonymize your online activity by creating a secured tunnel between your computer and a VPN server. This makes your data and online activities unreadable to government surveillance, your internet provider, hackers, and other third-party snoopers. A VPN will also allow you to spoof your location, hide your real IP address, and allow you to access blocked content from anywhere in the world. Check out the best VPN guide to get started. Stay safe! SOURCE
17. MOSCOW - Edward Snowden, who exposed extensive U.S. surveillance programs in 2013, warned this week that Japan may be moving closer to sweeping surveillance of ordinary citizens as the government eyes a legal change to enhance police powers in the name of counterterrorism.

"This is the beginning of a new wave of mass surveillance in Japan," the 33-year-old American said in an exclusive interview with Kyodo News while in exile in Russia, referring to a so-called anti-conspiracy bill that has stirred controversy in and outside Japan for its potential to undermine civil liberties.

The consequences could be even graver when combined with the use of a wide-reaching online data collection tool called XKEYSCORE, the former contractor for the U.S. National Security Agency said. He also gave credence to the authenticity of NSA documents published earlier this year by The Intercept, a U.S. online media outlet, showing that the agency's surveillance tool has already been shared with Japan.

Edward Snowden: Exclusive interview with Kyodo News

The remarks by the intelligence expert are the latest warning over the Japanese government's push to pass the controversial bill through parliament, which would criminalize the planning and preparatory actions of 277 serious crimes.

In an open letter addressed to Prime Minister Shinzo Abe in mid-May, a U.N. special rapporteur on the right to privacy stated that the bill could lead to undue restrictions of privacy and freedom of expression because of its potentially broad application -- a claim the Japanese government has strongly protested.

Snowden said he agrees with the U.N.-appointed expert, Joseph Cannataci, arguing the bill is "not well explained" and raises concerns that the government may have intentions other than its stated goal of cracking down on terrorism and organized crime ahead of the 2020 Tokyo Olympics.

The anti-conspiracy law proposed by the government "focuses on terrorism and everything else that's not related to terrorism -- things like taking plants from the forestry reserve," he said. "And the only real understandable answer (to the government's desire to pass the bill)...is that this is a bill that authorizes the use of surveillance in new ways because now everyone can be a criminal."

Based on his own experience of using XKEYSCORE, Snowden said authorities could become able to intercept everyone's communications, including those of people organizing political movements or protests, and put them "in a bucket." The records would simply be "pulled out of the bucket" whenever necessary, and the public would not be able to know whether such activities were carried out legally or secretly, because the bill lacks sufficient legal safeguards, Snowden said.

Snowden finds the current situation in Japan reminiscent of what he went through in the United States following the terror attacks of Sept. 11, 2001. In passing the Patriot Act, which strengthened the U.S. government's investigative powers in the wake of the attacks, the government said things similar to what the Japanese government is saying now, such as "these powers are not going to be targeted against ordinary citizens" and "we're only interested in finding al-Qaida and terrorists," according to Snowden.

But within a few short years of the enactment of the Patriot Act, the U.S. government was using the law secretly to "collect the phone records of everyone in the United States, and everyone around the world who they could access" through the largest phone companies in the United States, Snowden said, referring to the revelations made in 2013 through top-secret documents he leaked.

Even though it sacrifices civil liberties, mass surveillance is not effective, Snowden said. The U.S. government's own privacy watchdog concluded in a 2014 report that the NSA's massive telephone records program showed "minimal value" in safeguarding the nation from terrorism and should be ended.

On Japan's anti-conspiracy bill, Snowden said it should include strong guarantees of human rights and privacy and ensure that those guarantees are "not enforced through the words of politicians but through the actions of courts."

"This means in advance of surveillance, in all cases the government should seek an individualized warrant, and individualized authorization that this surveillance is lawful and appropriate in relationship to the threat that's presented by the police," he said.

He also said that allowing a government to get into the habit of collecting everyone's communications through powerful surveillance tools could dangerously change the power relationship between the public and the government to something closer to "subject and ruler," rather than the partnership it should be in a democracy.

People in Japan may not make much of what Snowden sees as the rise of new untargeted and indiscriminate mass surveillance, thinking that they have nothing to hide or fear. But he insists that privacy is not about having something to "hide" but about "protecting" an open and free society where people can be different and can have their own ideas.

Freedom of speech would not mean much if people did not have the space to figure out what they want to say, or to share their views with others they trust and develop them, before introducing them into the context of the world, he said. "When you say 'I don't care about privacy, because I've nothing to hide,' that's no different than saying you don't care about freedom of speech, because you've nothing to say," he added.

Snowden, who was dressed in a black suit, said toward the end of his more than 100-minute interview at a hotel in Moscow that living in exile is not "a lifestyle that anyone chooses voluntarily." He hopes to return home while continuing active exchanges online with people in various countries. "The beautiful thing about today is that I can be in every corner of the world every night. I speak at U.S. universities every month. It's important to understand that I don't really live in Moscow. I live on the internet," he said.

Snowden showed no regrets over taking the risk of becoming a whistleblower and being painted by his home country as a "criminal" or "traitor," facing espionage charges at home for his historic document leak. "It's scary as hell, but it's worth it. Because if we don't do it, if we see the truth of crimes or corruption in government, and we don't say something about it, we're not just making the world worse for our children, we're making the world worse for us, and we're making ourselves worse," he said.

Article source
18. Facebook Bans Devs From Creating Surveillance Tools With User Data

Without a hint of irony, Facebook has told developers that they may not use data from Instagram and Facebook in surveillance tools. The social network says the practice has long been a contravention of its policies, but that it is now tidying up and clarifying the wording of its developer policies.

The American Civil Liberties Union, Color of Change, and the Center for Media Justice put pressure on Facebook after it transpired that data from users' feeds was being gathered and sold on to law enforcement agencies. The re-written developer policy now explicitly states that developers are not allowed to "use data obtained from us to provide tools that are used for surveillance."

It remains to be seen just how much of a difference this will make to the gathering and use of data, and there is nothing to say that Facebook's own developers will not continue to engage in the same practices. Facebook's deputy chief privacy officer, Rob Sherman, commented on the change.

Transparency reports published by Facebook show that the company has complied with government requests for data. The secrecy shrouding such requests and dealings means there is no way of knowing whether Facebook is engaged in precisely the sort of activity it is banning others from performing.

Source
19. Legislation introduced today by New York City council members Dan Garodnick and Vanessa Gibson would finally compel the NYPD — one of the most technology-laden police forces in the country — to make public its rulebook for deploying its controversial surveillance arsenal. The bill, named the Public Oversight of Surveillance Technology (POST) act, would require the NYPD to detail how, when, and with what authority it uses technologies like Stingray devices, which can monitor and interfere with the cellular communications of an entire crowd at once.

Specifically, the department would have to publicize the "rules, processes and guidelines issued by the department regulating access to or use of such surveillance technology as well as any prohibitions or restrictions on use, including whether the department obtains a court authorization for each use of a surveillance technology, and what specific type of court authorization is sought." The NYPD would also have to say how it protects the gathered surveillance data itself (for example, X-ray imagery, or individuals captured in a facial recognition scan), and whether or not this data is shared with other governmental organizations. A period of public comment would follow these disclosures.

In a press release, the New York Civil Liberties Union, which has been instrumental in fighting to reveal the mere fact that the NYPD possesses devices like the Stingray, hailed the bill:

"Public awareness of how the NYPD conducts intrusive surveillance, especially the impacts on vulnerable New Yorkers, is critical to democracy. For too long the NYPD has been using technology that spies on cellphones, sees through buildings and follows your car under a shroud of secrecy, and the bill is a significant step out of the dark ages."

It's unclear whether the bill would apply to products that have both powerful surveillance and non-surveillance functionality, a la Palantir, but the legislation's definition of "surveillance technology" is sufficiently broad:

"The term 'surveillance technology' means equipment, software, or system capable of, or used or designed for, collecting, retaining, processing, or sharing audio, video, location, thermal, biometric, or similar information, that is operated by or at the direction of the department."

Though the bill might do little to curb the use of such technologies, it would at least give those on the sidewalk a better idea of how and when they're being watched, if not why. The NYPD did not immediately return a request for comment.

By Sam Biddle
https://theintercept.com/2017/03/01/new-bill-would-force-nypd-to-disclose-its-surveillance-tech-playbook/
20. The Tor Project, responsible for software that enables anonymous internet use and communication, is launching a new mobile app to detect internet censorship and surveillance around the world. The app, called "OONIProbe," alerts users to the blocking of websites, to censorship and surveillance systems, and to the speed of their networks. Slowing internet speeds to a crawl is one way governments censor content they deem illegal. The app also spells out how users might be able to circumvent the blockage.

Ooni on the iPhone

Operating under the Tor Project umbrella, the Open Observatory of Network Interference (OONI) is a global observation network that has been watching online censorship since 2012. Data from OONI has detected censorship in countries including Iran, Saudi Arabia, Turkey, South Korea, Greece, China, Russia, India, Indonesia and Sudan. The project watches over 100 countries and serves as a resource to journalists, lawyers, activists, researchers and people on the ground in countries where censorship is prevalent.

In 2016, internet censorship was used in countries like the African nation of Gabon during highly contested elections and subsequent protests. To stop citizens from sharing videos of election irregularities, the country's internet was shut down for four days. Earlier in 2016, Uganda engaged in similar widespread censorship. Both countries at times denied their actions, making tools like OONI ever more valuable.

"What Signal did for end-to-end encryption, OONI did for unmasking censorship," Moses Karanja, a Kenyan researcher on the politics of information controls at Strathmore University's CIPIT, said in a statement. "Most Africans rely on mobile phones as their primary means of accessing the internet and OONI's mobile app allows for decentralized efforts in unmasking the nature of censorship and internet performance. The possibilities are exciting for researchers, business and the human rights community around the world. We look forward to interesting days ahead."

Internet freedom declined for the sixth year in a row in 2016, according to a report from Freedom House, making censorship and surveillance transparency a high priority for activists looking to turn back that momentum. Twenty-four governments blocked access to social media sites and communication services in 2016, compared with 15 governments the year before, according to Freedom House. Internet freedom fell most precipitously in Uganda, Bangladesh, Cambodia, Ecuador and Libya.

Several countries, including Egypt and the United Arab Emirates, reportedly tried to block Signal, the increasingly popular encrypted messenger developed in the United States. That's part of a global trend that has seen governments go after apps like WhatsApp and Telegram in an effort to stymie secure communications.

"Never before has it been so easy to uncover evidence of internet censorship," Arturo Filastò, OONI's project lead and core developer, said in a statement. "By simply owning a smartphone (and running ooniprobe), you can now play an active role in increasing transparency around internet controls."

The app will be available on the Google Play and iOS app stores this week, according to Tor Project spokeswoman Kate Krauss.

Article source
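The article above describes, at a high level, what a censorship probe has to do: check whether a site's name resolves, whether the page actually loads, and how long it takes. The Python sketch below is only a toy illustration of that basic reachability test, not OONI's actual methodology (ooniprobe additionally compares results against control measurements and known blockpage fingerprints); the site list and timeout are arbitrary values chosen for the example.

```python
# Naive reachability probe: for each site, try DNS resolution and an HTTPS fetch,
# and record failures that *might* indicate blocking. Real censorship measurement
# also compares against control measurements taken from unfiltered vantage points;
# this sketch does not.

import socket
import time
import urllib.request
from urllib.parse import urlparse

TEST_SITES = ["https://www.wikipedia.org", "https://twitter.com"]  # example list only


def probe(url: str, timeout: float = 10.0) -> dict:
    """Try DNS resolution and an HTTPS fetch; record status, timing, or the error seen."""
    host = urlparse(url).hostname
    result = {"url": url, "dns_ok": False, "http_status": None, "seconds": None, "error": None}
    try:
        socket.getaddrinfo(host, 443)  # does the name resolve at all?
        result["dns_ok"] = True
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            result["http_status"] = resp.status
        result["seconds"] = round(time.monotonic() - start, 2)
    except Exception as exc:  # timeouts, resets, DNS failures, TLS errors, HTTP errors
        result["error"] = type(exc).__name__
    return result


if __name__ == "__main__":
    for site in TEST_SITES:
        print(probe(site))
```

A single failed fetch proves nothing by itself – outages, local firewalls, and flaky Wi-Fi look identical to blocking – which is exactly why projects like OONI aggregate many measurements from many vantage points before drawing conclusions.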
21. Four in Five Britons Fearful Trump Will Abuse their Data

More than three-quarters of Britons believe incoming US President Donald Trump will use his surveillance powers for personal gain, and a similar number want reassurances from the government that data collected by GCHQ will be safeguarded against such misuse.

These are the headline findings from a new Privacy International poll of over 1,600 Brits on the day Trump is inaugurated as the 45th President of the most powerful nation on earth. With that role comes sweeping surveillance powers – the extent of which was only revealed after NSA whistleblower Edward Snowden went public in 2013.

There are many now concerned that Trump, an eccentric reality TV star and gregarious property mogul, could abuse such powers for personal gain. That's what 78% of UK adults polled by Privacy International believe, and 54% said they had no trust that Trump would use surveillance for legitimate purposes.

Perhaps more important for those living in the United Kingdom is the extent of the information sharing partnership between the US and the UK. Some 73% of respondents said they wanted the government to explain what safeguards exist to ensure any data swept up by their domestic secret services doesn't end up being abused by the new US administration.

That fear has become even more marked since the passage of the Investigatory Powers Act or 'Snoopers' Charter', which granted the British authorities unprecedented mass surveillance and hacking powers, as well as forcing ISPs to retain all web records for up to 12 months.

Privacy International claimed that although it has privately been presented with documents detailing the info sharing partnership between the two nations, Downing Street has so far refused to make the information public. The rights group and nine others are currently appealing to the European Court of Human Rights to overturn a decision by the Investigatory Powers Tribunal (IPT) not to release information about the rules governing the US-UK agreement.

"UK and the US spies have enjoyed a cosy secret relationship for a long time, sharing sensitive intelligence data with each other, without parliament knowing anything about it, and without any public consent. Slowly, we're learning more about the staggering scale of this cooperation and a dangerous lack of sufficient oversight," argued Privacy International research officer, Edin Omanovic.

"Today, a new President will take charge of US intelligence agencies – a President whose appetite for surveillance powers and how they're used put him at odds with British values, security, and its people… Given that our intelligence agencies are giving him unfettered access to massive troves of personal data, including potentially about British people, it is essential that the details behind all this are taken out of the shadows."

Source
22. Mozilla: The Internet Is Unhealthy And Urgently Needs Your Help

Mozilla argues that the internet's decentralized design is under threat from a few key players – Google, Facebook, Apple, Tencent, Alibaba and Amazon – monopolizing messaging, commerce, and search. Can the internet as we know it survive the many efforts to dominate and control it?

Much of the internet is in a perilous state, and we, its citizens, all need to help save it, says Mark Surman, executive director of Firefox maker the Mozilla Foundation. We may be in awe of the web's rise over the past 30 years, but Surman highlights numerous signs that the internet is dangerously unhealthy, from last year's Mirai botnet attacks, to market concentration, government surveillance and censorship, data breaches, and policies that smother innovation.

"I wonder whether this precious public resource can remain safe, secure and dependable. Can it survive?" Surman asks.

"These questions are even more critical now that we move into an age where the internet starts to wrap around us, quite literally," he adds, pointing to the Internet of Things, autonomous systems, and artificial intelligence. In this world, we don't use a computer, "we live inside it", he adds. "How [the internet] works -- and whether it's healthy -- has a direct impact on our happiness, our privacy, our pocketbooks, our economies and democracies."

Surman's call to action coincides with nonprofit Mozilla's first 'prototype' of the Internet Health Report, which looks at the healthy and unhealthy trends shaping the internet. Its five key areas are open innovation, digital inclusion, decentralization, privacy and security, and web literacy. Mozilla will launch the first full report after October, once it has incorporated feedback on the prototype.

That there are over 1.1 billion websites today, running mostly on open-source software, is a positive sign for open innovation. However, Mozilla says the internet is "constantly dodging bullets" from bad policy, such as outdated copyright laws, secretly negotiated trade agreements, and restrictive digital-rights management. Similarly, while mobile has helped put more than three billion people online today, there were 56 internet shutdowns last year, up from 15 shutdowns in 2015, it notes.

Mozilla fears that the internet's decentralized design, while flourishing and protected by laws, is under threat from the handful of companies named above, which are consolidating messaging, commerce and search. "While these companies provide hugely valuable services to billions of people, they are also consolidating control over human communication and wealth at a level never before seen in history," it says.

Mozilla approves of the wider adoption of encryption on the web and in communications, but highlights the emergence of new surveillance laws, such as the UK's so-called Snoopers' Charter. It also cites as a concern the Mirai malware behind last year's DDoS attacks, which abused unsecured webcams and other IoT devices, and is calling for safety standards, rules and accountability measures.

The report also draws attention to the policy focus on web literacy in the context of learning how to code or use a computer, which ignores other literacy skills, such as the ability to spot fake news and to separate ads from search results.
Source
Alternate Source - 1: Mozilla’s First Internet Health Report Tackles Security, Privacy
Alternate Source - 2: Mozilla Wants Infosec Activism To Be The Next Green Movement
23. Chinese Citizens Can Be Tracked In Real Time

A group of researchers has revealed that the Chinese government is collecting data on its citizens to such an extent that their movements can even be tracked in real time through their mobile devices. The discovery was made by The Citizen Lab at the University of Toronto's Munk School of Global Affairs, which specializes in studying the ways in which information technology affects personal and human rights worldwide.

It has been known for some time that the Chinese government employs a number of invasive tactics to keep itself fully aware of the lives of its citizens. What Citizen Lab has now found is that the government monitors its populace through apps and services designed and run by the private sector.

The discovery was made when the researchers began exploring Tencent's popular chat app WeChat, which has some 800 million monthly active users and is installed on the devices of almost every Chinese citizen. Citizen Lab found that the app not only helps the government censor chats between users but is also being used as a state surveillance tool. WeChat's restrictions even remain active for Chinese students studying abroad.

Ronald Deibert, a researcher at Citizen Lab, offered further insight on the team's discovery, saying: "What the government has managed to do, I think quite successfully, is download the controls to the private sector, to make it incumbent upon them to police their own networks".

To make matters worse, the data collected by WeChat and other Chinese apps and services is currently being sold online. An investigation led by the Guangzhou Southern Metropolis Daily found that large amounts of personal data on nearly anyone could be purchased online for a little over a hundred US dollars. The newspaper also found a service that offered the ability to track users in real time via their mobile devices.

Users traveling to China anytime soon should be extra cautious about their activities online and should think twice before installing WeChat during their stay.

Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.

Source