Search the Community

Showing results for tags 'Surveillance'.
Found 35 results

  1. Purchasing devices that constantly monitor, track and record us for convenience or a sense of safety is laying the foundation for an oppressive future.

Image: A Ring doorbell camera is seen at a home in Wolcott, Conn., on July 16, 2019.

In his book “1984,” George Orwell imagined that Big Brother-type surveillance would be imposed on us by a violent state. But it’s becoming clear that, in fact, we are eager participants in building our own dystopia, purchasing the very devices that constantly monitor, track and record us simply for our own convenience or a sense of safety. In the process, we’re laying the foundations for a bleak and oppressive future, because corporate surveillance sold for consumer purposes can just as easily be used as a tool for tyranny — and history shows us that it will.

And our self-surveillance devices have already begun to betray us. Take Ashley LeMay, who bought an Amazon Ring surveillance camera because she thought it would keep her family safe. Instead, a grown man hacked into the camera she had placed in the bedroom of her three young daughters. He used it to stalk the children and even spoke to 8-year-old Alyssa through the camera, saying “I’m Santa Claus. Don’t you want to be my best friend?”

It was terrifying, and it wasn’t an isolated incident. A family in Florida also had their Ring camera hacked by someone who broadcast the whole thing live on a podcast. He monitored the couple before starting to harass them, shouting racist epithets and activating their alarm. A woman in Georgia who installed a Ring to monitor her dog discovered that it had been hacked four separate times after a man spoke to her through the camera, saying “I can see you in bed.” Someone threatened a couple in Texas through their Ring, demanding a ransom in bitcoin.

Amazon claims that these chilling incidents were not caused by a security lapse on the company’s end. But that’s just not true. The company sells cheap, insecure, internet-connected cameras knowing full well the dangers associated with these devices. And it doesn’t even require users to protect their cameras with strong passwords or two-factor authentication — basic security measures that should be the default for devices that collect sensitive information, let alone broadcast footage from inside people’s homes. Amazon, though, seems more interested in scaring people into buying its devices than in keeping those devices safe. Investigative journalists have uncovered a pattern of lax security and privacy practices: Ring doorbells have leaked their owners’ home Wi-Fi passwords, and the associated Neighbors app has exposed users’ home addresses.

And Amazon’s surveillance doorbell cameras are just the start: The company is selling a multitude of other gadgets equipped with microphones, cameras and sensors, all designed to gather enormous amounts of data, which Amazon refines — like crude oil — into power and profit. Alexa, the company’s “home assistant,” is constantly listening, and explosive investigations revealed that Amazon employees have also listened, via Alexa, not just to private conversations, but also to deeply intimate moments such as couples having sex or kids singing in the shower. Amazon even marketed its microphone-enabled Echo Dot Kids directly to children.

Amazon devices aren’t just giving hackers and Amazon employees an eye or an ear into our homes.
The company has also partnered with more than 600 police departments across the country to tap into the surveillance network its customers are creating for them, developing a seamless process for government agents to request footage from tens of thousands of Ring cameras without warrants or any judicial oversight. In exchange for their VIP access to footage from Amazon customers, police officers act as an extension of Amazon’s marketing department by encouraging residents to buy the spy doorbells, promoting them on official city social media accounts and, in some cases, even using taxpayer dollars to subsidize private purchases of them.

The societal implications of these private surveillance partnerships are staggering: Reports show a disturbing trend of racial profiling, with police flooded by false reports of “suspicious” individuals. Ring claims to care about the safety of its customers, but the company doesn’t even pretend to care about the safety of neighbors or community members who are disproportionately targeted by law enforcement based on the color of their skin.

It’s easy to feel like a world without human privacy is inevitable, or that it’s too late to stop the end of privacy; Silicon Valley elites are, in fact, counting on us feeling that way, taking our cynicism and apathy to the bank. But there’s a difference between posting on social media or browsing the web knowing that your data is being harvested, and a world where there are devices, whether Amazon’s Ring or Google’s Nest, literally watching and listening to us everywhere we go. We can’t go back in time to a world without smartphones or CCTV cameras. But we can avoid diving headlong into a nightmarish future.

Spurred by journalism and grassroots action, elected officials at the local, state and federal levels have started asking questions about the dangers associated with Amazon’s surveillance-driven business model. Tens of thousands of people have called for a congressional investigation into that business model, and dozens of civil rights groups have spoken out about the ways that Amazon’s Ring partnerships with local police departments exacerbate existing forms of discrimination.

And many Ring cameras are placed on people’s front doors, capturing endless hours of video footage of innocent passersby, including children — most of whom are unaware that they are being surveilled and have not consented to being recorded. Amazon has openly admitted that there are no safeguards in place to prevent abuse once footage is shared with law enforcement, and that the government can store that footage indefinitely or share it with federal agencies, like Immigration and Customs Enforcement, at will. There seems to be no end to Amazon’s surveillance plans — the company also sells facial recognition software to governments, claiming that it’s powerful enough to detect human emotions such as fear, and has toyed with the idea of including the software in Ring cameras.

This week, my organization, Fight for the Future, is joining other consumer privacy and civil liberties experts in issuing an official product warning encouraging people not to buy Amazon Ring cameras because of the clear threat that they pose to all of our privacy, safety, and security. For too long, we’ve been sold a false choice between privacy and security. It’s clearer every day that more surveillance does not mean more safety, especially for the most vulnerable.
Talk to your family and friends and encourage them to do their research before putting any private company’s surveillance devices on their doors or in their homes. In the end, companies like Amazon and Google don’t care about keeping our communities safe; they care about making money.

Source
  2. Russia is slowly building its own Great Firewall model, centralizing internet traffic through government servers.

Today, a new “internet sovereignty” law entered into effect in Russia, a law that grants the government the ability to disconnect the entire country from the global internet. The law was formally approved by President Putin back in May. The Kremlin government cited the need to have the ability to disconnect Russia’s cyberspace from the rest of the world in the event of a national emergency or foreign threat, such as a cyberattack.

In order to achieve these goals, the law mandates that all local ISPs route traffic through special servers managed by the Roskomnadzor, the country’s telecoms regulator. These servers would act as kill-switches and disconnect Russia from external connections while re-routing internet traffic inside Russia’s own internet space, akin to a country-wide intranet — which the government is calling RuNet.

The Kremlin’s recent law didn’t come out of the blue. Russian officials have been working on establishing RuNet for more than half a decade. Past efforts included passing laws that force foreign companies to keep the data of Russian citizens on servers located in Russia.

However, internet infrastructure experts have called Russia’s “disconnect plan” both impractical and idealistic, pointing to the global DNS system as the plan’s Achilles’ heel (the sketch at the end of this item illustrates that dependency). Even US officials doubt that Russia would be able to pull it off. Speaking on stage at the RSA 2019 security conference in March, NSA director Gen. Paul Nakasone said he didn’t expect Russia to succeed in disconnecting from the global internet. The technicalities of disconnecting an entire country are simply too complex to manage without crippling Russia’s entire economy, plunging modern services like healthcare and banking back into a dark age.

IT'S A LAW ABOUT SURVEILLANCE, NOT SOVEREIGNTY

The reality is that experts in Russian politics, human rights, and internet privacy have come up with a much more accurate explanation of what’s really going on. Russia’s new law is just a ruse, a feint, a gimmick. The law’s true purpose is to create a legal basis to force ISPs to install deep-packet inspection equipment on their networks and force them to re-route all internet traffic through Roskomnadzor strategic chokepoints. These Roskomnadzor servers are where Russian authorities will be able to intercept and filter traffic at their discretion and with no judicial oversight, similar to China’s Great Firewall.

The law is believed to be an upgrade to Russia’s SORM (System for Operative Investigative Activities). But while SORM provides passive reconnaissance capabilities, allowing Russian law enforcement to retrieve traffic metadata from ISPs, the new “internet sovereignty” law provides a more hands-on approach, including active traffic-shaping capabilities.

Experts say the law was never about internet sovereignty, but about legalizing and disguising mass surveillance without triggering protests from Russia’s younger population, which has grown accustomed to the freedom the modern internet provides. Experts at Human Rights Watch have seen through the law’s true purpose ever since it was first proposed in the Russian Parliament. Earlier this year, they called the law “very broad, overly vague, and [vesting] in the government unlimited and opaque discretion to define threats.” This vagueness in the law’s text allows the government to use it whenever it wishes, for any circumstance.
Many have pointed out that Russia is doing nothing more than copying the Beijing regime, which also approved a similarly vague law in 2016, granting its government the ability to take any actions it sees fit within the country’s cyberspace. The two countries have formally cooperated, with China providing help to Russia in implementing a similar Great Firewall technology.

PLANNED DISCONNECT TEST

But while Russia’s new law entered into effect today, officials still have to carry out a ton of tests. Last week, the Russian government published a document detailing a scheduled test to take place this month. No exact date was provided. Sources at three Russian ISPs have told ZDNet this week that they haven’t been notified of any such tests; however, if the tests take place, they don’t expect the “disconnect” to last more than a few minutes.

Tens of thousands of people protested this new law across Russia earlier this year; however, the government hasn’t relented, choosing to arrest protesters and go forward with its plans.

Source: Russia's new 'disconnect from the internet' law is actually about surveillance (via ZDNet)
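The experts’ DNS objection is easy to demonstrate. Below is a minimal sketch (assuming the third-party dnspython package is installed; the queried domain is just an example) showing that resolving even a Russian domain normally begins at the global root servers, infrastructure that sits outside Russia’s control and that RuNet would have to replicate to survive a disconnect.

```python
# Minimal sketch, assuming dnspython: even a .ru domain is resolved by
# walking the global DNS hierarchy, starting at the root servers.
import dns.message
import dns.query

ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

query = dns.message.make_query("yandex.ru", "A")
response = dns.query.udp(query, ROOT_SERVER, timeout=5)

# The root server doesn't answer directly; it refers us onward to the .ru
# TLD name servers, the next rung of the global hierarchy.
for rrset in response.authority:
    print(rrset)
```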
  3. Allowing facial recognition technology to spread without understanding its impact could have serious consequences.

In the last few years facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful; thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it’s becoming a standard part of your journey through the airport.

And facial recognition is still getting smarter. Increasingly it’s not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won’t be too hard for a camera to not only recognise who you are, but also to make a pretty good guess at how you are feeling.

But one of the biggest potential applications of facial recognition on the near horizon is, of course, for law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries, police have been testing facial recognition in a number of situations, with varying results.

There’s a bigger issue here, as the UK’s Information Commissioner Elizabeth Denham notes: “How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?” She warns that when it comes to live facial recognition “never before have we seen technologies with the potential for such widespread invasiveness,” and has called for police, government and tech companies to work together to eliminate bias in the algorithms used, particularly that associated with ethnicity. She is not the only one to be raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so.

There is always a trade-off between privacy and security. Deciding where to draw the line between the two is key. But we also have to make the decision clearly and explicitly. At the moment there is a great risk that as the use of facial recognition technology by government and business spreads, the decision will be taken away from us.

In the UK we’ve already built up plenty of the infrastructure that you’d need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK; a number that is rising rapidly as we add our own smart doorbells or other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams (the sketch after this item shows how little code such an addition can take).

I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back. For sure, these technologies can have many benefits, from making it quicker to unlock your phone to recognising criminals in the street.
But allowing these technologies to become pervasive without rigorous debate about the need for them, their effectiveness and their broader impact on society is deeply unwise and could leave us facing much bigger problems ahead.

Source: We must stop smiling our way towards a surveillance state (via ZDNet)
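To illustrate how little effort the “very easy to add” claim above implies, here is a minimal sketch using OpenCV’s bundled Haar cascade face detector. The RTSP stream URL is a placeholder, and a real deployment would layer a recognition model (matching detected faces to identities) on top of this detection step.

```python
# A minimal sketch of bolting face detection onto an existing camera feed
# with OpenCV's bundled Haar cascade. The stream URL is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("rtsp://camera.example/stream")  # placeholder feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect every face in the frame; recognition would be a further
    # model applied to these cropped regions.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("camera feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```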
  4. Those who know about us have power over us. Obfuscation may be our best digital weapon.

Consider a day in the life of a fairly ordinary person in a large city in a stable, democratically governed country. She is not in prison or institutionalized, nor is she a dissident or an enemy of the state, yet she lives in a condition of permanent and total surveillance unprecedented in its precision and intimacy.

As soon as she leaves her apartment, she is on camera: while in the hallway and the elevator of her building, when using the ATM outside her bank, while passing shops and waiting at crosswalks, while in the subway station and on the train — and all that before lunch. A montage of nearly every move of her life in the city outside her apartment could be assembled, and each step accounted for. But that montage would hardly be necessary: Her mobile phone, in the course of its ordinary operation of seeking base stations and antennas to keep her connected as she walks, provides a constant log of her position and movements. Her apps are keeping tabs, too. Any time she spends in “dead zones” without phone reception can also be accounted for: Her subway pass logs her entry into the subway, and her radio-frequency identification badge produces a record of her entry into the building in which she works. (If she drives a car, her electronic toll-collection pass serves a similar purpose, as does automatic license-plate imaging.) If her apartment is part of a smart-grid program, spikes in her electricity usage can reveal exactly when she is up and around, turning on lights and ventilation fans and using the microwave oven and the coffee maker.

Surely some of the fault must lie with this individual for using services or engaging with institutions that offer unfavorable terms of service and are known to misbehave. Isn’t putting all the blame on government institutions and private services unfair, when they are trying to maintain security and capture some of the valuable data produced by their users? Can’t we users just opt out of systems with which we disagree?

Before we return to the question of opting out, consider how thoroughly the systems mentioned are embedded in our hypothetical ordinary person’s everyday life, far more invasively than mere logs of her daily comings and goings. Someone observing her could assemble in forensic detail her social and familial connections, her struggles and interests, and her beliefs and commitments. From Amazon purchases and Kindle highlights, from purchase records linked with her loyalty cards at the drugstore and the supermarket, from Gmail metadata and chat logs, from search history and checkout records from the public library, from Netflix-streamed movies, and from activity on Facebook and Twitter, dating sites, and other social networks, a very specific and personal narrative emerges.

If the apparatus of total surveillance that we have described here were deliberate, centralized, and explicit, a Big Brother machine toggling between cameras, it would demand revolt, and we could conceive of a life outside the totalitarian microscope. But if we are nearly as observed and documented as any person in history, our situation is a prison that, although it has no walls, bars, or wardens, is difficult to escape.

Which brings us back to the problem of “opting out.” For all the dramatic language about prisons and panopticons, the sorts of data collection we describe here are, in democratic countries, still theoretically voluntary.
But the costs of refusal are high and getting higher: A life lived in social isolation means living far from centers of business and commerce, without access to many forms of credit, insurance, or other significant financial instruments, not to mention the minor inconveniences and disadvantages — long waits at road toll cash lines, higher prices at grocery stores, inferior seating on airline flights. It isn’t possible for everyone to live on principle; as a practical matter, many of us must make compromises in asymmetrical relationships, without the control or consent for which we might wish. In those situations — everyday 21st-century life — there are still ways to carve out spaces of resistance, counterargument, and autonomy.

We are surrounded by examples of obfuscation that we do not yet think of under that name. Lawyers engage in overdisclosure, sending mountains of vaguely related client documents in hopes of burying a pertinent detail. Teenagers on social media — surveilled by their parents — will conceal a meaningful communication to a friend in a throwaway line or a song title surrounded by banal chatter. Literature and history provide many instances of “collective names,” where a population took a single identifier to make attributing any action or identity to a particular person impossible, from the fictional “I am Spartacus” to the real “Poor Conrad” and “Captain Swing” in prior centuries — and “Anonymous,” of course, in ours.

We can apply obfuscation in our own lives by using practices and technologies that make use of it, including:

  • The secure browser Tor, which (among other anti-surveillance technologies) muddles our Internet activity with that of other Tor users, concealing our trail in that of many others.
  • The browser plugins TrackMeNot and AdNauseam, which explore obfuscation techniques by issuing many fake search requests and loading and clicking every ad, respectively.
  • The browser extension Go Rando, which randomly chooses your emotional “reactions” on Facebook, interfering with their emotional profiling and analysis.
  • Playful experiments like Adam Harvey’s “HyperFace” project, finding patterns on textiles that fool facial recognition systems — not by hiding your face, but by creating the illusion of many faces.

If obfuscation has an emblematic animal, it is the family of orb-weaving spiders, Cyclosa mulmeinensis, which fill their webs with decoys of themselves. The decoys are far from perfect copies, but when a wasp strikes they work well enough to give the orb-weaver a second or two to scramble to safety.

At its most abstract, obfuscation is the production of noise modeled on an existing signal in order to make a collection of data more ambiguous, confusing, harder to exploit, more difficult to act on, and therefore less valuable. Obfuscation assumes that the signal can be spotted in some way and adds a plethora of related, similar, and pertinent signals — a crowd in which an individual can mix, mingle, and, if only for a short time, hide (a toy sketch of this idea in code follows at the end of this item). There is real utility in an obfuscation approach, whether that utility lies in bolstering an existing strong privacy system, in covering up some specific action, in making things marginally harder for an adversary, or even in the “mere gesture” of registering our discontent and refusal. After all, those who know about us have power over us.
They can deny us employment, deprive us of credit, restrict our movements, refuse us shelter, membership, or education, manipulate our thinking, suppress our autonomy, and limit our access to the good life.

There is no simple solution to the problem of privacy, because privacy itself is a solution to societal challenges that are in constant flux. Some are natural and beyond our control; others are technological and should be within our control but are shaped by a panoply of complex social and material forces with indeterminate effects. Privacy does not mean stopping the flow of data; it means channeling it wisely and justly to serve societal ends and values and the individuals who are its subjects, particularly the vulnerable and the disadvantaged. Innumerable customs, concepts, tools, laws, mechanisms, and protocols have evolved to achieve privacy, so conceived, and it is to that collection that we add obfuscation to sustain it — as an active conversation, a struggle, and a choice.

By Finn Brunton, assistant professor in the Department of Media, Culture, and Communication at New York University, and Helen Nissenbaum, professor of information science at Cornell Tech.

Source
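As a toy illustration of the definition above (noise modeled on an existing signal), here is a TrackMeNot-style sketch. It is not the extension’s actual code; the search endpoint and decoy terms are invented. One real query is buried in a stream of plausible decoys issued at human-like intervals, so an observer of the traffic cannot tell signal from noise.

```python
# Hypothetical TrackMeNot-style sketch: hide a real search query among
# decoys. The endpoint and decoy list are invented for illustration.
import random
import time
import urllib.parse
import urllib.request

DECOY_TERMS = ["weather radar", "banana bread recipe", "used bikes",
               "flight status", "movie times", "cheap hotels"]

def issue_query(term: str) -> None:
    url = "https://search.example/?q=" + urllib.parse.quote(term)
    try:
        urllib.request.urlopen(url, timeout=5)
    except OSError:
        pass  # decoys are best-effort; failures are irrelevant noise

def obfuscated_search(real_term: str, n_decoys: int = 4) -> None:
    queries = random.sample(DECOY_TERMS, k=n_decoys)
    # Bury the real query at a random position in the decoy stream.
    queries.insert(random.randrange(n_decoys + 1), real_term)
    for q in queries:
        issue_query(q)
        time.sleep(random.uniform(0.5, 3.0))  # human-ish pacing

obfuscated_search("flight to toronto")
```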
  5. Facial recognition and algorithm searches of social media are only part of it.

Image: An Israeli technician climbs up a pole to install a surveillance camera on a street in the east Jerusalem neighbourhood of Ras al-Amud.

An Israeli startup heavily backed by American companies, including Microsoft, produces facial recognition software used to conduct biometric surveillance on Palestinians, investigations by NBC and Haaretz revealed. In June, Microsoft — which has touted its framework for ethical use of facial recognition — joined a $78 million group investment in AnyVision, an international tech company based in Israel. One of AnyVision’s flagship products is Better Tomorrow, a program that allows the tracking of objects and people on live video feeds, even tracking between independent camera feeds.

AnyVision’s facial recognition software is at the heart of a military mass surveillance project in the West Bank, according to the NBC and Haaretz reporting. An Israeli Defense Forces statement in February acknowledged the addition of facial recognition verification technology to at least 27 checkpoints between Israel and the West Bank to “upgrade the crossings” and, in an effort to “deter terror attacks,” rapidly installed a network of over 1,700 cameras across the occupied territories. The combination of tools gives Israel the ability to watch Palestinians all over the West Bank.

This is not the first time Israel has engaged in mass surveillance. In the late 2000s, Israeli intelligence services were monitoring Israeli citizens, mostly Arabs, and Palestinians’ use of Facebook and other social media platforms by looking for specific keywords. China has likewise ramped up its surveillance of its Uighur minority population using artificial intelligence and facial recognition technology.

But Microsoft has positioned itself against this kind of use of facial recognition, even releasing guiding ethical principles for the line of work. Shankar Narayan, director of the Technology and Liberty Project at the American Civil Liberties Union, said in an interview with Forbes that he met with Microsoft last year and the company had shared his interest in slowing international access to facial recognition technology. Still, he said, he wasn’t surprised. “This particular investment is not a big surprise to me—there’s a demonstrable gap between action and rhetoric in the case of most big tech companies and Microsoft in particular,” Narayan told Forbes’s Thomas Brewster.

How Israel uses technology to conduct surveillance on Palestinians

A 2018 privacy law updated Israeli citizens’ protections enshrined in a constitutional right to privacy, required that databases collecting information be registered with the government, and required that information on Israeli citizens be gathered only with their consent. With exceptions for national security, this move brought Israeli privacy law to a higher bar than that of EU regulations. But Palestinians living in the West Bank don’t hold Israeli citizenship and, therefore, are not protected by Israeli privacy laws.

Israeli lawyer Jonathan Klinger attributed the surveillance to permissive legal gaps. “What you have to understand [is] that Israel has three separate legal systems” — one for Israelis inside Israel, one for Israeli settlers in the West Bank, and one for Palestinians living in the territories — “which cause a lot of the actual legal problems we face,” Klinger said.

Israel’s monitoring of Palestinians goes far beyond facial recognition.
The country also monitors Palestinians’ social media for cause to arrest them on charges of incitement or intent to carry out a terror attack. The artificial intelligence provided by AnyVision is virtually inescapable.

Yael Berda, a Hebrew University professor who served as a lawyer representing Palestinians denied entry permits in Israeli courts, writes in her book Living Emergency that in order to have a permit to cross into Israel proper, Palestinians must consent to the collection of their biometric data. “The intelligence services are seen as omnipotent,” said Berda in an interview with Vox. “It creates a powerful vortex of control.”

Searching social media, fielding tips, and considering demographics results in organized tracking of Palestinians for use by the Israeli civil administration, Berda said. Palestinians determined to be a threat or danger end up on a list that bars them from moving through checkpoints. She estimates that upward of 250,000 people are on this list, and that number is only growing. Berda’s experience with the way that Israel collects information on Palestinians makes her skeptical of the country refraining from using facial recognition.

“I don’t see a reason, legally, not to use [facial recognition] in checkpoints against terrorists, but I don’t like it,” Klinger said. “It’s a huge violation of [Palestinians’] privacy.”

Source
  6. Over 30 civil rights groups are calling on local, state, and federal officials to end police partnerships with Ring, Amazon’s home surveillance company.

Thirty-six civil rights organizations signed an open letter demanding that local, state, and federal officials end partnerships between Ring, Amazon’s home surveillance company, and over 405 law enforcement agencies around the country. The open letter, which was published yesterday by digital rights advocacy group Fight for the Future, also demands that municipalities pass surveillance oversight ordinances in order to “deter” police from partnering with companies like Ring in the future. The letter also calls on Congress to investigate Ring’s practices.

The open letter escalates the petition campaign that Fight for the Future launched in August, which called on civilians to petition their local officials to stop, halt, or ban any police partnerships with Ring. The signatories of the open letter include privacy advocacy groups like Media Justice, the Tor Project, and Media Mobilizing Project, and racial justice coalitions like The Black Alliance for Just Immigration, Mijente, and the American-Arab Anti-Discrimination Committee.

The open letter points out that the proliferation of privatized surveillance cameras like Ring, which make it easy for law enforcement to request footage without a warrant, disproportionately puts marginalized and over-policed communities of color at risk. “There’s been a lot of reporting and general discussion about these surveillance partnerships and the potential risks to privacy and civil liberties,” Evan Greer, deputy director of Fight for the Future, said in a phone call. “But this is the first time that a major coalition of significant organizations are explicitly calling on elected officials to do something about it.”

When Ring partners with police, the company provides police with a tool called the Law Enforcement Neighborhoods Portal. This tool is an interactive map that allows police to request footage directly from residents, streamlining the process of voluntary evidence sharing. The map previously showed the approximate location of Ring users in a heat map, but the heat map interface was removed in August, according to an email from Ring.

As reported by Motherboard, police then have to make an exchange. Some police departments promote Ring implicitly, by speaking about Ring only in company-approved statements and providing download links to Ring’s “neighborhood watch” app, Neighbors. Others must promote it explicitly, by signing agreements stipulating that police must “encourage adoption” of Ring cameras and Neighbors. The open letter points out that some cities subsidize discounts on Ring cameras. As reported by Motherboard, some cities have paid up to $100,000 of taxpayer money in order to fund these discount programs.

Greer said that the letter acts on an urgent need for local lawmakers to ask questions about Ring, and for federal lawmakers to formally investigate the company’s practices. “Our elected officials in Congress, at the very least, have to be asking what the implications for our country are if a company like Amazon is able to exponentially increase the number of cameras in our neighborhoods, and at the same time, enter into these cozy partnerships with law enforcement,” Greer said. “There’s no oversight for what [police] can do with [data] once they collect it.”

A Ring spokesperson said in an email statement that the company’s mission is to “help make neighborhoods safer.”
"We work towards this mission in a number of ways, including providing a free tool that can help build stronger relationships between communities and their local law enforcement agencies," Ring said. "We have taken care to design these features in a way that keeps users in control and protects their privacy." Source
  7. Tech investor John Borthwick doesn’t mince words when it comes to how he perceives smart speakers from the likes of Google and Amazon. The founder of venture capital firm Betaworks and former Time Warner and AOL executive believes the information-gathering performed by such devices is tantamount to surveillance.

Image: John Borthwick

“I would say that there's two or three layers sort of problematic layers with these new smart speakers, smart earphones that are in market now,” Borthwick told Yahoo Finance Editor-in-Chief Andy Serwer during an interview for his series “Influencers.” “And so the first is, from a consumer standpoint, user standpoint, is that these, these devices are being used for what's — it's hard to call it anything but surveillance.”

The way forward? Some form of regulation that gives users more control over their own data. “I personally believe that you, as a user and as somebody who likes technology, who wants to use technology, that you should have far more rights about your data usage than we have today,” Borthwick said.

Smart assistants face a reckoning

The venture capitalist’s comments follow a string of controversies surrounding smart assistants including Google’s Assistant, Amazon’s Alexa, and Apple’s (AAPL) Siri, in which each company admitted that human workers listen to users’ queries as a means of improving their digital assistants’ voice recognition capabilities. “They've gone to those devices and they've said, ‘Give us data when people passively act upon the device.’ So in other words, I walk over to that light switch,” Borthwick said. “I turn it off, turn it on, it's now giving data back to the smart speaker.”

It’s important to note that smart speakers from every major company are only activated when you use their appropriate wake word (a conceptual sketch of this gating follows at the end of this item). To activate your Echo speaker, for instance, you need to say “Alexa” followed by your command. The same thing goes for Google’s Assistant and Apple’s Siri.

The uproar surrounding smart speakers and their assistants began when Bloomberg reported in April that Amazon used a global team of employees and contractors to listen to users’ voice commands to Alexa to improve its speech recognition.

Image: Amazon Echo

That was followed by a similar report by Belgian-based VRT News about Google employees listening to users’ voice commands for Google Assistant. The Guardian then published a third piece about Apple employees listening to users’ Siri commands. Facebook was also pulled into the controversy when Bloomberg reported it had employees listen to users’ voice commands made through its Messenger app.

Google and Apple have since apologized, with Google halting the practice, and Apple announcing that it will automatically opt users out of voice sample collection. Users instead will have to opt in if they want to provide voice samples to improve Siri’s voice recognition. Amazon, for its part, has allowed users to opt out of having their voice samples shared with employees, while Facebook said it has paused the practice.

Image: Google Home Mini smart speaker

The use of voice samples has helped improve the voice recognition features of digital assistants, ensuring they are better able to understand what users say and the context in which they say it. At issue is whether users were aware that employees of these companies were listening. There’s also the matter of accidental activations, which can result in employees hearing conversations or snippets of conversations they might otherwise not have been meant to hear.
As for how such issues can be dealt with in the future, Borthwick points to some form of tech regulation. Though he doesn’t offer specifics, the VC says that users need to be able to understand how their data is being used, and be able to take control of it. “I think generally, it's about giving the users a lot more power over the decisions that are being made. I think that's one piece of it,” he said.

Source
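To make the wake-word mechanism described above concrete, here is a conceptual sketch of on-device gating. `detect_wake_word` and `send_to_cloud` are hypothetical stand-ins, not any vendor’s real API; the point is that audio stays in a short local buffer until a keyword detector fires, and a false positive from that detector is exactly the kind of “accidental activation” the article mentions.

```python
# Conceptual sketch of wake-word gating; detect_wake_word and
# send_to_cloud are hypothetical stand-ins, not a real vendor API.
from collections import deque

BUFFER_CHUNKS = 50  # roughly a few seconds of audio
ring_buffer: deque = deque(maxlen=BUFFER_CHUNKS)

def detect_wake_word(chunks) -> bool:
    # Placeholder for a small on-device keyword model. A false positive
    # here is an "accidental activation": speech gets uploaded anyway.
    return False

def send_to_cloud(audio: bytes) -> None:
    # Placeholder for the vendor's upload-and-transcribe call.
    pass

def on_audio_chunk(chunk: bytes) -> None:
    ring_buffer.append(chunk)                 # audio stays on the device...
    if detect_wake_word(ring_buffer):         # ...until the detector fires,
        send_to_cloud(b"".join(ring_buffer))  # then the buffer is uploaded
        ring_buffer.clear()
```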
  8. As millions of security cameras become equipped with “video analytics” and other AI-infused technologies that allow computers not only to record but to “understand” the objects they’re capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, “The Dawn of Robot Surveillance.” As these systems become more advanced, camera use is shifting from simply capturing and storing video “just in case” to actively evaluating video with real-time analytics and for surveillance. While cameras remain mostly under decentralized ownership and control, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact this could have. The report also listed specific features that could allow for intrusive surveillance, and recommendations to curtail potential abuse.

Source
  9. Spotify pursues emotional surveillance for global profit.

Music is emotional, and so our listening often signals something deeply personal and private. Today, this means music streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy. When a user chooses, for example, a “private listening” session on Spotify, the effect is to make them feel that it’s a one-way relation between person and machine. Of course, that personalization process is Spotify’s way of selling users on its product. But, as it turns out, in a move that should not surprise anyone at this point, Spotify has been selling access to that listening data to multinational corporations.

Where other platforms might need to invest more to piece together emotional user profiles, Spotify streamlines the process by providing boxes that users click on to indicate their moods: Happy Hits, Mood Booster, Rage Beats, Life Sucks. All of these are examples of what can now be found on Spotify’s Browse page under the “mood” category, which currently contains eighty-five playlists. If you need a lift in the morning, there’s Wake Up Happy, A Perfect Day, or Ready for the Day. If you’re feeling down, there’s Feeling Down, Sad Vibe, Down in the Dumps, Drifting Apart, Sad Beats, Sad Indie, and Devastating. If you’re grieving, there’s even Coping with Loss, with the tagline: “When someone you love becomes a memory, find solace in these songs.”

Over the years, streaming services have pushed a narrative about these mood playlists, suggesting, through aggressive marketing, that the rise of listening by way of moods and activities was a service to listeners and artists alike—a way to help users navigate infinite choice, to find their way through a vast library of forty million songs. It’s a powerful arm of the industry-crafted mythology of the so-called streaming revolution: platforms celebrating this grand recontextualization of music into mood playlists as an engine of discovery. Spotify is currently running a campaign centered on moods—the company’s Twitter tagline is currently “Music for every mood”—complete with its own influencer campaign.

But a more careful look into Spotify’s history shows that the decision to define audiences by their moods was part of a strategic push to grow Spotify’s advertising business in the years leading up to its IPO—and today, Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.

Streaming Intelligence Surveillance

In 2015, Spotify began selling advertisers on the idea of marketing to moods, moments, and activities instead of genres. This was one year after Spotify acquired the “music intelligence” firm Echo Nest. Together they began looking at the 1.5 billion user-generated playlists at Spotify’s disposal. Studying these playlists allowed Spotify to more deeply analyze the contexts in which listening was happening on its platform.
And so, right around the time that Spotify realized it had 400,000 user-generated barbecue playlists, Brian Benedik, then Spotify’s North American Vice President of Advertising and Partnerships, noted in an Ad Age interview that the company would focus on moods as a way to grow its advertising mechanism: “This is not something that’s just randomly thrown out there,” Benedik said. “It’s a strategic evolution of the Spotify ads business.” As of May 1, 2015, advertisers would be able to target ads to users of the free ad-supported service based on activities and moods: “Mood categories like happy, chill, and sad will let a brand like Coca-Cola play on its ‘Open Happiness’ campaign when people are listening to mood-boosting music,” the Ad Age article explained.

Four years later, Spotify is the world’s biggest streaming subscription service, with 207 million users in seventy-nine different countries. And as Spotify has grown, its advertising machine has exploded. Of those 207 million users, it claims 96 million are subscribers, meaning that 111 million users rely on the ad-supported version. Spotify’s top marketing execs have said that the company’s ambition is “absolutely” to be the third-largest player in the digital ad market behind Google and Facebook. In turn, since 2015, Spotify’s strategic use of mood and emotion-based targeting has only become more entrenched in its business model.

“At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.” Another brand-facing pitch proclaims: “The most exciting part? This new research is starting to reveal the streaming generation’s offline behaviors through their streaming habits.”

Today, Spotify Ad Studio, a self-service portal automating the ad-purchase process, promises access to “rich and textured datasets,” allowing brands to instantly target their ads by mood and activity categories like “Chill,” “Commute,” “Dinner,” “Focus/Study,” “Girls Night Out,” and more (a toy sketch of this kind of mood-to-segment targeting follows at the end of this item). And across the Spotify for Brands website are a number of “studies” and “insights reports” regarding research that Spotify has undertaken about streaming habits: “You are what you stream,” they reiterate over and over.

In a 2017 package titled Understanding People Through Music—Millennial Edition, Spotify (with help from “youth marketing and millennial research firm” Ypulse) set out to help advertisers better target millennial users by mood, emotion, and activity specifically. Spotify explains that “unlike generations past, millennials aren’t loyal to any specific music genre.” They conflate this with a greater reluctance toward labels and binaries, pointing out the rising number of individuals who identify as gender fluid and the growing demographic of millennials who do not have traditional jobs—and chalk these up to consumer preferences. “This throws a wrench in marketers’ neat audience segmentations,” Spotify commiserates.
For the study, they also gathered six hundred in-depth “day in a life” interviews recorded as “behavioral diaries.” All participants were surveilled by demographics, platform usage, playlist behavior, feature usage, and music tastes—and in the United States (where privacy is taken less seriously), Spotify and Ypulse were able to pair Spotify’s own streaming data with additional third-party information on “broader interests, lifestyle, and shopping behaviors.” The result is an interactive hub on the Spotify for Brands website detailing distinct “key audio streaming moments for marketers to tap into,” including Working, Chilling, Cooking, Chores, Gaming, Workout, Partying, and Driving. Spotify also dutifully outlines recommendations for how to use this information to sell shit, alongside success stories from Dunkin’ Donuts, Snickers, Gatorade, Wild Turkey, and BMW.

More startlingly, for each of these “moments” there is an animated trajectory of a typical “emotional journey” claiming to predict the various emotional states users will experience while listening to particular playlists. Listeners who are “working,” for instance, are likely to start out feeling pressured and stressed, before they become more energized and focused and end up feeling fine and accomplished at the end of the playlist queue. If they listen while doing chores, the study claims to know that they start out feeling stressed and lazy, then grow motivated and entertained, and end by feeling similarly good and accomplished.

In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.

A Leviathan of Ads

The potential of music to provide mood-related data useful to marketers has long been studied. In 1990, the Journal of Marketing published an article dubbed “Music, Mood and Marketing” that surveyed some of this history while bemoaning how “despite being a prominent promotional tool, music is not well understood or controlled by marketers.” The text outlines how “marketers are precariously dependent on musicians for their insight into the selection or composition of the ‘right’ music for particular situations.” This view of music as a burdensome means to a marketer’s end is absurd, but it’s also the logic that rules the current era of algorithmic music platforms. Unsurprisingly, this 1990 article aimed to overcome challenges for marketers by figuring out new ways to extract value from music that would be beyond the control of musicians themselves: studying the “behavioral effects” of music with a “special emphasis on music’s emotional expressionism and role as mood influencer” in order to create new forms of power and control.

Today, marketers want mood-related data more than ever, at least in part to fuel automated, personalized ad targeting. In 2016, the world’s largest holding company for advertising and PR agencies, WPP, announced that it had struck a multi-year partnership with Spotify, giving the conglomerate unprecedented access to Spotify’s mood data specifically.
The partnership with WPP, it turns out, was part of Spotify’s plan to ramp up its advertising business in advance of its IPO. WPP is the parent company to several of the world’s largest and oldest advertising, PR, and brand consulting firms, including Ogilvy, Grey Global Group, and at least eighteen others. Across their portfolio, WPP owns companies that work with numerous mega-corporations and household brands, helping shill everything from cars, Coca-Cola, and KFC to booze, banks, and Burger King. Over the decades, these companies have worked on campaigns spanning from Merrill Lynch and Lay’s potato chips to Colgate-Palmolive and Ford. Additionally, WPP properties also include tech-focused companies that claim proficiency in automation- and personalization-driven ad sales, all of which would now benefit from Spotify’s mood data.

The 2016 announcement of WPP and Spotify’s global partnership in “data, insights, creative, technology, innovation, programmatic solution, and new growth markets” speaks for itself: WPP now has unique listening preferences and behaviors of Spotify’s 100 million users in 60 countries. The multi-year deal provides differentiating value to WPP and its clients by harnessing insights from the connection between music and audiences’ moods and activities. Music attributes such as tempo and energy are proven to be highly relevant in predicting mood, which enables advertisers to understand their audiences in a new emotional dimension.

What’s more, WPP-owned advertising agencies could now access the “Wunderman Zipline™ Data Management Platform” to gain direct access to Spotify users’ “mood, listening and playlist behavior, activity and location.” They’d also potentially make use of “Spotify’s data on connected device usage,” while the WPP-owned company GroupM specifically would retain access to “an exclusive infusion of Spotify data” into its own platform made for corporate ad targeting. Per the announcement, WPP companies would also serve as launch partners for new types of advertising and new markets unveiled by Spotify, while procuring “visibility into Spotify’s product roadmap and access to beta testing.”

At the time the partnership was announced, Anas Ghazi, then Managing Director of Global Partnerships at WPP’s Data Alliance, noted that all WPP agencies would be able to “grab these insights. . . . If you think about how music shapes your activity and thoughts, this is a new, unique play for us to find audiences. Mood and moments are the next pieces of understanding audiences.” And Harvey Goldhersz, then CEO of GroupM Data & Analytics, salivated: “The insights we’ll develop from Spotify’s behavioral data will help our clients realize a material marketplace advantage, aiding delivery of ads that are appropriate to the consumer’s mood and the device used.”

Ongoing Synergies

While this deal was announced via the WPP Data Alliance, visiting that particular organization’s website now auto-directs back to the main WPP website, likely a result of corporate restructuring that WPP underwent over the past year. Currently, the only public-facing evidence of the relationship between WPP and Spotify is listed online under the WPP-owned data and insights company Kantar, which WPP describes as “the world’s leading marketing data, insight and consultancy company.” What might Kantar be doing with this user data?
The current splash video deck on its website is useful: it claims to be the first agency to use “facial recognition in advertising testing,” and it professes to be exploring new technologies “from biodata and biometrics and healthcare, to capturing human sentiment and even voting intentions by analyzing social media.” And, finally, it admits to “exploiting big data, artificial intelligence and analytics . . . to predict attitudes and behavior.” When we reached out to see if the relationship between Kantar and Spotify had changed since the initial 2016 announcement, Kantar sent The Baffler this statement: “The 2016 Spotify collaboration was the first chapter of many-a collaboration and has continued to evolve to meet the dynamic needs of our clients and marketplace. Spotify continues to be a valued partner of larger enterprise and Kantar with on-going synergies.”

One year after the announcement of the partnership, in 2017, Spotify further confirmed its desire to establish direct relationships with the world’s biggest advertising agencies when it hired two executives from WPP. One was Angela Solk, now Global Head of Agency Partnerships, whose job at Spotify includes teaching WPP and other ad agencies how to best make use of Spotify’s first-party data. (In Solk’s first year at Spotify, she helped create the Smirnoff Equalizer; in a 2018 interview, she reflected on the “beauty” of that branded content campaign and Spotify’s ability to extract listener insight and make it “part of Smirnoff’s DNA.”) Spotify also hired Craig Weingarten as its Head of Industry, Automotive, who now leads Spotify’s Detroit-based auto ad sales team.

According to its own media narrative, Spotify offers data access to brands that competitor platforms do not, and it has gained a reputation for its eagerness to share its first-party data. At advertising conferences and in the ad press, Spotify’s top ad exec Marco Bertozzi has emphasized how Spotify hopes to widely share first-party data, going so far as to confess, “When other walled gardens say no to data questions . . . we say yes.” (Bertozzi was also the mind behind an internal Spotify campaign adorably branded “#LoveAds” to combat growing societal disgust with digital advertising. #LoveAds started as a mantra of the advertising team, but as Bertozzi proudly explained in late 2018, “#LoveAds became a movement within the company.”)

Spotify has spent tremendous energy on its ad team’s proficiency with cross-device advertising options (likely due to the imminent ubiquity of Spotify in the car and the so-called “smart home”), as well as “programmatic advertising,” otherwise understood as targeted advertising sold through an automated process, often in milliseconds—Spotify seeks to be the most dominant seller of such advertising in the audio space. And there’s also the self-serve platform, direct inventory sales, Spotify’s private marketplace (an invite-only inventory for select advertisers), programmatic guaranteed deals (a guaranteed volume of impressions at a fixed price)—the jargon ad-speak lists could go on and on. Trying to keep tabs on Spotify’s advertising products and partnerships is dizzying. But what is clear is that the hype surrounding these partnerships has often focused on the “moods and moments”-related data Spotify offers brands—not to mention the company’s penchant for allowing brands to combine their own data with Spotify’s.
In 2017, Spotify’s Brian Benedik told The Drum that Spotify’s access to listening habits and first-party data is “one of the reasons that some of these big multi-national brands like the Samsungs and the Heinekens and the Microsofts and Procter and Gambles of the world are working with us a lot closer than they ever have . . . they don’t see that or get that from any other platform out there.” And it appears that things will only get darker. Julie Clark, Spotify’s Global Head of Automation Sales, said earlier this year in an interview that its targeting capabilities are growing: “There’s deeper first party-data that’s going to become available as well.” Mood-Boosterism Recently, I tried out a mood-related experiment on Spotify. I created a new account and only listened to the “Coping with Loss” playlist on loop for a few days. I paid particular attention to the advertisements that I was served by Spotify. And while I do not know for sure whether listening to the “Coping with Loss” playlist caused me to receive an unusually nostalgic Home Depot ad about how your carpets contain memories, or an ad for a particularly angsty new album called Amidst the Chaos, the extent to which Spotify is matching moods and emotions with advertisements certainly makes it seem possible. What was clearer: during my time spent listening exclusively to songs about grieving, Spotify was quick to recommend that I brighten my mood. Under the heading “More like Coping With Loss . . .” it recommended playlists themed for Father’s Day and Mother’s Day, and playlists called “Warm Fuzzy Feelings,” “Soundtrack Love Songs,” “90s Love Songs,” “Love Ballads,” and “Acoustic Hits.” Spotify evidently did not want me to sit with my sorrow; it wanted my mood to improve. It wanted me to be happy. This is because Spotify specifically wants to be seen as a mood-boosting platform. In Spotify for Brands blog posts, the company routinely emphasizes how its own platform distinguishes itself from other streams of digital content, particularly because it gives marketers a chance to reach users through a medium that is widely seen as a “positive enhancer”: a medium they turn to for “music to help them get through the less desirable moments in their day, improve the more positive ones and even discover new things about their personality,” says Spotify. “We’re quite unique in that we have people’s ears . . . combine that with the psycho-graphic data that we have and that becomes very powerful for brands,” said Jana Jakovljevic in 2015, then Head of Programmatic Solutions; she is now employed by AI ad-tech company Cognitiv, which claims to be “the first neural network technology that unearths patterns of consumer behavior” using “deep learning” to predict and target consumers. The fact that experience at Spotify could prepare someone for such a career shift is worth some reflection. But more interestingly, Jakovljevic added that Spotify was using this data in many ways, including to determine exactly what type of music to recommend, which is important to remember: the data that is used to sell advertisers on the platform is also the data driving recommendations. The platform can recommend music in ways that appease advertisers while promising them that mood-boosting ad space. 
What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use its “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform.

In appealing to advertisers, Spotify also celebrates its position as a background experience, and in particular how that position benefits advertisers and brands. Jorge Espinel, who was Head of Global Business Development at Spotify for five years, once said in an interview: “We love to be a background experience. You’re competing for consumer attention. Everyone is fighting for the foreground. We have the ability to fight for the background. And really no one is there. You’re doing your email, you’re doing your social network, etcetera.” In other words, it is in advertisers’ best interests that Spotify stays a background experience.

When a platform like Spotify sells advertisers on its mood-boosting, background experience, and then bakes these aims into what it recommends to listeners, a twisted form of behavior manipulation is at play. It’s connected to what Shoshana Zuboff, in The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, calls the “behavioral futures market”—where “many companies are eager to lay their bets on our future behavior.” Indeed, Spotify seeks not just to monitor and mine our mood, but also to manipulate future behavior. “What we’d ultimately like to do is be able to predict people’s behavior through music,” Les Hollander, the Global Head of Audio and Podcast Monetization, said in 2017. “We know that if you’re listening to your chill playlist in the morning, you may be doing yoga, you may be meditating . . . so we’d serve a contextually relevant ad with information and tonality and pace to that particular moment.” Very Zen!

Spotify’s treatment of its mood and emotion data as a form of currency in the greater data marketplace should be considered more generally in the context of the tech industry’s rush to quantify our emotions. There is a burgeoning industry surrounding technology that purports to mine our emotional states in order to feed AI projects; take, for example, car companies that claim they can use facial recognition to read your mood and keep you safer on the road. Or Facebook’s patents on facial recognition software. Or unnerving companies like Affectiva, which claims to be developing an industry around “emotion AI” and “affective computing” processes that measure human emotions. It remains to be seen how Spotify could leverage such tech to maintain its reputation as a mood-boosting platform. And yet we should admit that it’s good for business for Spotify to manipulate people’s emotions on the platform toward feelings of chillness, contentment, and happiness.

This has immense consequences for music, of course, but what does it mean for news and politics and culture at large, as the platform is set to play a bigger role in mediating all of the above, especially as its podcasting efforts grow? On the Spotify for Brands blog, the streaming giant explains that its research shows millennials are weary of most social media and news platforms, feeling that these mediums affect them negatively. Spotify is a solution for brands, it explains, because it is a platform where people go to feel good. Of course, in this telling of things, Spotify conveniently ignores why those other forms of media feel so bad.
It’s because they are platforms that prioritize their own product and profit above all else. It’s because they are platforms governed by nothing more than surveillance technology and the mechanisms of advertising. Source
10. DUBLIN (Reuters) - The European Court of Justice (ECJ) will hear a landmark privacy case regarding the transfer of EU citizens’ data to the United States in July, after Facebook’s bid to stop its referral was blocked by Ireland’s Supreme Court on Friday.

The case, which was initially brought against Facebook by Austrian privacy activist Max Schrems, is the latest to question whether methods used by technology firms to transfer data outside the 28-nation European Union give EU consumers sufficient protection from U.S. surveillance. A ruling by Europe’s top court against the current legal arrangements would have major implications for thousands of companies, which make millions of such transfers every day, including human resources databases, credit card transactions and storage of internet browsing histories.

The Irish High Court, which heard Schrems’ case against Facebook last year, said there were well-founded concerns about an absence of an effective remedy in U.S. law compatible with EU legal requirements, which prohibit personal data being transferred to a country with inadequate privacy protections. The High Court ordered the case be referred to the ECJ to assess whether the methods used for data transfers - including standard contractual clauses and the so-called Privacy Shield agreement - were legal.

Facebook took the case to the Supreme Court when the High Court refused its request to appeal the referral, but in a unanimous decision on Friday, the Supreme Court said it would not overturn any aspect of the ruling. The High Court’s original five-page referral asks the ECJ if the Privacy Shield - under which companies certify they comply with EU privacy law when transferring data to the United States - does in fact mean that the United States “ensures an adequate level of protection”.

Facebook came under scrutiny last year after it emerged the personal information of up to 87 million users, mostly in the United States, may have been improperly shared with political consultancy Cambridge Analytica. More generally, data privacy has been a growing public concern since revelations in 2013 by former U.S. intelligence contractor Edward Snowden of mass U.S. surveillance caused political outrage in Europe.

The Privacy Shield was hammered out between the EU and the United States after the ECJ struck down its predecessor, Safe Harbour, on the grounds that it did not afford Europeans’ data enough protection from U.S. surveillance. That case was also brought by Schrems via the Irish courts. “Facebook likely again invested millions to stop this case from progressing. It is good to see that the Supreme Court has not followed,” Schrems said in a statement. Source
11. T-Mobile, Sprint, and AT&T are selling access to their customers’ location data, and that data is ending up in the hands of bounty hunters and others not authorized to possess it, letting them track most phones in the country.

Nervously, I gave a bounty hunter a phone number. He had offered to geolocate a phone for me, using a shady, overlooked service intended not for the cops, but for private individuals and businesses. Armed with just the number and a few hundred dollars, he said he could find the current location of most phones in the United States.

The bounty hunter sent the number to his own contact, who would track the phone. The contact responded with a screenshot of Google Maps, containing a blue circle indicating the phone’s current location, accurate to within a few hundred metres. Queens, New York. More specifically, the screenshot showed a location in a particular neighborhood—just a couple of blocks from where the target was. The hunter had found the phone (the target gave their consent to Motherboard to be tracked via their T-Mobile phone).

The bounty hunter did this all without deploying a hacking tool or having any previous knowledge of the phone’s whereabouts. Instead, the tracking tool relies on real-time location data sold to bounty hunters that ultimately originated from the telcos themselves, including T-Mobile, AT&T, and Sprint, a Motherboard investigation has found. These surveillance capabilities are sometimes sold through word-of-mouth networks.

Whereas it’s common knowledge that law enforcement agencies can track phones with a warrant served to service providers, with IMSI catchers, or, until recently, via other companies that sell location data, such as one called Securus, at least one company, called Microbilt, is selling phone geolocation services with little oversight to a variety of private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters, according to sources familiar with the company’s products and company documents obtained by Motherboard. Compounding that already highly questionable business practice, this spying capability is also being resold to others on the black market who are not licensed by the company to use it, including me, seemingly without Microbilt’s knowledge.

Motherboard’s investigation shows just how exposed mobile networks and the data they generate are, leaving them open to surveillance by ordinary citizens, stalkers, and criminals, and comes as media and policy makers are paying more attention than ever to how location and other sensitive data is collected and sold. The investigation also shows that a wide variety of companies can access cell phone location data, and that the information trickles down from cell phone providers to a wide array of smaller players, who don’t necessarily have the correct safeguards in place to protect that data. “People are reselling to the wrong people,” the bail industry source who flagged the company to Motherboard said. Motherboard granted the source and others in this story anonymity to talk more candidly about a controversial surveillance capability.

Your mobile phone is constantly communicating with nearby cell phone towers, so your telecom provider knows where to route calls and texts. From this, telecom companies also work out the phone’s approximate location based on its proximity to those towers.
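That coarse, tower-based positioning is easy to approximate in a few lines. The sketch below is a toy illustration of the general idea only — a weighted centroid over nearby towers, weighting closer towers (stronger signal) more heavily — and is not any carrier’s actual algorithm; the coordinates and distances are invented.

# Toy sketch of tower-proximity positioning: estimate a handset's location
# as a weighted centroid of nearby towers, weighting each tower by the
# inverse of its estimated distance (closer tower = stronger signal).
# Tower coordinates and distances below are invented for illustration.

def weighted_centroid(towers):
    """towers: list of (lat, lon, estimated_distance_km) tuples."""
    lat_acc = lon_acc = total_weight = 0.0
    for lat, lon, dist_km in towers:
        w = 1.0 / max(dist_km, 0.05)   # clamp to avoid division by zero
        lat_acc += w * lat
        lon_acc += w * lon
        total_weight += w
    return lat_acc / total_weight, lon_acc / total_weight

towers = [
    (40.742, -73.870, 0.4),   # hypothetical towers around Queens, NY
    (40.736, -73.862, 0.9),
    (40.748, -73.858, 1.2),
]
print(weighted_centroid(towers))   # a fix good to a few hundred metres at best

The imprecision is the point: even a crude estimate like this is enough to put a blue circle on a map a couple of blocks from a target, which is exactly what the bounty hunter delivered.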
Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low-level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at the click of a mouse.

There’s a complex supply chain that shares some of American cell phone users’ most sensitive data, with the telcos potentially unaware of how the data is being used by the eventual end user, or even whose hands it lands in. Financial companies use phone location data to detect fraud; roadside assistance firms use it to locate stuck customers. But AT&T, for example, told Motherboard the use of its customers’ data by bounty hunters goes explicitly against the company’s policies, raising questions about how AT&T allowed the sale for this purpose in the first place. “The allegation here would violate our contract and Privacy Policy,” an AT&T spokesperson told Motherboard in an email.

In the case of the phone we tracked, six different entities had potential access to the phone’s data. T-Mobile shares location data with an aggregator called Zumigo, which shares information with Microbilt. Microbilt shared that data with a customer using its mobile phone tracking product. The bounty hunter then shared this information with a bail industry source, who shared it with Motherboard.

The CTIA, a telecom industry trade group of which AT&T, Sprint, and T-Mobile are members, has official guidelines for the use of so-called “location-based services” that “rely on two fundamental principles: user notice and consent,” the group wrote in those guidelines. Telecom companies and data aggregators that Motherboard spoke to said that they require their clients to get consent from the people they want to track, but it’s clear that this is not always happening.

A second source who has tracked the geolocation industry told Motherboard, while talking about the industry generally, “If there is money to be made they will keep selling the data.” “Those third-level companies sell their services. That is where you see the issues with going to shady folks [and] for shady reasons,” the source added. Frederike Kaltheuner, data exploitation programme lead at campaign group Privacy International, told Motherboard in a phone call that “it’s part of a bigger problem; the US has a completely unregulated data ecosystem.”

Microbilt buys access to location data from the aggregator Zumigo and then sells it to a dizzying number of sectors, including landlords looking to scope out potential renters, motor vehicle salesmen, and others conducting credit checks. Armed with just a phone number, Microbilt’s “Mobile Device Verify” product can return a target’s full name and address, geolocate a phone in an individual instance, or operate as a continuous tracking service. “You can set up monitoring with control over the weeks, days and even hours that location on a device is checked as well as the start and end dates of monitoring,” a company brochure Motherboard found online reads.
Posing as a potential customer, Motherboard explicitly asked a Microbilt customer support staffer whether the company offered phone geolocation for bail bondsmen. Shortly after, another staffer emailed with a price list—locating a phone can cost as little as $4.95 per lookup when searching for a low number of devices, and that price gets even cheaper as the customer buys the capability to track more phones. Getting real-time updates on a phone’s location can cost around $12.95. “Dirt cheap when you think about the data you can get,” the source familiar with the industry added.

It’s bad enough that access to highly sensitive phone geolocation data is already being sold to a wide range of industries and businesses. But there is also an underground market that Motherboard used to geolocate a phone—one where Microbilt customers resell their access at a profit, and with minimal oversight. “Blade Runner, the iconic sci-fi movie, is set in 2019. And here we are: there's an unregulated black market where bounty-hunters can buy information about where we are, in real time, over time, and come after us. You don't need to be a replicant to be scared of the consequences,” Thomas Rid, professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat.

The bail industry source said his middleman used Microbilt to find the phone. This middleman charged $300, a sizeable markup on the usual Microbilt price. The Google Maps screenshot provided to Motherboard of the target phone’s location also included its approximate longitude and latitude coordinates, and a range of how accurate the phone geolocation is: 0.3 miles, or just under 500 metres. That may not be enough to geolocate someone to a specific building in a populated area, but it can certainly pinpoint a particular borough, city, or neighborhood.

In other contexts, phone geolocation is typically done with the consent of the target, perhaps by sending a text message the user has to deliberately reply to, signalling that they accept their location being tracked. This may be how it is done in the earlier roadside assistance example, or when a company monitors its fleet of trucks. But when Motherboard tested the geolocation service, the target phone received no warning it was being tracked.

The bail source who originally alerted Motherboard to Microbilt said that bounty hunters have used phone geolocation services for non-work purposes, such as tracking their girlfriends. Motherboard was unable to identify a specific instance of this happening, but domestic stalkers have repeatedly used technology, such as mobile phone malware, to track spouses.

As Motherboard was reporting this story, Microbilt removed documents related to its mobile phone location product from its website. https://www.documentcloud.org/documents/5676919-Microbilt-Mobile-Device-Verify-2018.html

A Microbilt spokesperson told Motherboard in a statement that the company requires that anyone using its mobile device verification services for fraud prevention must first obtain the consent of the consumer. Microbilt also confirmed it found an instance of abuse on its platform—our phone ping. “The request came through a licensed state agency that writes in approximately $100 million in bonds per year and passed all up front credentialing under the pretense that location was being verified to mitigate financial exposure related to a bond loan being considered for the submitted consumer,” Microbilt said in an emailed statement.
In this case, “licensed state agency” is referring to a private bail bond company, Motherboard confirmed. “As a result, MicroBilt was unaware that its terms of use were being violated by the rogue individual that submitted the request under false pretenses, does not approve of such use cases, and has a clear policy that such violations will result in loss of access to all MicroBilt services and termination of the requesting party’s end-user agreement,” Microbilt added. “Upon investigating the alleged abuse and learning of the violation of our contract, we terminated the customer’s access to our products and they will not be eligible for reinstatement based on this violation.”

Zumigo confirmed it was the company that provided the phone location to Microbilt and defended its practices. In a statement, Zumigo did not seem to take issue with the practice of providing data that ultimately ended up with licensed bounty hunters, but wrote, “illegal access to data is an unfortunate occurrence across virtually every industry that deals in consumer or employee data, and it is impossible to detect a fraudster, or rogue customer, who requests location data of his or her own mobile devices when the required consent is provided. However, Zumigo takes steps to protect privacy by providing a measure of distance (approx. 0.5-1.0 mile) from an actual address.” Zumigo told Motherboard it has cut Microbilt’s data access.

In Motherboard’s case, the successfully geolocated phone was on T-Mobile. “We take the privacy and security of our customers’ information very seriously and will not tolerate any misuse of our customers’ data,” a T-Mobile spokesperson told Motherboard in an emailed statement. “While T-Mobile does not have a direct relationship with Microbilt, our vendor Zumigo was working with them and has confirmed with us that they have already shut down all transmission of T-Mobile data. T-Mobile has also blocked access to device location data for any request submitted by Zumigo on behalf of Microbilt as an additional precaution.”

Microbilt’s product documentation suggests the phone location service works on all mobile networks; however, the middleman was unable or unwilling to conduct a search for a Verizon device. Verizon did not respond to a request for comment. AT&T told Motherboard it has cut access to Microbilt as the company investigates. “We only permit the sharing of location when a customer gives permission for cases like fraud prevention or emergency roadside assistance, or when required by law,” the AT&T spokesperson said. Sprint told Motherboard in a statement that “protecting our customers’ privacy and security is a top priority, and we are transparent about that in our Privacy Policy [...] Sprint does not have a direct relationship with MicroBilt. If we determine that any of our customers do and have violated the terms of our contract, we will take appropriate action based on those findings.” Sprint would not clarify the contours of its relationship with Microbilt.

These statements sound very familiar. When The New York Times and Senator Ron Wyden last year published details of Securus, the firm that was offering geolocation to low-level law enforcement without a warrant, the telcos said they were taking extra measures to make sure their customers’ data would not be abused again. Verizon announced it was going to limit data access to companies not using it for legitimate purposes. T-Mobile, Sprint, and AT&T followed suit shortly after with similar promises.
After Wyden’s pressure, T-Mobile’s CEO John Legere tweeted in June last year “I’ve personally evaluated this issue & have pledged that @tmobile will not sell customer location data to shady middlemen.” Months after the telcos said they were going to combat this problem, in the face of an arguably even worse case of abuse and data trading, they are saying much the same thing. Last year, Motherboard reported on a company that previously offered phone geolocation to bounty hunters; here Microbilt is operating even after a wave of outrage from policy makers. In its statement to Motherboard on Monday, T-Mobile said it has nearly finished the process of terminating its agreements with location aggregators.

“It would be bad if this was the first time we learned about it. It’s not. Every major wireless carrier pledged to end this kind of data sharing after I exposed this practice last year. Now it appears these promises were little more than worthless spam in their customers’ inboxes,” Wyden told Motherboard in a statement. Wyden is proposing legislation to safeguard personal data. Due to the ongoing government shutdown, the Federal Communications Commission (FCC) was unable to provide a statement.

“Wireless carriers’ continued sale of location data is a nightmare for national security and the personal safety of anyone with a phone,” Wyden added. “When stalkers, spies, and predators know when a woman is alone, or when a home is empty, or where a White House official stops after work, the possibilities for abuse are endless.” Source
12. from the 'intel-techniques,'-indeed dept

A little opsec goes a long way. The Massachusetts State Police -- one of the most secretive law enforcement agencies in the nation -- gave readers of its Twitter feed a free look at the First Amendment-protected activities it keeps tabs on… by uploading a screenshot showing its browser bookmarks. Alex Press of Jacobin Magazine was one of the Twitter users to catch the inadvertent exposure of MSP operations. If you can't read/see the tweet, it says:

the MA staties just unintentionally tweeted a photo that shows their bookmarks include a whole number of Boston’s left-wing orgs

The tweet was quickly scrubbed by the MSP, but not before other Twitter users had grabbed screenshots. Some of the activist groups bookmarked by the state police include Mass. Action Against Police Brutality, the Coalition to Organize and Mobilize Boston Against Trump, and Resistance Calendar.

The MSP did not deny keeping (browser) tabs on protest organizations. Instead, it attempted to portray this screen of left-leaning bookmarks as some sort of non-partisan, non-cop-centric attempt to keep the community safe by being forewarned and forearmed. Ok. But mainly these groups? The ones against police brutality and the back-the-blue President? Seems a little one-sided for an "of any type and by any group" declaration. The statement continues in the same defensive vein for a few more sentences, basically reiterating the false conceit that cops don't take sides when it comes to activist groups and that the good people of Massachusetts are lucky to have such proactive public servants at their disposal. Whatever. If it wasn't a big deal, the MSP wouldn't have vanished the original tweet into the internet ether.

The screenshot came from a "fusion center" -- one of those DHS partnerships that results in far more rights violations and garbage "see something, say something" reports than "actionable intelligence". Fusion centers are supposed to be focused on terrorism, not on people who don't like police brutality or the current Commander in Chief. What this looks like is probably what it is: police keeping tabs on people they don't like or people who don't like them. That's not really what policing is about and it sure as hell doesn't keep the community any safer. Source
13. By Bruce Schneier

The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" in which they claim their needs for surveillance outweigh everyone's needs for security and privacy.

...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution. Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority. The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. But there's a greater principle at stake: we need to decide, as nations and as a society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are too critical to our society and -- now that they can affect the world in a direct physical manner -- affect our lives and property as well. This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world. This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals.
Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses. Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions. Cory Doctorow has a good reaction. Source
14. In the decade after the 9/11 attacks, the New York City Police Department moved to put millions of New Yorkers under constant watch. Warning of terrorism threats, the department created a plan to carpet Manhattan’s downtown streets with thousands of cameras and had, by 2008, centralized its video surveillance operations to a single command center. Two years later, the NYPD announced that the command center, known as the Lower Manhattan Security Coordination Center, had integrated cutting-edge video analytics software into select cameras across the city.

The video analytics software captured stills of individuals caught on closed-circuit TV footage and automatically labeled the images with physical tags, such as clothing color, allowing police to quickly search through hours of video for images of individuals matching a description of interest. At the time, the software was also starting to generate alerts for unattended packages, cars speeding up a street in the wrong direction, or people entering restricted areas.

Over the years, the NYPD has shared only occasional, small updates on the program’s progress. In a 2011 interview with Scientific American, for example, Inspector Salvatore DiPace, then commanding officer of the Lower Manhattan Security Initiative, said the police department was testing whether the software could box out images of people’s faces as they passed by subway cameras and subsequently cull through the images for various unspecified “facial features.”

While facial recognition technology, which measures individual faces at over 16,000 points for fine-grained comparisons with other facial images, has attracted significant legal scrutiny and media attention, this object identification software has largely evaded attention. How exactly this technology came to be developed and which particular features the software was built to catalog have never been revealed publicly by the NYPD.

Now, thanks to confidential corporate documents and interviews with many of the technologists involved in developing the software, The Intercept and the Investigative Fund have learned that IBM began developing this object identification technology using secret access to NYPD camera footage. With access to images of thousands of unknowing New Yorkers offered up by NYPD officials, as early as 2012, IBM was creating new search features that allow other police departments to search camera footage for images of people by hair color, facial hair, and skin tone.

IBM declined to comment on its use of NYPD footage to develop the software. However, in an email response to questions, the NYPD did tell The Intercept that “Video, from time to time, was provided to IBM to ensure that the product they were developing would work in the crowded urban NYC environment and help us protect the City. There is nothing in the NYPD’s agreement with IBM that prohibits sharing data with IBM for system development purposes. Further, all vendors who enter into contractual agreements with the NYPD have the absolute requirement to keep all data furnished by the NYPD confidential during the term of the agreement, after the completion of the agreement, and in the event that the agreement is terminated.”

In an email to The Intercept, the NYPD confirmed that select counterterrorism officials had access to a pre-released version of IBM’s program, which included skin tone search capabilities, as early as the summer of 2012.
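Mechanically, the tag-and-search capability described above amounts to filtering a log of per-frame attribute detections. The sketch below is purely illustrative: the field names and tag labels are invented and do not reflect IBM’s actual product schema.

from dataclasses import dataclass

# Illustrative only: how attribute tags extracted from video stills might
# be indexed and searched. Field names and tag labels are invented and do
# not reflect IBM's actual schema.

@dataclass
class Detection:
    camera_id: str
    timestamp: str   # ISO 8601
    tags: dict       # e.g. {"torso_color": "red", "bag": True}

def search(detections, **wanted):
    """Return detections whose tags match every requested attribute."""
    return [d for d in detections
            if all(d.tags.get(k) == v for k, v in wanted.items())]

log = [
    Detection("cam-031", "2012-08-14T09:12:03", {"torso_color": "red", "bag": True}),
    Detection("cam-017", "2012-08-14T09:13:41", {"torso_color": "blue", "bag": False}),
]
print(search(log, torso_color="red"))   # the "people in red shirts" query

Framed this way, the civil liberties question is largely about which tags the vendor chooses to extract: a filter on a clothing-color tag is mechanically identical to one on a skin tone tag.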
NYPD spokesperson Peter Donald said the search characteristics were only used for evaluation purposes and that officers were instructed not to include the skin tone search feature in their assessment. The department eventually decided not to integrate the analytics program into its larger surveillance architecture, and phased out the IBM program in 2016. After testing out these bodily search features with the NYPD, IBM released some of these capabilities in a 2013 product release. Later versions of IBM’s software retained and expanded these bodily search capabilities. (IBM did not respond to a question about the current availability of its video analytics programs.)

Asked about the secrecy of this collaboration, the NYPD said that “various elected leaders and stakeholders” were briefed on the department’s efforts “to keep this city safe,” adding that sharing camera access with IBM was necessary for the system to work. IBM did not respond to a question about why the company didn’t make this collaboration public. Donald said IBM gave the department licenses to apply the system to 512 cameras, but said the analytics were tested on “fewer than fifty.” He added that IBM personnel had access to certain cameras for the sole purpose of configuring NYPD’s system, and that the department put safeguards in place to protect the data, including “non-disclosure agreements for each individual accessing the system; non-disclosure agreements for the companies the vendors worked for; and background checks.”

Civil liberties advocates contend that New Yorkers should have been made aware of the potential use of their physical data for a private company’s development of surveillance technology. The revelations come as a city council bill that would require NYPD transparency about surveillance acquisitions continues to languish, due, in part, to outspoken opposition from New York City Mayor Bill de Blasio and the NYPD.

Skin Tone Search Technology, Refined on New Yorkers

IBM’s initial breakthroughs in object recognition technology were envisioned for applications like self-driving cars or image recognition on the internet, said Rick Kjeldsen, a former IBM researcher. But after 9/11, Kjeldsen and several of his colleagues realized their program was well suited for counterterror surveillance. “After 9/11, the funding sources and the customer interest really got driven toward security,” said Kjeldsen, who said he worked on the NYPD program from roughly 2009 through 2013. “Even though that hadn’t been our focus up to that point, that’s where demand was.”

IBM’s first major urban video surveillance project was with the Chicago Police Department and began around 2005, according to Kjeldsen. The department let IBM experiment with the technology in downtown Chicago until 2013, but the collaboration wasn’t seen as a real business partnership. “Chicago was always known as, it’s not a real — these guys aren’t a real customer. This is kind of a development, a collaboration with Chicago,” Kjeldsen said. “Whereas New York, these guys were a customer. And they had expectations accordingly.”

The NYPD acquired IBM’s video analytics software as one part of the Domain Awareness System, a shared project of the police department and Microsoft that centralized a vast web of surveillance sensors in lower and midtown Manhattan — including cameras, license plate readers, and radiation detectors — into a unified dashboard.
IBM entered the picture as a subcontractor to Microsoft subsidiary Vexcel in 2007, as part of a project worth $60.7 million over six years, according to the internal IBM documents. In New York, the terrorist threat “was an easy selling point,” recalled Jonathan Connell, an IBM researcher who worked on the initial NYPD video analytics installation. “You say, ‘Look what the terrorists did before, they could come back, so you give us some money and we’ll put a camera there.’”

A former NYPD technologist who helped design the Lower Manhattan Security Initiative, and who asked to speak on background, citing fears of professional reprisal, confirmed IBM’s role as a “strategic vendor.” “In our review of video analytics vendors at that time, they were well ahead of everyone else in my personal estimation,” the technologist said.

According to internal IBM planning documents, the NYPD began integrating IBM’s surveillance product in March 2010 for the Lower Manhattan Security Coordination Center, a counterterrorism command center launched by Police Commissioner Ray Kelly in 2008. In a “60 Minutes” tour of the command center in 2011, Jessica Tisch, then the NYPD’s director of policy and planning for counterterrorism, showed off the software on gleaming widescreen monitors, demonstrating how it could pull up images and video clips of people in red shirts. Tisch did not mention the partnership with IBM.

During Kelly’s tenure as police commissioner, the NYPD quietly worked with IBM as the company tested out its object recognition technology on a select number of NYPD and subway cameras, according to IBM documents. “We really needed to be able to test out the algorithm,” said Kjeldsen, who explained that the software would need to process massive quantities of diverse images in order to learn how to adjust to the differing lighting, shadows, and other environmental factors in its view. “We were almost using the video for both things at that time, taking it to the lab to resolve issues we were having or to experiment with new technology,” Kjeldsen said.

At the time, the department hoped that video analytics would improve analysts’ ability to identify suspicious objects and persons in real time in sensitive areas, according to Conor McCourt, a retired NYPD counterterrorism sergeant who said he used IBM’s program in its initial stages. “Say you have a suspicious bag left in downtown Manhattan, as a person working in the command center,” McCourt said. “It could be that the analytics saw the object sitting there for five minutes, and says, ‘Look, there’s an object sitting there.’” Operators could then rewind the video or look at other cameras nearby, he explained, to get a few possibilities as to who had left the object behind.

Over the years, IBM employees said, they started to become more concerned as they worked with the NYPD to allow the program to identify demographic characteristics. By 2012, according to the internal IBM documents, researchers were testing out the video analytics software on the bodies and faces of New Yorkers, capturing and archiving their physical data as they walked in public or passed through subway turnstiles. With these close-up images, IBM refined its ability to search for people on camera according to a variety of previously undisclosed features, such as age, gender, hair color (called “head color”), the presence of facial hair — and skin tone.
The documents reference meetings between NYPD personnel and IBM researchers to review the development of body identification searches conducted at subway turnstile cameras. “We were certainly worried about where the heck this was going,” recalled Kjeldsen. “There were a couple of us that were always talking about this, you know, ‘If this gets better, this could be an issue.’”

According to the NYPD, counterterrorism personnel accessed IBM’s bodily search features only for evaluation purposes, and only a handful of them could do so. “While tools that featured either racial or skin tone search capabilities were offered to the NYPD, they were explicitly declined by the NYPD,” Donald, the NYPD spokesperson, said. “Where such tools came with a test version of the product, the testers were instructed only to test other features (clothing, eyeglasses, etc.), but not to test or use the skin tone feature. That is not because there would have been anything illegal or even improper about testing or using these tools to search in the area of a crime for an image of a suspect that matched a description given by a victim or a witness. It was specifically to avoid even the suggestion or appearance of any kind of technological racial profiling.”

The NYPD ended its use of IBM’s video analytics program in 2016, Donald said. Donald acknowledged that, at some point in 2016 or early 2017, IBM approached the NYPD with an upgraded version of the video analytics program that could search for people by ethnicity. “The Department explicitly rejected that product,” he said, “based on the inclusion of that new search parameter.” In 2017, IBM released Intelligent Video Analytics 2.0, a product with a body camera surveillance capability that allows users to detect people captured on camera by “ethnicity” tags, such as “Asian,” “Black,” and “White.”

Kjeldsen, the former IBM researcher who helped develop the company’s skin tone analytics with NYPD camera access, said the department’s claim that the NYPD simply tested and rejected the bodily search features was misleading. “We would have not explored it had the NYPD told us, ‘We don’t want to do that,’” he said. “No company is going to spend money where there’s not customer interest.” Kjeldsen also added that the NYPD’s decision to allow IBM access to their cameras was crucial for the development of the skin tone search features, noting that during that period, New York City served as the company’s “primary testing area,” providing the company with considerable environmental diversity for software refinement. “The more different situations you can use to develop your software, the better it’s going to be,” Kjeldsen said. “That obviously pertains to people, skin tones, whatever it is you might be able to classify individuals as, and it also goes for clothing.”

The NYPD’s cooperation with IBM has since served as a selling point for the product at California State University, Northridge. There, campus police chief Anne Glavin said the technology firm IXP helped sell her on IBM’s object identification product by citing the NYPD’s work with the company. “They talked about what it’s done for New York City. IBM was very much behind that, so this was obviously of great interest to us,” Glavin said.

Day-to-Day Policing, Civil Liberties Concerns

The NYPD-IBM video analytics program was initially envisioned as a counterterrorism tool for use in midtown and lower Manhattan, according to Kjeldsen.
However, the program was integrated during its testing phase into dozens of cameras across the city. According to the former NYPD technologist, it could have been integrated into everyday criminal investigations. “All bureaus of the department could make use of it,” said the former technologist, potentially helping detectives investigate everything from sex crimes to fraud cases. Kjeldsen spoke of cameras being placed at building entrances and near parking entrances to monitor for suspicious loiterers and abandoned bags. Donald, the NYPD spokesperson, said the program’s access was limited to a small number of counterterrorism officials, adding, “We are not aware of any case where video analytics was a factor in an arrest or prosecution.”

Campus police at California State University, Northridge, who adopted IBM’s software, said the bodily search features have been helpful in criminal investigations. Asked whether officers have deployed the software’s ability to filter through footage for suspects’ clothing color, hair color, and skin tone, campus police Captain Scott VanScoy responded affirmatively, relaying a story about how university detectives were able to use such features to quickly filter through their cameras and find two suspects in a sexual assault case. “We were able to pick up where they were at different locations from earlier that evening and put a story together, so it saves us a ton of time,” VanScoy said. “By the time we did the interviews, we already knew the story and they didn’t know we had known.”

Glavin, the chief of the campus police, added that surveillance cameras using IBM’s software had been placed strategically across the campus to capture potential security threats, such as car robberies or student protests. “So we mapped out some CCTV in that area and a path of travel to our main administration building, which is sometimes where people will walk to make their concerns known and they like to stand outside that building,” Glavin said. “Not that we’re a big protest campus, we’re certainly not a Berkeley, but it made sense to start to build the exterior camera system there.”

Civil liberties advocates say they are alarmed by the NYPD’s secrecy in helping to develop a program with the potential capacity for mass racial profiling. The identification technology IBM built could be easily misused after a major terrorist attack, argued Rachel Levinson-Waldman, senior counsel in the Brennan Center’s Liberty and National Security Program. “Whether or not the perpetrator is Muslim, the presumption is often that he or she is,” she said. “It’s easy to imagine law enforcement jumping to a conclusion about the ethnic and religious identity of a suspect, hastily going to the database of stored videos and combing through it for anyone who meets that physical description, and then calling people in for questioning on that basis.” IBM did not comment on questions about the potential use of its software for racial profiling.
However, the company did send a comment to The Intercept pointing out that it was “one of the first companies anywhere to adopt a set of principles for trust and transparency for new technologies, including AI systems.” The statement went on to explain that IBM is “making publicly available to other companies a dataset of annotations for more than a million images to help solve one of the biggest issues in facial analysis — the lack of diverse data to train AI systems.”

Few laws clearly govern object recognition or the other forms of artificial intelligence incorporated into video surveillance, according to Clare Garvie, a law fellow at Georgetown Law’s Center on Privacy and Technology. “Any form of real-time location tracking may raise a Fourth Amendment inquiry,” Garvie said, citing a 2012 Supreme Court case, United States v. Jones, that involved police monitoring a car’s path without a warrant and resulted in five justices suggesting that individuals could have a reasonable expectation of privacy in their public movements. In addition, she said, any form of “identity-based surveillance” may compromise people’s right to anonymous public speech and association.

Garvie noted that while facial recognition technology has been heavily criticized for the risk of false matches, that risk is even higher for an analytics system “tracking a person by other characteristics, like the color of their clothing and their height,” since those characteristics are not unique to one person.

The former NYPD technologist acknowledged that video analytics systems can make mistakes, and noted a study where the software had trouble characterizing people of color: “It’s never 100 percent.” But the program’s identification of potential suspects was, he noted, only the first step in a chain of events that heavily relies on human expertise. “The technology operators hand the data off to the detective,” said the technologist. “You use all your databases to look for potential suspects and you give it to a witness to look at. … This is all about finding a way to shorten the time to catch the bad people.”

Object identification programs could also unfairly drag people into police suspicion just because of generic physical characteristics, according to Jerome Greco, a digital forensics staff attorney at the Legal Aid Society, New York’s largest public defenders organization. “I imagine a scenario where a vague description, like young black male in a hoodie, is fed into the system, and the software’s undisclosed algorithm identifies a person in a video walking a few blocks away from the scene of an incident,” Greco said. “The police find an excuse to stop him, and, after the stop, an officer says the individual matches a description from the earlier incident.” All of a sudden, Greco continued, “a man who was just walking in his own neighborhood” could be charged with a serious crime without him or his attorney ever knowing “that it all stemmed from a secret program which he cannot challenge.”

While the technology could be used for appropriate law enforcement work, Kjeldsen said that what bothered him most about his project was the secrecy he and his colleagues had to maintain. “We certainly couldn’t talk about what cameras we were using, what capabilities we were putting on cameras,” Kjeldsen said.
“They wanted to control public perception and awareness of LMSI” — the Lower Manhattan Security Initiative — “so we always had to be cautious about even that part of it, that we’re involved, and who we were involved with, and what we were doing.” (IBM did not respond to a question about instructing its employees not to speak publicly about its work with the NYPD.) The way the NYPD helped IBM develop this technology without the public’s consent sets a dangerous precedent, Kjeldsen argued. “Are there certain activities that are nobody’s business no matter what?” he asked. “Are there certain places on the boundaries of public spaces that have an expectation of privacy? And then, how do we build tools to enforce that? That’s where we need the conversation. That’s exactly why knowledge of this should become more widely available — so that we can figure that out.” This article was reported in partnership with the Investigative Fund at the Nation Institute. Source
15. from the result-of-asking-'why-not?'-rather-than-'why?' dept

Reuters has a long, detailed examination of the Chinese surveillance state. China's intrusion into the lives of its citizens has never been minimal, but advances in technology have allowed the government to keep tabs on pretty much every aspect of citizens' lives. Facial recognition has been deployed at scale, and it's not limited to finding criminals. It's used to identify regular citizens as they go about their daily lives. This is paired with license plate readers and a wealth of information gathered from online activity to provide the government dozens of data points for every citizen that wanders into the path of its cameras. Other biometric information is gathered and analyzed to help the security and law enforcement agencies better pin down exactly who it is they're looking at.

But it goes further than that. The Chinese version of stop-and-frisk involves "patting down" cellphones for illegal content or evidence of illegal activities. China is home to several companies offering phone cracking services and forensic software. It's not only Cellebrite and Grayshift, although these two are best known for selling tech to US law enforcement. Not that phone cracking is really a necessity in China. Most citizens hand over passwords when asked, considering the alternative isn't going to be detainment while a warrant is sought. It's far more likely to be something like a trip to a modern dungeon for a little conversational beating.

What's notable about this isn't the tech. This tech is everywhere. US law enforcement has access to much of this, minus the full-blown facial recognition and other biometric tracking. (That's on its way, though.) Plate readers, forensic devices, numerous law enforcement databases, social media tracking software… these are all in use already. Much of what China has deployed is being done in the name of security. That's the same justification for the massive surveillance apparatus erected after the 2001 attacks.

The framework for a totalitarian state is already in place. The only thing separating us from China is our Constitutional rights. Whenever you hear a US government official lamenting perps walking on technicalities or encryption making it tough to lock criminals up, keep in mind the alternative is China: a full-blown police state stocked to the teeth with surveillance tech. Source
16. Researchers believe a new encryption technique may be key to maintaining a balance between user privacy and government demands.

For governments worldwide, encryption is a thorn in the side of efforts to conduct surveillance, crack suspects' phones, and monitor communication. Officials are applying pressure on technology firms and app developers that provide end-to-end encryption services to give police forces a way to break that encryption. However, the moment you provide a backdoor into such services, you create a weak point that not only law enforcement and governments can use -- assuming that tunneling into a handset and monitoring it is even within legal bounds -- but threat actors as well, undermining the security of encryption as a whole.

As the mass surveillance and data collection activities of the US National Security Agency hit the headlines, faith in governments and their ability to restrain such spying to genuine cases of criminality began to weaken. Now, the use of encryption and secure communication channels is ever-more popular, technology firms are resisting efforts to implant deliberate weaknesses in encryption protocols, and neither side wants to budge. What can be done? At some point, something has got to give.

Researchers from Boston University believe they may have come up with a solution. On Monday, the team said they have developed a new encryption technique which gives authorities some access to communication, but without providing unlimited access in practice. In other words, a middle ground: a way to break encryption to placate law enforcement, but not to the extent that mass surveillance of the general public is possible.

Mayank Varia, Research Associate Professor at Boston University and cryptography expert, developed the new technique, known as cryptographic "crumpling." In a paper documenting the research, lead author Varia says that the new cryptography methods could be used for "exceptional access" to encrypted data for government purposes while keeping user privacy at large at a reasonable level. "Our approach places most of the responsibility for achieving exceptional access on the government, rather than on the users or developers of cryptographic tools," the paper notes. "As a result, our constructions are very simple and lightweight, and they can be easily retrofitted onto existing applications and protocols."

The crumpling techniques use two approaches: the first is a Diffie-Hellman key exchange over modular arithmetic groups which leads to an "extremely expensive" puzzle that must be solved to break the protocol, and the second is a "hash-based proof of work to impose a linear cost on the adversary for each message" recovered. Crumpling requires strong, modern cryptography as a precondition, because it relies on per-message encryption keys and fine-grained key management. The system requires this infrastructure so that a small number of messages can be targeted without full-scale exposure. The team says this condition will also only permit "passive" decryption attempts, rather than man-in-the-middle (MiTM) attacks.

By introducing cryptographic puzzles into the generation of per-message cryptographic keys, the keys remain recoverable, but recovering them requires vast resources. In addition, each puzzle is chosen independently for each key, which means "the government must expend effort to solve each one."
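To make the per-message puzzle idea concrete, here is a deliberately simplified toy sketch -- not the construction from the paper -- in which part of each message key is withheld behind a hash commitment, so that recovering any single key is feasible but costs a tunable, linear amount of brute-force work:

import hashlib
import secrets

# Toy sketch of a per-message key puzzle (NOT the paper's construction):
# generate a random 128-bit message key, keep a hash commitment to it, and
# reveal all but `n` of its bits. Recovering the key means brute-forcing
# the 2**n missing bits -- possible, but at a predictable cost per message.

N_PUZZLE_BITS = 20  # a real deployment would use a far larger work factor

def crumple(key: bytes, n: int):
    commitment = hashlib.sha256(key).digest()
    known_bits = int.from_bytes(key, "big") >> n  # drop the low n bits
    return commitment, known_bits

def solve(commitment: bytes, known_bits: int, n: int) -> bytes:
    for guess in range(2 ** n):  # linear-cost search, paid per message
        candidate = ((known_bits << n) | guess).to_bytes(16, "big")
        if hashlib.sha256(candidate).digest() == commitment:
            return candidate
    raise ValueError("no matching key found")

key = secrets.token_bytes(16)
commitment, known_bits = crumple(key, N_PUZZLE_BITS)
assert solve(commitment, known_bits, N_PUZZLE_BITS) == key

Because every message gets its own independent puzzle, solving one key reveals nothing about the next, and scaling the number of withheld bits scales the per-message cost -- the knob the researchers use to price exceptional access out of mass-surveillance territory.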
"Like a crumple zone in automotive engineering, in an emergency situation the construction should break a little bit in order to protect the integrity of the system as a whole and the safety of its human users," the paper notes. "We design a portion of our puzzles to match Bitcoin's proof of work computation so that we can predict their real-world marginal cost with reasonable confidence." To prevent unauthorized attempts to break encryption an "abrasion puzzle" serves as a gatekeeper which is more expensive to solve than individual key puzzles. While this would not necessarily deter state-sponsored threat actors, it may at least deter individual cyberattackers as the cost would not be worth the result. The new technique would allow governments to recover the plaintext for targeted messages, however, it would also be prohibitively expensive. A key length of 70 bits, for example -- with today's hardware -- would cost millions and force government agencies to choose their targets carefully and the expense would potentially prevent misuse. The research team estimates that the government could recover less than 70 keys per year with a budget of close to $70 million dollars upfront -- one million dollars per message and the full amount set out in the US' expanded federal budget to break encryption. However, there could also be additional costs of $1,000 to $1 million per message, and these kind of figures are difficult to conceal, especially as one message from a suspected criminal in a conversation without contextual data is unlikely to ever be enough to secure conviction. The research team says that crumpling can be adapted for use in common encryption services including PGP, Signal, as well as full-disk and file-based encryption. "We view this work as a catalyst that can inspire both the research community and the public at large to explore this space further," the researchers say. "Whether such a system will ever be (or should ever be) adopted depends less on technology and more on questions for society to answer collectively: whether to entrust the government with the power of targeted access and whether to accept the limitations on law enforcement possible with only targeted access." The research was funded by the National Science Foundation. Source
  17. part 1

(YET ANOTHER) WARNING... Your online activities are now being tracked and recorded by various government and corporate entities around the world. This information can be used against you at any time and there is no real way to "opt out".

In the past decade, we have seen the systematic advancement of the surveillance apparatus throughout the world. The United States, United Kingdom, Australia, and Canada have all passed laws allowing, and in some cases forcing, telecom companies to bulk-collect your data:

- United States – In March 2017 the US Congress passed legislation that allows internet service providers to collect, store, and sell your private browsing history, app usage data, location information and more – without your consent. This essentially allows Comcast, Verizon, AT&T and other providers to monetize their customers' data and sell it to the highest bidders (usually for targeted advertising).
- United Kingdom – In November 2016 the UK Parliament passed the infamous Snoopers' Charter (Investigatory Powers Act), which forces internet providers and phone companies to bulk-collect customer data. This includes private browsing history, social media posts, phone calls, text messages, and more. The information is stored for 12 months in a giant database that is accessible to 48 different government agencies. The erosion of free speech is also rapidly underway, as various laws allow UK authorities to lock up anyone they deem to be "offensive" (1984 is already here).
- Australia – In April 2017 the Australian government passed a massive data retention law that forces telecoms to collect and store text messages, phone calls, location information, and internet connection data for a full two years, with the data accessible to authorities without a warrant.

Canada, Europe, and other parts of the world have similar laws and policies already in place. What you are witnessing is the rapid expansion of the global surveillance state, whereby corporate and government entities work together to monitor and record everything you do.

What the hell is going on here?

Perhaps you are wondering why all this is happening. There is a simple answer to that question: control.

Just as we have seen throughout history, government surveillance is simply a tool used for control. This could be maintaining a hold on power, controlling a population, or controlling the flow of information in a society. You will notice that the violation of your right to privacy is always justified by various excuses – from "terrorism" to tax evasion – but never forget: it's really about control.

Along the same lines, corporate surveillance is also about control. Collecting your data helps private entities steer your buying decisions, habits, and desires. The tools for doing this are all around you: apps on your devices, social networks, tracking ads, and many free products that simply bulk-collect your data (when something is free, you are the product). This is why the biggest collectors of private data – Google and Facebook – are also the two businesses that completely dominate the online advertising industry. To sum this up, advertising today is all about the buying and selling of individuals.

But it gets even worse...

Now we have full-scale cooperation between government and corporate entities to monitor your every move. In other words, governments are enlisting private corporations to carry out bulk data collection on entire populations.
Your internet service provider has become an adversary working on behalf of the surveillance state. This basic trend is happening in much of the world, but it has been well documented in the United States with the PRISM program.

So why should you care? Everything being collected could be used against you today, or at any time in the future, in ways you may not be able to imagine. In many parts of the world, particularly in the UK, thought-crime laws are already in place: if you do something deemed "offensive", you could end up rotting away in a jail cell for years. We have seen this tactic used throughout history to lock up dissidents – and it is alive and well in the Western world today. From a commercial standpoint, corporate surveillance is already being used to harvest your data and hit you with targeted ads, thereby monetizing your private life.

Reality check

Many talking heads in the media will attempt to confuse you by pretending this is a problem with a certain politician or perhaps a political party. But that's a bunch of garbage to distract you from the bigger truth. For decades, politicians from all sides (left and right) have worked hard to advance the surveillance agenda around the world. Again, it's all about control, regardless of which puppet is in office. So contrary to what various groups are saying, you are not going to solve this problem by writing a letter to another politician or signing some online petition. Forget about it.

Instead, you can take concrete steps right now to secure your data and protect your privacy. Restore Privacy is all about giving you the tools and information to do that. If you feel overwhelmed by all this, just relax: the privacy tools you need are easy to use no matter what level of experience you have.

Arguably the most important privacy tool is a good VPN (virtual private network). A VPN encrypts and anonymizes your online activity by creating a secured tunnel between your computer and a VPN server, making your data and online activities unreadable to government surveillance, your internet provider, hackers, and other third-party snoopers (a toy sketch of the tunnel idea follows at the end of this post). A VPN also lets you hide your real IP address, spoof your location, and access blocked content from anywhere in the world. Check out the best VPN guide to get started.

Stay safe!

SOURCE
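As a postscript to the article above, here is a minimal Python sketch of the "secured tunnel" idea, assuming the third-party pyca/cryptography package is installed. It only illustrates why an ISP on the path sees opaque ciphertext rather than your traffic; real VPN protocols such as WireGuard or OpenVPN add authenticated key exchange and kernel-level packet tunneling on top of this.

```python
# Minimal sketch of the VPN tunnel idea: traffic is encrypted between the
# client and the VPN server, so the ISP in the middle observes only
# ciphertext. Requires the third-party 'cryptography' package.
from cryptography.fernet import Fernet

tunnel_key = Fernet.generate_key()  # agreed via a real handshake in practice
client = Fernet(tunnel_key)
vpn_server = Fernet(tunnel_key)

request = b"GET http://example.com/ HTTP/1.1"
wire_bytes = client.encrypt(request)   # all the ISP can log: opaque bytes
assert request not in wire_bytes       # plaintext never crosses the ISP

plaintext = vpn_server.decrypt(wire_bytes)  # server decrypts and forwards
print(plaintext)
```

Note the design point this makes visible: encryption does not remove trust, it relocates it. The VPN server still sees the decrypted traffic it forwards, which is why the choice of provider matters.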
  18. Just as civil liberties groups challenge the legality of the UK intelligence agency's mass surveillance programs, a catalog of exploit tools for monitoring and manipulation is leaked online.

The Joint Threat Research Intelligence Group (JTRIG), a department within the Government Communications Headquarters (GCHQ), "develops the majority of effects capabilities" for the UK's NSA-flavored intelligence agency. First Look Media published the Snowden-leaked, Wikipedia-like document full of covert tools used by GCHQ for surveillance and propaganda.

JTRIG tools and techniques help British spies "seed the internet with false information, including the ability to manipulate the results of online polls," monitor social media posts, and launch attacks ranging from denial of service, to call-bombing phones, to disabling users' accounts on PCs. Devil's Handshake, Dirty Devil, Reaper and Poison Arrow are but a few vicious-sounding JTRIG tools, but the naming convention for others is just inane, like Bumblebee Dance, Techno Viking and Jazz Fusion. Perhaps the British spies were hungry when coming up with Fruit Bowl, Spice Island, Nut Allergy and Berry Twister?

Most of the tools are "fully operational, tested and reliable," according to the 2012 JTRIG manual, but: "Don't treat this like a catalog. If you don't see it here, it doesn't mean we can't build it." Like the previously leaked TAO exploits, it's an eye-opener as to the exploits GCHQ can deploy. Some of the especially invasive tools that are "either ready to fire or very close to being ready" include:

- Angry Pirate can "permanently disable a target's account on their computer."
- Stealth Moose can "disrupt" a target's "Windows machine. Logs of how long and when the effect is active."
- Sunblock can "deny functionality to send/receive email or view material online."
- Swamp Donkey "silently" finds and encrypts all predefined types of files on a target's machine.
- Tracer Fire is an "Office document that grabs the targets machine info, files, logs, etc and posts it back to GCHQ."
- Gurkhas Sword is a tool for "beaconed Microsoft Office documents to elicit a targets IP address."
- Tornado Alley is a delivery system aimed at Microsoft Excel "to silently extract and run an executable on a target's machine."
- Changeling provides UK spies with the "ability to spoof any email address and send email under that identity."
- Glassback gets a target's IP by "pretending to be a spammer and ringing them. Target does not need to answer."

Denial of service

- Rolling Thunder uses P2P for distributed denial of service.
- Predators Face is used for "targeted denial of service against web servers."
- Silent Movie provides "targeted denial of service against SSH services."

Other JTRIG exploits include Screaming Eagle, "a tool that processes Kismet data into geolocation information," and Chinese Firecracker for "overt brute login attempts against online forums." Hacienda is a "port scanning tool designed to scan an entire country or city" before identifying IP locations and adding them to an "Earthling database."

Messing with cellphones

- Burlesque can "send spoofed SMS text messages."
- Cannonball can "send repeated text messages to a single target."
- Concrete Donkey can "scatter an audio message to a large number of telephones, or repeatedly bomb a target number with the same message."
- Deer Stalker provides a way to silently call a satellite or GSM phone "to aid geolocation."
- Imperial Barge can connect two target phones together in a call.
- Mustang "provides covert access to the locations of GSM cell towers."
- Scarlet Emperor is used for denial of service against targets' phones via call bombing.
- Scrapheap Challenge provides "perfect spoofing of emails from BlackBerry targets."
- Top Hat is "a version of Mustang and Dancing Bear techniques that allows us to pull back cell tower and Wi-Fi locations targeted against particular areas."
- Vipers Tongue is another denial of service tool, aimed at satellite or GSM phone calls.

Manipulation and propaganda

- Bomb Bay can "increase website hits/rankings."
- Gateway can "artificially increase traffic to a website."
- Slipstream can "inflate page views on websites."
- Underpass "can change the outcome of online polls."
- Badger can mass-deliver email messages "to support an Information Operations campaign."
- Gestator can amplify a "given message, normally video, on popular multimedia websites" like YouTube.
- Skyscraper handles the "production and dissemination of multimedia via the web in the course of information operations."

There are also various tools to censor or report "extremist" content.

Online surveillance of social networks

- Godfather collects public data from Facebook, Spring Bishop finds private photos of targets on Facebook, and Reservoir allows the collection of various Facebook information.
- Clean Sweep can "masquerade Facebook wall posts for individuals or entire countries."
- Birdstrike monitors and collects Twitter profiles.
- Dragon's Snout collects Paltalk group chats.
- Airwolf collects YouTube videos, comments and profiles.
- Bugsy collects users' info off Google+.
- Fatyak collects data from LinkedIn.
- Goodfella is a "generic framework to collect public data from online social networks."
- Elate monitors a target's use of UK's eBay.
- Mouth finds, collects and downloads a user's files from archive.org.
- Photon Torpedo can "actively grab the IP address of an MSN messenger user."
- Pitbull is aimed at large-scale delivery of tailored messages to IM services.
- Miniature Hero is about exploiting Skype. The description states: "Active Skype capability. Provision of real time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists."

If that's not enough mass-scale surveillance and manipulation to irk you, there are more weaponized tricks and techniques in the JTRIG manual.

Source
  19. Britain's electronic eavesdropping center GCHQ faces legal action from seven internet service providers who accuse it of illegally accessing "potentially millions of people's private communications," campaigners said Wednesday.

The claim threatens fresh embarrassment for the British authorities after leaks by fugitive NSA worker Edward Snowden showed GCHQ was a key player in covert US surveillance operations globally. The complaint has been filed at a London court by ISPs Riseup and May First/People Link of the US, GreenNet of Britain, Greenhost of the Netherlands, Mango of Zimbabwe, Jinbonet of South Korea and the Chaos Computer Club of Germany, plus campaigners Privacy International. They claim that GCHQ carried out "targeted operations against internet service providers to conduct mass and intrusive surveillance."

The move follows a series of reports by German magazine Der Spiegel which claimed to detail GCHQ's illicit activities. These reportedly included targeting a Belgian telecommunications company, Belgacom, where staff computers were infected with malware in a "quantum insert" attack to secure access to customers. The legal complaint says this was "not an isolated attack" and alleges violations of Britain's Human Rights Act and the European Convention on Human Rights.

"These widespread attacks on providers and collectives undermine the trust we all place on the internet and greatly endangers the world's most powerful tool for democracy and free expression," said Eric King, Privacy International's deputy director. Britain's Foreign Office did not immediately comment.

GCHQ, which stands for Government Communications Headquarters, employs around 5,500 people and is housed in a giant doughnut-shaped building in the sleepy town of Cheltenham, southwest England. Snowden's leaks claimed that the NSA had secretly funded GCHQ to the tune of £100 million ($160 million, 120 million euros) over the last three years.

Source
  20. The House of Representatives last night overwhelmingly passed an amendment to the Department of Defense Appropriations Act that would cut funding for two programs that grant intelligence agencies access to the private data and communications of U.S. citizens.

The amendment shows that Congress is willing to follow a different tactic to rein in government surveillance powers after a more straightforward legislative approach failed last month. Privacy and civil rights advocates heralded that first effort, known as the USA FREEDOM Act, as a promising step toward controlling government spying powers when it came out of committee. However, once it hit the House floor for debate, the broader Congress summarily crippled the committee's efforts by vaguely defining key terms in the FREEDOM Act.

The new measure was sponsored by U.S. Reps. Jim Sensenbrenner (R-Wis.), Zoe Lofgren (D-Calif.), Thomas Massie (R-Ky.) and a bipartisan group of lawmakers. The Massie-Lofgren Amendment passed in a 293 (ayes) to 139 (noes) to 1 (present) vote. Lawmakers say it will close off two so-called backdoors: according to the amendment's sponsors, one would be shut by prohibiting the search of government databases for information pertaining to U.S. citizens without a warrant, and the other would prohibit the National Security Agency and Central Intelligence Agency from requiring actual technological backdoors in products.

In the Electronic Frontier Foundation's (EFF) words, the amendment would block the NSA from using any of its funding from this Defense Appropriations Bill to conduct such warrantless searches. In addition, the amendment would prohibit the NSA from using its budget to mandate or request that private companies and organizations add backdoors to the encryption standards that are meant to keep you safe on the web.

"This amendment will reinstate an important provision that was stripped from the original USA FREEDOM Act to further protect the Constitutional rights of American citizens," Sensenbrenner, Lofgren, and Massie said. "Congress has an ongoing obligation to conduct oversight of the intelligence community and its surveillance authorities."

Congressional officials say the amendment has support from both major parties. It is also reportedly backed by tech firms, civil rights groups, and political action committees, including the American Civil Liberties Union, the Liberty Coalition, the EFF, Google, FreedomWorks, Campaign for Liberty, Demand Progress, and the Center for Democracy and Technology. In a statement, the EFF described the move as an important first step in reining in the NSA and applauded the House for its efforts.

Like a stand-alone bill, the amendment must be passed by the Senate and signed by the president in order to become law. The amendment's additional sponsors included Reps. John Conyers (D-Mich.), Ted Poe (R-Texas), Tulsi Gabbard (D-Hawaii), Jim Jordan (R-Ohio), Robert O'Rourke (D-Texas), Justin Amash (R-Mich.), Rush Holt (D-N.J.), Jerrold Nadler (D-N.Y.) and Tom Petri (R-Wis.).

Source
  21. Online banking and shopping in America are being negatively impacted by ongoing revelations about the National Security Agency's digital surveillance activities. That is the clear implication of a recent ESET-commissioned Harris poll which asked more than 2,000 U.S. adults ages 18 and older whether or not, given the news about the NSA's activities, they have changed their approach to online activity.

Almost half of respondents (47%) said they have changed their online behavior and think more carefully about where they go, what they say, and what they do online. When it comes to specific Internet activities, such as email or online banking, this change in behavior translates into a worrying trend for the online economy: over one quarter of respondents (26%) said that, based on what they have learned about secret government surveillance, they are now doing less banking online and less online shopping. This shift in behavior is not good news for companies that rely on sustained or increased use of the Internet for their business model.

Online commerce shrinkage?

After 20 years of seemingly limitless expansion of Internet commerce, these poll numbers may come as something of a shock to online firms, but they were not a complete surprise to ESET researchers. Last fall we detected early signs of this phenomenon when we conducted a smaller survey of "post-Snowden" attitudes. Some respondents reported reduced online shopping and banking behavior (14% and 19% respectively). At that time it was reasonable to speculate that such changes in behavior might be a temporary blip, but our latest findings suggest otherwise. The reasons are not hard to find: continued revelations from the Snowden documents and a lack of convincing reassurances from government about privacy protections.

The news for online stores and financial services does not get any better when you dig deeper into the numbers. The economically important 18-34 age group is more likely to say they are doing less shopping online (33% compared to an overall 26%). Online retailers who rely more on female shoppers should note that 29% of women surveyed said they have reduced how much they shop online (compared to 23% of men and 26% overall). When it comes to banking online, 29% of those in the 18-34 age bracket had cut back, as had 30% of those aged 65 and older.

Clearly, these findings will concern the retail and financial services sectors, but the news is also bad for just about any sector of the American economy where replacing physical contact with electronic communication is part of the business model. Just under one quarter of respondents (24%) said that, based on what they have learned about secret government surveillance, they are less inclined to use email. Important economic sectors ranging from healthcare to education and government are looking at expanded use of electronic communications as a way to cut costs and improve service levels. Those objectives could be harder to attain if a significant percentage of the public is less inclined to use those channels. We observed a higher-than-average contraction in email use in the 18-34 age group (32%) and in households where annual household income is under $50,000.

Ongoing impact of privacy intrusions

As a recent New York Times article titled "Revelations of N.S.A. Spying Cost U.S. Tech Companies" observed: "It is impossible to see now the full economic ramifications of the spying disclosures." However, when you look at this new survey and our previous research, it is clear that changes in online behavior have already taken place – changes with broad economic ramifications. Whether or not we have seen the full extent of the public's reaction to state-sponsored mass surveillance is hard to predict, but based on this survey and the one we did last year, I would say that if the NSA revelations continue – and I am sure they will – and if government reassurances fail to impress the public, then the trends in behavior we are seeing right now are likely to continue. For example, I do not see many people finding reassurance in President Obama's recently announced plan to transfer the storage of millions of telephone records from the government to private phone companies.

As we will document in our next installment of survey findings, data gathering by companies is even more of a privacy concern for some Americans than government surveillance. And in case anyone is tempted to think this is a narrow issue of concern only to news junkies and security geeks, let me be clear: according to this latest survey, 85% of adult Americans are now at least somewhat familiar with the news about secret government surveillance of private citizens' phone calls, emails, online activity, and so on.

As to what should be done about this situation and its effects on commerce, privacy, and online behavior, I will have more findings to share in my next blog post, along with suggested strategies for companies that may be impacted.

Source
  22. I've been thinking, with all the revelations coming out about the NSA spying on all of us, that maybe we've been reacting the wrong way. We all seem to fall somewhere on the spectrum of being upset about this, from the mildly uncomfortable but resigned folks who are okay with the spying to those more militant about privacy. What if we're all just pissed that we aren't the ones getting to do all this sweet, sweet surveillance on everyone we know?

Well, that's all changed, thanks to the makers of the mSpy software, which lets you gift smartphones preinstalled with their software to those you care about most and then play NSA on them to your heart's content. Starting today, the company is also selling phones preloaded with the software, making it simple for users without any tech savvy to start surveillance right out of the box. The phone package is available with the HTC One, Nexus 5, Samsung Galaxy S4 and iPhone 5s, at varying cost; for example, the Samsung Galaxy S4 costs $300, and the subscription for the preloaded software costs another $199 for a year. [From the moment the software is installed], the phone records everything that happens on the device and sends the details to a remote website. Every call is recorded, every keystroke logged, every email seen, every SMS chat or photograph monitored.

Count me as someone who is suddenly even more glad than ever that I'm out of the dating world. On the other hand, I suppose it'll be weird for any of us married folks to get smartphones as gifts from now on as well. Oh well, down the surveillance hatch, I say! The NSA spies on us, we spy on each other, and the important thing to remember is that the makers of this software, which advertises to buyers that their targets "won't find out", are the most innocent of innocents here. The phone's proclaimed target markets are employers and parents, who have the legal authority to watch what their children do on their smartphones.

Company founder Andrei Shimanovich knows others may use his products in illegal ways, but says it is not his responsibility. "It is the same question with the gun producer," says Shimanovich, a Belarus native who recently moved to New York. "If you go out and buy a gun and go shoot someone, no one will go after the gun producer. People who shoot someone will be responsible for this. Same thing for mSpy. We just provide the services which can solve certain tasks regarding parents and teenagers."

And creepy bastards, estranged lovers, stalkers, or anyone else who might be able to surreptitiously sneak this software onto the phone of whomever they're targeting. While it's completely true that we ought not blame the tool-maker for the way the tool is used, that doesn't discount the level of creepy in this software. Gone, apparently, are the days when parents raised their children to be responsible and then loosed them on the world to make a few mistakes and grow up better because of it. Gone are the days when employers made it a point to hire staff they trusted. The NSA has paved the way for a whole new level of Orwellian acceptance, where the only difference between government surveillance and the kind we do ourselves is that our personal spying might actually be effective, since it will be more targeted.

Prepare yourselves, people, for when the news media first gets hold of some stalker who commits a violent act and is found to have employed this software, because the backlash against it is going to be insane.

Source
  23. By NICOLE PERLROTH
February 11, 2014, 9:13 pm

So much for mass protest. A consortium of Internet and privacy activists had long promoted Feb. 11 as the day the Internet would collectively stand up and shout down surveillance by the National Security Agency. The group called Tuesday "The Day We Fight Back" and encouraged websites to join an online campaign modeled after the protests against the Stop Online Piracy Act and Protect I.P. Act two years ago, when sites like Reddit and Wikipedia and companies like Google and Facebook helped successfully topple antipiracy legislation.

Instead, the protest on Tuesday barely registered. Wikipedia did not participate. Reddit – which went offline for 12 hours during the protests two years ago – added an inconspicuous banner to its homepage. Sites like Tumblr, Mozilla and DuckDuckGo, which were listed as organizers, did nothing to their homepages. The most vocal protesters were the usual suspects: activist groups like the Electronic Frontier Foundation, the American Civil Liberties Union, Amnesty International and Greenpeace. The eight major technology companies – Google, Microsoft, Facebook, AOL, Apple, Twitter, Yahoo and LinkedIn – that joined forces in December in a public campaign to "reform government surveillance" participated on Tuesday only insofar as having a joint website flash the protest banner.

[A promotional video from the organizers of "The Day We Fight Back."]

The difference may be explained by the fact that two years ago, the Internet powerhouses were trying to halt new legislation. On Tuesday, people were being asked to reverse a secret, multibillion-dollar surveillance effort by five countries that has been in place for nearly a decade. And unlike 2012, when the goal was simply to block the passage of new bills, the goal of the protests on Tuesday was more muddled. This time around, participants were urged to flash a banner on their sites urging visitors to call their congressional representatives in support of the U.S.A. Freedom Act – a bill sponsored by Representative Jim Sensenbrenner, Republican of Wisconsin, and Senator Patrick Leahy, Democrat of Vermont, which seeks to reform the N.S.A.'s metadata database. They were also asked to oppose the FISA Improvements Act, a bill proposed by Senator Dianne Feinstein that would help legalize the N.S.A.'s metadata collection program.

All was not lost. By late Tuesday, some 70,000 calls had been placed to legislators and roughly 150,000 people had sent their representatives an email. But on privacy forums and Reddit, significant discussions failed to materialize. "Online petitions," one Reddit user wrote of the protest. "The very least you can do, without doing nothing."

http://bits.blogs.nytimes.com/2014/02/11/the-day-the-internet-didnt-fight-back/?_php=true&_type=blogs&_r=0

Was Nsane among the 6,000 websites? Only the Nsane Management Team would have the answer :)