Showing results for tags 'surveillance'.



Found 54 results

  1. T-Mobile, Sprint, and AT&T are selling access to their customers’ location data, and that data is ending up in the hands of bounty hunters and others not authorized to possess it, letting them track most phones in the country.

Nervously, I gave a bounty hunter a phone number. He had offered to geolocate a phone for me, using a shady, overlooked service intended not for the cops, but for private individuals and businesses. Armed with just the number and a few hundred dollars, he said he could find the current location of most phones in the United States.

The bounty hunter sent the number to his own contact, who would track the phone. The contact responded with a screenshot of Google Maps, containing a blue circle indicating the phone’s current location, approximate to a few hundred metres. Queens, New York. More specifically, the screenshot showed a location in a particular neighborhood, just a couple of blocks from where the target was. The hunter had found the phone. (The target gave their consent to Motherboard to be tracked via their T-Mobile phone.)

The bounty hunter did this all without deploying a hacking tool or having any previous knowledge of the phone’s whereabouts. Instead, the tracking tool relies on real-time location data sold to bounty hunters that ultimately originated from the telcos themselves, including T-Mobile, AT&T, and Sprint, a Motherboard investigation has found. These surveillance capabilities are sometimes sold through word-of-mouth networks. 
It’s common knowledge that law enforcement agencies can track phones with a warrant served on service providers, with IMSI catchers, or, until recently, via other companies that sell location data, such as one called Securus. But at least one company, called Microbilt, is selling phone geolocation services with little oversight to a spread of different private industries, ranging from car salesmen and property managers to bail bondsmen and bounty hunters, according to sources familiar with the company’s products and company documents obtained by Motherboard. Compounding that already highly questionable business practice, this spying capability is also being resold to others on the black market who are not licensed by the company to use it, including me, seemingly without Microbilt’s knowledge.

Motherboard’s investigation shows just how exposed mobile networks and the data they generate are, leaving them open to surveillance by ordinary citizens, stalkers, and criminals. It comes as media and policy makers are paying more attention than ever to how location and other sensitive data is collected and sold. The investigation also shows that a wide variety of companies can access cell phone location data, and that the information trickles down from cell phone providers to a wide array of smaller players, who don’t necessarily have the correct safeguards in place to protect that data.

“People are reselling to the wrong people,” the bail industry source who flagged the company to Motherboard said. Motherboard granted the source and others in this story anonymity to talk more candidly about a controversial surveillance capability.

Your mobile phone is constantly communicating with nearby cell phone towers, so your telecom provider knows where to route calls and texts. From this, telecom companies also work out the phone’s approximate location based on its proximity to those towers. 
Although many users may be unaware of the practice, telecom companies in the United States sell access to their customers’ location data to other companies, called location aggregators, who then sell it to specific clients and industries. Last year, one location aggregator called LocationSmart faced harsh criticism for selling data that ultimately ended up in the hands of Securus, a company which provided phone tracking to low-level law enforcement without requiring a warrant. LocationSmart also exposed the very data it was selling through a buggy website panel, meaning anyone could geolocate nearly any phone in the United States at the click of a mouse.

There’s a complex supply chain that shares some of American cell phone users’ most sensitive data, with the telcos potentially being unaware of how the data is being used by the eventual end user, or even whose hands it lands in. Financial companies use phone location data to detect fraud; roadside assistance firms use it to locate stuck customers. But AT&T, for example, told Motherboard the use of its customers’ data by bounty hunters goes explicitly against the company’s policies, raising questions about how AT&T allowed the sale for this purpose in the first place. “The allegation here would violate our contract and Privacy Policy,” an AT&T spokesperson told Motherboard in an email.

In the case of the phone we tracked, six different entities had potential access to the phone’s data. T-Mobile shares location data with an aggregator called Zumigo, which shares information with Microbilt. Microbilt shared that data with a customer using its mobile phone tracking product. The bounty hunter then shared this information with a bail industry source, who shared it with Motherboard. 
The CTIA, a telecom industry trade group of which AT&T, Sprint, and T-Mobile are members, has official guidelines for the use of so-called “location-based services” that “rely on two fundamental principles: user notice and consent,” the group wrote in those guidelines. Telecom companies and data aggregators that Motherboard spoke to said that they require their clients to get consent from the people they want to track, but it’s clear that this is not always happening.

A second source who has tracked the geolocation industry told Motherboard, while talking about the industry generally, “If there is money to be made they will keep selling the data.” “Those third-level companies sell their services. That is where you see the issues with going to shady folks [and] for shady reasons,” the source added.

Frederike Kaltheuner, data exploitation programme lead at campaign group Privacy International, told Motherboard in a phone call that “it’s part of a bigger problem; the US has a completely unregulated data ecosystem.”

Microbilt buys access to location data from an aggregator called Zumigo and then sells it to a dizzying number of sectors, including landlords scoping out potential renters, motor vehicle salesmen, and others conducting credit checks. Armed with just a phone number, Microbilt’s “Mobile Device Verify” product can return a target’s full name and address, geolocate a phone in an individual instance, or operate as a continuous tracking service. “You can set up monitoring with control over the weeks, days and even hours that location on a device is checked as well as the start and end dates of monitoring,” a company brochure Motherboard found online reads. Posing as a potential customer, Motherboard explicitly asked a Microbilt customer support staffer whether the company offered phone geolocation for bail bondsmen. 
Shortly after, another staffer emailed with a price list—locating a phone can cost as little as $4.95 each if searching for a low number of devices. That price gets even cheaper as the customer buys the capability to track more phones. Getting real-time updates on a phone’s location can cost around $12.95. “Dirt cheap when you think about the data you can get,” the source familiar with the industry added.

It’s bad enough that access to highly sensitive phone geolocation data is already being sold to a wide range of industries and businesses. But there is also an underground market that Motherboard used to geolocate a phone—one where Microbilt customers resell their access at a profit, and with minimal oversight.

“Blade Runner, the iconic sci-fi movie, is set in 2019. And here we are: there's an unregulated black market where bounty-hunters can buy information about where we are, in real time, over time, and come after us. You don't need to be a replicant to be scared of the consequences,” Thomas Rid, professor of strategic studies at Johns Hopkins University, told Motherboard in an online chat.

The bail industry source said his middleman used Microbilt to find the phone. This middleman charged $300, a sizeable markup on the usual Microbilt price. The Google Maps screenshot provided to Motherboard of the target phone’s location also included its approximate longitude and latitude coordinates, and a range of how accurate the phone geolocation is: 0.3 miles, or just under 500 metres. It may not necessarily be enough to geolocate someone to a specific building in a populated area, but it can certainly pinpoint a particular borough, city, or neighborhood.

In other cases, phone geolocation is typically done with the consent of the target, perhaps by sending a text message the user has to deliberately reply to, signalling that they accept their location being tracked. 
This may be done in the earlier roadside assistance example or when a company monitors its fleet of trucks. But when Motherboard tested the geolocation service, the target phone received no warning it was being tracked.

The bail source who originally alerted Motherboard to Microbilt said that bounty hunters have used phone geolocation services for non-work purposes, such as tracking their girlfriends. Motherboard was unable to identify a specific instance of this happening, but domestic stalkers have repeatedly used technology, such as mobile phone malware, to track spouses.

As Motherboard was reporting this story, Microbilt removed documents related to its mobile phone location product from its website.

https://www.documentcloud.org/documents/5676919-Microbilt-Mobile-Device-Verify-2018.html

A Microbilt spokesperson told Motherboard in a statement that the company requires that anyone using its mobile device verification services for fraud prevention first obtain the consent of the consumer. Microbilt also confirmed it found an instance of abuse on its platform—our phone ping.

“The request came through a licensed state agency that writes in approximately $100 million in bonds per year and passed all up front credentialing under the pretense that location was being verified to mitigate financial exposure related to a bond loan being considered for the submitted consumer,” Microbilt said in an emailed statement. In this case, “licensed state agency” is referring to a private bail bond company, Motherboard confirmed.

“As a result, MicroBilt was unaware that its terms of use were being violated by the rogue individual that submitted the request under false pretenses, does not approve of such use cases, and has a clear policy that such violations will result in loss of access to all MicroBilt services and termination of the requesting party’s end-user agreement,” Microbilt added. 
“Upon investigating the alleged abuse and learning of the violation of our contract, we terminated the customer’s access to our products and they will not be eligible for reinstatement based on this violation.”

Zumigo confirmed it was the company that provided the phone location to Microbilt and defended its practices. In a statement, Zumigo did not seem to take issue with the practice of providing data that ultimately ended up with licensed bounty hunters, but wrote, “illegal access to data is an unfortunate occurrence across virtually every industry that deals in consumer or employee data, and it is impossible to detect a fraudster, or rogue customer, who requests location data of his or her own mobile devices when the required consent is provided. However, Zumigo takes steps to protect privacy by providing a measure of distance (approx. 0.5-1.0 mile) from an actual address.” Zumigo told Motherboard it has cut Microbilt’s data access.

In Motherboard’s case, the successfully geolocated phone was on T-Mobile. “We take the privacy and security of our customers’ information very seriously and will not tolerate any misuse of our customers’ data,” a T-Mobile spokesperson told Motherboard in an emailed statement. “While T-Mobile does not have a direct relationship with Microbilt, our vendor Zumigo was working with them and has confirmed with us that they have already shut down all transmission of T-Mobile data. T-Mobile has also blocked access to device location data for any request submitted by Zumigo on behalf of Microbilt as an additional precaution.”

Microbilt’s product documentation suggests the phone location service works on all mobile networks; however, the middleman was unable or unwilling to conduct a search for a Verizon device. Verizon did not respond to a request for comment. AT&T told Motherboard it has cut access to Microbilt as the company investigates. 
“We only permit the sharing of location when a customer gives permission for cases like fraud prevention or emergency roadside assistance, or when required by law,” the AT&T spokesperson said.

Sprint told Motherboard in a statement that “protecting our customers’ privacy and security is a top priority, and we are transparent about that in our Privacy Policy [...] Sprint does not have a direct relationship with MicroBilt. If we determine that any of our customers do and have violated the terms of our contract, we will take appropriate action based on those findings.” Sprint would not clarify the contours of its relationship with Microbilt.

These statements sound very familiar. When The New York Times and Senator Ron Wyden published details last year of Securus, the firm that was offering geolocation to low-level law enforcement without a warrant, the telcos said they were taking extra measures to make sure their customers’ data would not be abused again. Verizon announced it was going to limit data access to companies not using it for legitimate purposes. T-Mobile, Sprint, and AT&T followed suit shortly after with similar promises. After Wyden’s pressure, T-Mobile’s CEO John Legere tweeted in June last year, “I’ve personally evaluated this issue & have pledged that @tmobile will not sell customer location data to shady middlemen.”

Months after the telcos said they were going to combat this problem, in the face of an arguably even worse case of abuse and data trading, they are saying much the same thing. Last year, Motherboard reported on a company that previously offered phone geolocation to bounty hunters; here Microbilt is operating even after a wave of outrage from policy makers. In its statement to Motherboard on Monday, T-Mobile said it has nearly finished the process of terminating its agreements with location aggregators.

“It would be bad if this was the first time we learned about it. It’s not. 
Every major wireless carrier pledged to end this kind of data sharing after I exposed this practice last year. Now it appears these promises were little more than worthless spam in their customers’ inboxes,” Wyden told Motherboard in a statement. Wyden is proposing legislation to safeguard personal data.

Due to the ongoing government shutdown, the Federal Communications Commission (FCC) was unable to provide a statement.

“Wireless carriers’ continued sale of location data is a nightmare for national security and the personal safety of anyone with a phone,” Wyden added. “When stalkers, spies, and predators know when a woman is alone, or when a home is empty, or where a White House official stops after work, the possibilities for abuse are endless.”

Source
  2. from the 'intel-techniques,'-indeed dept

A little opsec goes a long way. The Massachusetts State Police -- one of the most secretive law enforcement agencies in the nation -- gave readers of its Twitter feed a free look at the First Amendment-protected activities it keeps tabs on… by uploading a screenshot showing its browser bookmarks. Alex Press of Jacobin Magazine was one of the Twitter users to catch the inadvertent exposure of MSP operations. If you can't read/see the tweet, it says:

the MA staties just unintentionally tweeted a photo that shows their bookmarks include a whole number of Boston’s left-wing orgs

The tweet was quickly scrubbed by the MSP, but not before other Twitter users had grabbed screenshots. Some of the activist groups bookmarked by the state police include Mass. Action Against Police Brutality, the Coalition to Organize and Mobilize Boston Against Trump, and Resistance Calendar. Here's a closer look at the bookmarks.

The MSP did not deny they keep (browser) tabs on protest organizations. Instead, it attempted to portray this screen of left-leaning bookmarks as some sort of non-partisan, non-cop-centric attempt to keep the community safe by being forewarned and forearmed. Ok. But mainly these groups? The ones against police brutality and the back-the-blue President? Seems a little one-sided for an "of any type and by any group" declaration.

The statement continues in the same defensive vein for a few more sentences, basically reiterating the false conceit that cops don't take sides when it comes to activist groups and the good people of Massachusetts are lucky to have such proactive public servants at their disposal. Whatever. If it wasn't a big deal, the MSP wouldn't have vanished the original tweet into the internet ether.

The screenshot came from a "fusion center" -- one of those DHS partnerships that results in far more rights violations and garbage "see something, say something" reports than "actionable intelligence". 
Fusion centers are supposed to be focused on terrorism, not on people who don't like police brutality or the current Commander in Chief. What this looks like is probably what it is: police keeping tabs on people they don't like or people who don't like them. That's not really what policing is about and it sure as hell doesn't keep the community any safer. Source
  3. By Bruce Schneier

The Five Eyes -- the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand) -- have issued a "Statement of Principles on Access to Evidence and Encryption" where they claim their needs for surveillance outweigh everyone's needs for security and privacy.

...the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution. Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority. The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I've repeatedly written about why this can't be done technically, and why trying results in insecurity. 
But there's a greater principle at stake: we need to decide, as nations and as society, to put defense first. We need a "defense dominant" strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are too critical to our society and -- now that they can affect the world in a direct physical manner -- affect our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There's no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world. This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It's actually not a hard choice.

An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals' safe houses would be more secure, but it's pretty clear that this downside would be worth the trade-off of protecting everyone's house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses. 
Yes, increasing our security will make it harder for us to eavesdrop on, and attack, our enemies in cyberspace. (It won't make it impossible for law enforcement to solve crimes; I'll get to that later in this chapter.) Regardless, it's worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We've got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions. Cory Doctorow has a good reaction.

Source
  4. In the decade after the 9/11 attacks, the New York City Police Department moved to put millions of New Yorkers under constant watch. Warning of terrorism threats, the department created a plan to carpet Manhattan’s downtown streets with thousands of cameras and had, by 2008, centralized its video surveillance operations to a single command center.

Two years later, the NYPD announced that the command center, known as the Lower Manhattan Security Coordination Center, had integrated cutting-edge video analytics software into select cameras across the city. The video analytics software captured stills of individuals caught on closed-circuit TV footage and automatically labeled the images with physical tags, such as clothing color, allowing police to quickly search through hours of video for images of individuals matching a description of interest. At the time, the software was also starting to generate alerts for unattended packages, cars speeding up a street in the wrong direction, or people entering restricted areas.

Over the years, the NYPD has shared only occasional, small updates on the program’s progress. In a 2011 interview with Scientific American, for example, Inspector Salvatore DiPace, then commanding officer of the Lower Manhattan Security Initiative, said the police department was testing whether the software could box out images of people’s faces as they passed by subway cameras and subsequently cull through the images for various unspecified “facial features.”

While facial recognition technology, which measures individual faces at over 16,000 points for fine-grained comparisons with other facial images, has attracted significant legal scrutiny and media attention, this object identification software has largely evaded attention. How exactly this technology came to be developed and which particular features the software was built to catalog have never been revealed publicly by the NYPD. 
Now, thanks to confidential corporate documents and interviews with many of the technologists involved in developing the software, The Intercept and the Investigative Fund have learned that IBM began developing this object identification technology using secret access to NYPD camera footage. With access to images of thousands of unknowing New Yorkers offered up by NYPD officials, as early as 2012, IBM was creating new search features that allow other police departments to search camera footage for images of people by hair color, facial hair, and skin tone.

IBM declined to comment on its use of NYPD footage to develop the software. However, in an email response to questions, the NYPD did tell The Intercept that “Video, from time to time, was provided to IBM to ensure that the product they were developing would work in the crowded urban NYC environment and help us protect the City. There is nothing in the NYPD’s agreement with IBM that prohibits sharing data with IBM for system development purposes. Further, all vendors who enter into contractual agreements with the NYPD have the absolute requirement to keep all data furnished by the NYPD confidential during the term of the agreement, after the completion of the agreement, and in the event that the agreement is terminated.”

In an email to The Intercept, the NYPD confirmed that select counterterrorism officials had access to a pre-released version of IBM’s program, which included skin tone search capabilities, as early as the summer of 2012. NYPD spokesperson Peter Donald said the search characteristics were only used for evaluation purposes and that officers were instructed not to include the skin tone search feature in their assessment. The department eventually decided not to integrate the analytics program into its larger surveillance architecture, and phased out the IBM program in 2016. After testing out these bodily search features with the NYPD, IBM released some of these capabilities in a 2013 product release. 
Later versions of IBM’s software retained and expanded these bodily search capabilities. (IBM did not respond to a question about the current availability of its video analytics programs.)

Asked about the secrecy of this collaboration, the NYPD said that “various elected leaders and stakeholders” were briefed on the department’s efforts “to keep this city safe,” adding that sharing camera access with IBM was necessary for the system to work. IBM did not respond to a question about why the company didn’t make this collaboration public.

Donald said IBM gave the department licenses to apply the system to 512 cameras, but said the analytics were tested on “fewer than fifty.” He added that IBM personnel had access to certain cameras for the sole purpose of configuring NYPD’s system, and that the department put safeguards in place to protect the data, including “non-disclosure agreements for each individual accessing the system; non-disclosure agreements for the companies the vendors worked for; and background checks.”

Civil liberties advocates contend that New Yorkers should have been made aware of the potential use of their physical data for a private company’s development of surveillance technology. The revelations come as a city council bill that would require NYPD transparency about surveillance acquisitions continues to languish, due, in part, to outspoken opposition from New York City Mayor Bill de Blasio and the NYPD.

Skin Tone Search Technology, Refined on New Yorkers

IBM’s initial breakthroughs in object recognition technology were envisioned for technologies like self-driving cars or image recognition on the internet, said Rick Kjeldsen, a former IBM researcher. But after 9/11, Kjeldsen and several of his colleagues realized their program was well suited for counterterror surveillance. 
“After 9/11, the funding sources and the customer interest really got driven toward security,” said Kjeldsen, who said he worked on the NYPD program from roughly 2009 through 2013. “Even though that hadn’t been our focus up to that point, that’s where demand was.”

IBM’s first major urban video surveillance project was with the Chicago Police Department and began around 2005, according to Kjeldsen. The department let IBM experiment with the technology in downtown Chicago until 2013, but the collaboration wasn’t seen as a real business partnership. “Chicago was always known as, it’s not a real — these guys aren’t a real customer. This is kind of a development, a collaboration with Chicago,” Kjeldsen said. “Whereas New York, these guys were a customer. And they had expectations accordingly.”

The NYPD acquired IBM’s video analytics software as one part of the Domain Awareness System, a shared project of the police department and Microsoft that centralized a vast web of surveillance sensors in lower and midtown Manhattan — including cameras, license plate readers, and radiation detectors — into a unified dashboard. IBM entered the picture as a subcontractor to Microsoft subsidiary Vexcel in 2007, as part of a project worth $60.7 million over six years, according to the internal IBM documents.

In New York, the terrorist threat “was an easy selling point,” recalled Jonathan Connell, an IBM researcher who worked on the initial NYPD video analytics installation. “You say, ‘Look what the terrorists did before, they could come back, so you give us some money and we’ll put a camera there.’”

A former NYPD technologist who helped design the Lower Manhattan Security Initiative, asking to speak on background citing fears of professional reprisal, confirmed IBM’s role as a “strategic vendor.” “In our review of video analytics vendors at that time, they were well ahead of everyone else in my personal estimation,” the technologist said. 
According to internal IBM planning documents, the NYPD began integrating IBM’s surveillance product in March 2010 for the Lower Manhattan Security Coordination Center, a counterterrorism command center launched by Police Commissioner Ray Kelly in 2008. In a “60 Minutes” tour of the command center in 2011, Jessica Tisch, then the NYPD’s director of policy and planning for counterterrorism, showed off the software on gleaming widescreen monitors, demonstrating how it could pull up images and video clips of people in red shirts. Tisch did not mention the partnership with IBM.

During Kelly’s tenure as police commissioner, the NYPD quietly worked with IBM as the company tested out its object recognition technology on a select number of NYPD and subway cameras, according to IBM documents. “We really needed to be able to test out the algorithm,” said Kjeldsen, who explained that the software would need to process massive quantities of diverse images in order to learn how to adjust to the differing lighting, shadows, and other environmental factors in its view. “We were almost using the video for both things at that time, taking it to the lab to resolve issues we were having or to experiment with new technology,” Kjeldsen said.

At the time, the department hoped that video analytics would improve analysts’ ability to identify suspicious objects and persons in real time in sensitive areas, according to Conor McCourt, a retired NYPD counterterrorism sergeant who said he used IBM’s program in its initial stages. “Say you have a suspicious bag left in downtown Manhattan, as a person working in the command center,” McCourt said. “It could be that the analytics saw the object sitting there for five minutes, and says, ‘Look, there’s an object sitting there.’” Operators could then rewind the video or look at other cameras nearby, he explained, to get a few possibilities as to who had left the object behind. 
Over the years, IBM employees said, they started to become more concerned as they worked with the NYPD to allow the program to identify demographic characteristics. By 2012, according to the internal IBM documents, researchers were testing out the video analytics software on the bodies and faces of New Yorkers, capturing and archiving their physical data as they walked in public or passed through subway turnstiles.

With these close-up images, IBM refined its ability to search for people on camera according to a variety of previously undisclosed features, such as age, gender, hair color (called “head color”), the presence of facial hair — and skin tone. The documents reference meetings between NYPD personnel and IBM researchers to review the development of body identification searches conducted at subway turnstile cameras.

“We were certainly worried about where the heck this was going,” recalled Kjeldsen. “There were a couple of us that were always talking about this, you know, ‘If this gets better, this could be an issue.’”

According to the NYPD, counterterrorism personnel accessed IBM’s bodily search feature capabilities only for evaluation purposes, and they were accessible only to a handful of counterterrorism personnel. “While tools that featured either racial or skin tone search capabilities were offered to the NYPD, they were explicitly declined by the NYPD,” Donald, the NYPD spokesperson, said. “Where such tools came with a test version of the product, the testers were instructed only to test other features (clothing, eyeglasses, etc.), but not to test or use the skin tone feature. That is not because there would have been anything illegal or even improper about testing or using these tools to search in the area of a crime for an image of a suspect that matched a description given by a victim or a witness. 
It was specifically to avoid even the suggestion or appearance of any kind of technological racial profiling.”

The NYPD ended its use of IBM’s video analytics program in 2016, Donald said. Donald acknowledged that, at some point in 2016 or early 2017, IBM approached the NYPD with an upgraded version of the video analytics program that could search for people by ethnicity. “The Department explicitly rejected that product,” he said, “based on the inclusion of that new search parameter.”

In 2017, IBM released Intelligent Video Analytics 2.0, a product with a body camera surveillance capability that allows users to detect people captured on camera by “ethnicity” tags, such as “Asian,” “Black,” and “White.”

Kjeldsen, the former IBM researcher who helped develop the company’s skin tone analytics with NYPD camera access, said the department’s claim that the NYPD simply tested and rejected the bodily search features was misleading. “We would have not explored it had the NYPD told us, ‘We don’t want to do that,’” he said. “No company is going to spend money where there’s not customer interest.”

Kjeldsen also added that the NYPD’s decision to allow IBM access to their cameras was crucial for the development of the skin tone search features, noting that during that period, New York City served as the company’s “primary testing area,” providing the company with considerable environmental diversity for software refinement. “The more different situations you can use to develop your software, the better it’s going be,” Kjeldsen said. “That obviously pertains to people, skin tones, whatever it is you might be able to classify individuals as, and it also goes for clothing.”

The NYPD’s cooperation with IBM has since served as a selling point for the product at California State University, Northridge. There, campus police chief Anne Glavin said the technology firm IXP helped sell her on IBM’s object identification product by citing the NYPD’s work with the company. 
“They talked about what it’s done for New York City. IBM was very much behind that, so this was obviously of great interest to us,” Glavin said.

Day-to-Day Policing, Civil Liberties Concerns

The NYPD-IBM video analytics program was initially envisioned as a counterterrorism tool for use in midtown and lower Manhattan, according to Kjeldsen. However, the program was integrated during its testing phase into dozens of cameras across the city. According to the former NYPD technologist, it could have been integrated into everyday criminal investigations. “All bureaus of the department could make use of it,” said the former technologist, potentially helping detectives investigate everything from sex crimes to fraud cases. Kjeldsen spoke of cameras being placed at building entrances and near parking entrances to monitor for suspicious loiterers and abandoned bags.

Donald, the NYPD spokesperson, said the program’s access was limited to a small number of counterterrorism officials, adding, “We are not aware of any case where video analytics was a factor in an arrest or prosecution.”

Campus police at California State University, Northridge, who adopted IBM’s software, said the bodily search features have been helpful in criminal investigations. Asked whether officers have deployed the software’s ability to filter through footage for suspects’ clothing color, hair color, and skin tone, Captain Scott VanScoy at California State University, Northridge, responded affirmatively, relaying a story about how university detectives were able to use such features to quickly filter through their cameras and find two suspects in a sexual assault case. “We were able to pick up where they were at different locations from earlier that evening and put a story together, so it saves us a ton of time,” VanScoy said. 
“By the time we did the interviews, we already knew the story and they didn’t know we had known.”

Glavin, the chief of the campus police, added that surveillance cameras using IBM’s software had been placed strategically across the campus to capture potential security threats, such as car robberies or student protests. “So we mapped out some CCTV in that area and a path of travel to our main administration building, which is sometimes where people will walk to make their concerns known and they like to stand outside that building,” Glavin said. “Not that we’re a big protest campus, we’re certainly not a Berkeley, but it made sense to start to build the exterior camera system there.”

Civil liberties advocates say they are alarmed by the NYPD’s secrecy in helping to develop a program with the potential capacity for mass racial profiling. The identification technology IBM built could be easily misused after a major terrorist attack, argued Rachel Levinson-Waldman, senior counsel in the Brennan Center’s Liberty and National Security Program. “Whether or not the perpetrator is Muslim, the presumption is often that he or she is,” she said. “It’s easy to imagine law enforcement jumping to a conclusion about the ethnic and religious identity of a suspect, hastily going to the database of stored videos and combing through it for anyone who meets that physical description, and then calling people in for questioning on that basis.”

IBM did not comment on questions about the potential use of its software for racial profiling. 
However, the company did send a comment to The Intercept pointing out that it was “one of the first companies anywhere to adopt a set of principles for trust and transparency for new technologies, including AI systems.” The statement went on to explain that IBM is “making publicly available to other companies a dataset of annotations for more than a million images to help solve one of the biggest issues in facial analysis — the lack of diverse data to train AI systems.”

Few laws clearly govern object recognition or the other forms of artificial intelligence incorporated into video surveillance, according to Clare Garvie, a law fellow at Georgetown Law’s Center on Privacy and Technology. “Any form of real-time location tracking may raise a Fourth Amendment inquiry,” Garvie said, citing a 2012 Supreme Court case, United States v. Jones, that involved police monitoring a car’s path without a warrant and resulted in five justices suggesting that individuals could have a reasonable expectation of privacy in their public movements. In addition, she said, any form of “identity-based surveillance” may compromise people’s right to anonymous public speech and association.

Garvie noted that while facial recognition technology has been heavily criticized for the risk of false matches, that risk is even higher for an analytics system “tracking a person by other characteristics, like the color of their clothing and their height,” that are not unique characteristics.

The former NYPD technologist acknowledged that video analytics systems can make mistakes, and noted a study where the software had trouble characterizing people of color: “It’s never 100 percent.” But the program’s identification of potential suspects was, he noted, only the first step in a chain of events that heavily relies on human expertise. “The technology operators hand the data off to the detective,” said the technologist. 
“You use all your databases to look for potential suspects and you give it to a witness to look at. … This is all about finding a way to shorten the time to catch the bad people.”

Object identification programs could also unfairly drag people into police suspicion just because of generic physical characteristics, according to Jerome Greco, a digital forensics staff attorney at the Legal Aid Society, New York’s largest public defenders organization. “I imagine a scenario where a vague description, like young black male in a hoodie, is fed into the system, and the software’s undisclosed algorithm identifies a person in a video walking a few blocks away from the scene of an incident,” Greco said. “The police find an excuse to stop him, and, after the stop, an officer says the individual matches a description from the earlier incident.” All of a sudden, Greco continued, “a man who was just walking in his own neighborhood” could be charged with a serious crime without him or his attorney ever knowing “that it all stemmed from a secret program which he cannot challenge.”

While the technology could be used for appropriate law enforcement work, Kjeldsen said that what bothered him most about his project was the secrecy he and his colleagues had to maintain. “We certainly couldn’t talk about what cameras we were using, what capabilities we were putting on cameras,” Kjeldsen said. “They wanted to control public perception and awareness of LMSI” — the Lower Manhattan Security Initiative — “so we always had to be cautious about even that part of it, that we’re involved, and who we were involved with, and what we were doing.” (IBM did not respond to a question about instructing its employees not to speak publicly about its work with the NYPD.)

The way the NYPD helped IBM develop this technology without the public’s consent sets a dangerous precedent, Kjeldsen argued. “Are there certain activities that are nobody’s business no matter what?” he asked. 
“Are there certain places on the boundaries of public spaces that have an expectation of privacy? And then, how do we build tools to enforce that? That’s where we need the conversation. That’s exactly why knowledge of this should become more widely available — so that we can figure that out.” This article was reported in partnership with the Investigative Fund at the Nation Institute. Source
  5. from the result-of-asking-'why-not?'-rather-than-'why?' dept

Reuters has a long, detailed examination of the Chinese surveillance state. China's intrusion into the lives of its citizens has never been minimal, but advances in technology have allowed the government to keep tabs on pretty much every aspect of citizens' lives.

Facial recognition has been deployed at scale and it's not limited to finding criminals. It's used to identify regular citizens as they go about their daily lives. This is paired with license plate readers and a wealth of information gathered from online activity to provide the government dozens of data points for every citizen that wanders into the path of its cameras. Other biometric information is gathered and analyzed to help the security and law enforcement agencies better pin down exactly who it is they're looking at.

But it goes further than that. The Chinese version of stop-and-frisk involves "patting down" cellphones for illegal content or evidence of illegal activities. China is home to several companies offering phone cracking services and forensic software. It's not only Cellebrite and Grayshift, although these two are best known for selling tech to US law enforcement.

Not that phone cracking is really a necessity in China. Most citizens hand over passwords when asked, considering the alternative isn't going to be a detainment while a warrant is sought. The option is far more likely to be something like a trip to a modern dungeon for a little conversational beating.

What's notable about this isn't the tech. This tech is everywhere. US law enforcement has access to much of this, minus the full-blown facial recognition and other biometric tracking. (That's on its way, though.) Plate readers, forensic devices, numerous law enforcement databases, social media tracking software… these are all in use already. Much of what China has deployed is being done in the name of security. 
That's the same justification for the massive surveillance apparatus erected after the 2001 attacks. The framework for a totalitarian state is already in place. The only thing separating us from China is our Constitutional rights. Whenever you hear a US government official lamenting perps walking on technicalities or encryption making it tough to lock criminals up, keep in mind the alternative is China: a full-blown police state stocked to the teeth with surveillance tech. Source
  6. Researchers believe a new encryption technique may be key to maintaining a balance between user privacy and government demands.

For governments worldwide, encryption is a thorn in the side in the quest for surveillance, cracking suspected criminal phones, and monitoring communication. Officials are pressuring technology firms and app developers that provide end-to-end encryption services to give police forces a way to break that encryption. However, the moment you provide a backdoor into such services, you create a weak point that can be used not only by law enforcement and governments -- assuming that tunneling into a handset and monitoring it is even within legal bounds -- but also by threat actors, undermining the security of encryption as a whole.

As the mass surveillance and data collection activities of the US National Security Agency hit the headlines, faith in governments and their ability to restrain such spying to genuine cases of criminality began to weaken. Now, the use of encryption and secure communication channels is ever-more popular, technology firms are resisting efforts to implant deliberate weaknesses in encryption protocols, and neither side wants to budge. What can be done? Something has got to give.

However, researchers from Boston University believe they may have come up with a solution. On Monday, the team said they have developed a new encryption technique which will give authorities some access to communication, but without providing unlimited access in practice. In other words, a middle ground -- a way to break encryption to placate law enforcement, but not to the extent that mass surveillance of the general public is possible.

Mayank Varia, Research Associate Professor at Boston University and cryptography expert, has developed the new technique, known as cryptographic "crumpling." 
In a paper documenting the research, lead author Varia says the new cryptography methods could be used for "exceptional access" to encrypted data for government purposes while keeping privacy for the public at large at a reasonable level. "Our approach places most of the responsibility for achieving exceptional access on the government, rather than on the users or developers of cryptographic tools," the paper notes. "As a result, our constructions are very simple and lightweight, and they can be easily retrofitted onto existing applications and protocols."

The crumpling technique uses two approaches: the first is a Diffie-Hellman key exchange over modular arithmetic groups, which leads to an "extremely expensive" puzzle that must be solved to break the protocol; the second is a "hash-based proof of work to impose a linear cost on the adversary for each message" to be recovered.

Crumpling requires strong, modern cryptography as a precondition, as it relies on per-message encryption keys and detailed key management. The system requires this infrastructure so that a small number of messages can be targeted without full-scale exposure. The team says that this condition will also only permit "passive" decryption attempts, rather than man-in-the-middle (MitM) attacks.

By introducing cryptographic puzzles into the generation of per-message cryptographic keys, the keys can still be recovered, but only at vast computational cost. In addition, each puzzle is chosen independently for each key, which means "the government must expend effort to solve each one."

"Like a crumple zone in automotive engineering, in an emergency situation the construction should break a little bit in order to protect the integrity of the system as a whole and the safety of its human users," the paper notes. "We design a portion of our puzzles to match Bitcoin's proof of work computation so that we can predict their real-world marginal cost with reasonable confidence." 
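The hash-based half of the scheme can be illustrated with a toy sketch in Python. This is an illustration of the general idea only, not the paper's actual construction: the function names, the 32-byte per-message key, and the 16-bit puzzle size are assumptions chosen so the example runs in milliseconds, whereas the paper contemplates puzzles costly enough to price each recovery in the millions of dollars.

```python
import hashlib
import os

PUZZLE_BITS = 16  # toy difficulty; real puzzles would be vastly more expensive

def crumple(key: bytes, bits: int = PUZZLE_BITS):
    """Withhold the low `bits` bits of a per-message key and publish the rest,
    plus a hash commitment that lets a solver verify a candidate key."""
    k = int.from_bytes(key, "big")
    partial = k >> bits                          # the revealed portion
    commitment = hashlib.sha256(key).digest()    # lets guesses be checked
    return partial, commitment

def solve(partial: int, commitment: bytes, bits: int = PUZZLE_BITS) -> bytes:
    """Brute-force the withheld bits: up to 2**bits hash evaluations,
    a linear cost paid separately for every message recovered."""
    for guess in range(1 << bits):
        candidate = ((partial << bits) | guess).to_bytes(32, "big")
        if hashlib.sha256(candidate).digest() == commitment:
            return candidate
    raise ValueError("no matching key found")

key = os.urandom(32)                  # per-message encryption key
partial, commitment = crumple(key)
recovered = solve(partial, commitment)
assert recovered == key
```

Scaling the withheld portion toward the 70-bit figure discussed below turns the same loop into a computation costing millions of dollars per key, which is what confines recovery to targeted, well-funded efforts rather than dragnet decryption.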
To prevent unauthorized attempts to break encryption, an "abrasion puzzle" serves as a gatekeeper, which is more expensive to solve than the individual key puzzles. While this would not necessarily deter state-sponsored threat actors, it may at least deter individual cyberattackers for whom the cost would not be worth the result.

The new technique would allow governments to recover the plaintext of targeted messages, but at prohibitive expense. A key length of 70 bits, for example -- with today's hardware -- would cost millions to break, forcing government agencies to choose their targets carefully; the expense would potentially prevent misuse.

The research team estimates that the government could recover fewer than 70 keys per year with a budget of close to $70 million upfront -- one million dollars per message, and the full amount set out in the US' expanded federal budget to break encryption. There could also be additional costs of $1,000 to $1 million per message, and these kinds of figures are difficult to conceal, especially as one message from a suspected criminal in a conversation, without contextual data, is unlikely ever to be enough to secure a conviction.

The research team says that crumpling can be adapted for use in common encryption services including PGP and Signal, as well as full-disk and file-based encryption. "We view this work as a catalyst that can inspire both the research community and the public at large to explore this space further," the researchers say. "Whether such a system will ever be (or should ever be) adopted depends less on technology and more on questions for society to answer collectively: whether to entrust the government with the power of targeted access and whether to accept the limitations on law enforcement possible with only targeted access."

The research was funded by the National Science Foundation. Source
  7. part 1 (YET ANOTHER) WARNING .... Your online activities are now being tracked and recorded by various government and corporate entities around the world. This information can be used against you at any time and there is no real way to “opt out”.

In the past decade, we have seen the systematic advancement of the surveillance apparatus throughout the world. The United States, United Kingdom, Australia, and Canada have all passed laws allowing, and in some cases forcing, telecom companies to bulk-collect your data:

  • United States – In March 2017 the US Congress passed legislation that allows internet service providers to collect, store, and sell your private browsing history, app usage data, location information and more – without your consent. This essentially allows Comcast, Verizon, AT&T and other providers to monetize and sell their customers to the highest bidders (usually for targeted advertising).

  • United Kingdom – In November 2016 the UK Parliament passed the infamous Snoopers Charter (Investigatory Powers Act) which forces internet providers and phone companies to bulk-collect customer data. This includes private browsing history, social media posts, phone calls, text messages, and more. This information is stored for 12 months in a giant database that is accessible to 48 different government agencies. The erosion of free speech is also rapidly underway as various laws allow UK authorities to lock up anyone they deem to be “offensive” (1984 is already here).

  • Australia – In April 2017 the Australian government passed a massive data retention law that forces telecoms to collect and store text messages, phone calls, location information, and internet connection data for a full two years, with the data being accessible to authorities without a warrant.

Canada, Europe, and other parts of the world have similar laws and policies already in place. 
What you are witnessing is the rapid expansion of the global surveillance state, whereby corporate and government entities work together to monitor and record everything you do.

What the hell is going on here?

Perhaps you are wondering why all this is happening. There is a simple answer to that question:

Control.

Just like we have seen throughout history, government surveillance is simply a tool used for control. This could be for maintaining control of power, controlling a population, or controlling the flow of information in a society. You will notice that the violation of your right to privacy will always be justified by various excuses – from “terrorism” to tax evasion – but never forget, it’s really about control.

Along the same lines, corporate surveillance is also about control. Collecting your data helps private entities control your buying decisions, habits, and desires. The tools for doing this are all around you: apps on your devices, social networks, tracking ads, and many free products which simply bulk-collect your data (when something is free, you are the product). This is why the biggest collectors of private data – Google and Facebook – are also the two businesses that completely dominate the online advertising industry. So to sum this up, advertising today is all about the buying and selling of individuals.

But it gets even worse…

Now we have the full-scale cooperation between government and corporate entities to monitor your every move. In other words, governments are now enlisting private corporations to carry out bulk data collection on entire populations. Your internet service provider is your adversary working on behalf of the surveillance state. This basic trend is happening in much of the world, but it has been well documented in the United States with the PRISM Program.

So why should you care? Everything that’s being collected could be used against you today, or at any time in the future, in ways you may not be able to imagine. 
In many parts of the world, particularly in the UK, thought crime laws are already in place. If you do something that is deemed to be “offensive”, you could end up rotting away in a jail cell for years. Again, we have seen this tactic used throughout history for locking up dissidents – and it is alive and well in the Western world today. From a commercial standpoint, corporate surveillance is already being used to steal your data and hit you with targeted ads, thereby monetizing your private life.

Reality check

Many talking heads in the media will attempt to confuse you by pretending this is a problem with a certain politician or perhaps a political party. But that’s a bunch of garbage to distract you from the bigger truth. For decades, politicians from all sides (left and right) have worked hard to advance the surveillance agenda around the world. Again, it’s all about control, regardless of which puppet is in office.

So contrary to what various groups are saying, you are not going to solve this problem by writing a letter to another politician or signing some online petition. Forget about it. Instead, you can take concrete steps right now to secure your data and protect your privacy. Restore Privacy is all about giving you the tools and information to do that. If you feel overwhelmed by all this, just relax. The privacy tools you need are easy to use no matter what level of experience you have.

Arguably the most important privacy tool is a good VPN (virtual private network). A VPN will encrypt and anonymize your online activity by creating a secured tunnel between your computer and a VPN server. This makes your data and online activities unreadable to government surveillance, your internet provider, hackers, and other third-party snoopers. A VPN will also allow you to spoof your location, hide your real IP address, and access blocked content from anywhere in the world. Check out the best VPN guide to get started. Stay safe! SOURCE
  8. MOSCOW - Edward Snowden, who exposed extensive U.S. surveillance programs in 2013, warned this week that Japan may be moving closer to sweeping surveillance of ordinary citizens as the government eyes a legal change to enhance police powers in the name of counterterrorism.

"This is the beginning of a new wave of mass surveillance in Japan," the 33-year-old American said in an exclusive interview with Kyodo News while in exile in Russia, referring to a so-called anti-conspiracy bill that has stirred controversy in and outside Japan as having the potential to undermine civil liberties.

The consequences could be even graver when combined with the use of a wide-reaching online data collection tool called XKEYSCORE, the former contractor for the U.S. National Security Agency said. He also gave credence to the authenticity of new NSA papers exposed through The Intercept, a U.S. online media outlet, earlier this year that showed the agency's surveillance tool has already been shared with Japan.

The remarks by the intelligence expert are the latest warning over the Japanese government's push to pass the controversial bill through parliament, which criminalizes the planning and preparatory actions of 277 serious crimes.

In an open letter addressed to Prime Minister Shinzo Abe in mid-May, a U.N. special rapporteur on the right to privacy stated that the bill could lead to undue restrictions of privacy and freedom of expression due to its potentially broad application -- a claim the Japanese government has strongly protested against. Snowden said he agrees with the U.N.-appointed expert Joseph Cannataci, arguing the bill is "not well explained" and raises concerns that the government may have intentions other than its stated goal of cracking down on terrorism and organized crimes ahead of the 2020 Tokyo Olympics. 
The anti-conspiracy law proposed by the government "focuses on terrorism and everything else that's not related to terrorism -- things like taking plants from the forestry reserve," he said. "And the only real understandable answer (to the government's desire to pass the bill)...is that this is a bill that authorizes the use of surveillance in new ways because now everyone can be a criminal."

Based on his experience of using XKEYSCORE himself, Snowden said authorities could become able to intercept everyone's communications, including people organizing political movements or protests, and put them "in a bucket." The records would be simply "pulled out of the bucket" whenever necessary, and the public would not be able to know whether such activities are done legally or secretly by the government because there are no sufficient legal safeguards in the bill, Snowden said.

Snowden finds the current situation in Japan reminiscent of what he went through in the United States following the terror attacks on Sept. 11, 2001. In passing the Patriot Act, which strengthened the U.S. government's investigative powers in the wake of the attacks, the government said similar things to what the Japanese government is saying now, such as "these powers are not going to be targeted against ordinary citizens" and "we're only interested in finding al-Qaida and terrorists," according to Snowden.

But within a few short years of the enactment of the Patriot Act, the U.S. government was using the law secretly to "collect the phone records of everyone in the United States, and everyone around the world who they could access" through the largest phone companies in the United States, Snowden said, referring to the revelations made in 2013 through top-secret documents he leaked.

Even though it sacrifices civil liberties, mass surveillance is not effective, Snowden said. The U.S. 
government's privacy watchdog concluded in its report in 2014 that the NSA's massive telephone records program showed "minimal value" in safeguarding the nation from terrorism and that it must be ended.

On Japan's anti-conspiracy bill, Snowden said it should include strong guarantees of human rights and privacy and ensure that those guarantees are "not enforced through the words of politicians but through the actions of courts." "This means in advance of surveillance, in all cases the government should seek an individualized warrant, and individualized authorization that this surveillance is lawful and appropriate in relationship to the threat that's presented by the police," he said.

He also said allowing a government to get into the habit of collecting the communications of everyone through powerful surveillance tools could dangerously change the power relationship between the public and government to something closer to "subject and ruler" instead of partners, which is how it should be in a democracy.

Arguably, people in Japan may not make much of what Snowden sees as the rise of new untargeted and indiscriminate mass surveillance, thinking that they have nothing to hide or fear. But he insists that privacy is not about something to "hide" but about "protecting" an open and free society where people can be different and can have their own ideas. Freedom of speech would not mean much if people do not have the space to figure out what they want to say, or share their views with others they trust, to develop them before introducing them into the context of the world, he said. "When you say 'I don't care about privacy, because I've nothing to hide,' that's no different than saying you don't care about freedom of speech, because you've nothing to say," he added.

Snowden, who was dressed in a black suit, said toward the end of his more than 100-minute interview at a hotel in Moscow that living in exile is not "a lifestyle that anyone chooses voluntarily." 
He hopes to return home while continuing active exchanges online with people in various countries. "The beautiful thing about today is that I can be in every corner of the world every night. I speak at U.S. universities every month. It's important to understand that I don't really live in Moscow. I live on the internet," he said. Snowden showed no regrets over taking the risk of becoming a whistleblower and being painted by his home country as a "criminal" or "traitor," facing espionage charges at home for his historic document leak. "It's scary as hell, but it's worth it. Because if we don't do it, if we see the truth of crimes or corruption in government, and we don't say something about it, we're not just making the world worse for our children, we're making the world worse for us, and we're making ourselves worse," he said. Article source
  9. Facebook Bans Devs From Creating Surveillance Tools With User Data Without a hint of irony, Facebook has told developers that they may not use data from Instagram and Facebook in surveillance tools. The social network says that the practice has long been a contravention of its policies, but it is now tidying up and clarifying the wording of its developer policies. The American Civil Liberties Union, Color of Change and the Center for Media Justice put pressure on Facebook after it transpired that data from users' feeds was being gathered and sold on to law enforcement agencies. The re-written developer policy now explicitly states that developers are not allowed to "use data obtained from us to provide tools that are used for surveillance." It remains to be seen just how much of a difference this will make to the gathering and use of data, and there is nothing to say that Facebook's own developers will not continue to engage in the same practices. Facebook's deputy chief privacy officer, Rob Sherman, commented on the change. Transparency reports published by Facebook show that the company has complied with government requests for data. The secrecy such requests and dealings are shrouded in means that there is no way of knowing whether Facebook is engaged in precisely the sort of activity it is banning others from performing. Source
  10. Legislation introduced today by New York City council members Dan Garodnick and Vanessa Gibson would finally compel the NYPD — one of the most technology-laden police forces in the country — to make public its rulebook for deploying its controversial surveillance arsenal. The bill, named the Public Oversight of Surveillance Technology (POST) act, would require the NYPD to detail how, when, and with what authority it uses technologies like Stingray devices, which can monitor and interfere with the cellular communications of an entire crowd at once. Specifically, the department would have to publicize the “rules, processes and guidelines issued by the department regulating access to or use of such surveillance technology as well as any prohibitions or restrictions on use, including whether the department obtains a court authorization for each use of a surveillance technology, and what specific type of court authorization is sought.” The NYPD would also have to say how it protects the gathered surveillance data itself (for example, X-ray imagery, or individuals captured in a facial recognition scan), and whether or not this data is shared with other governmental organizations. A period of public comment would follow these disclosures. In a press release, the New York Civil Liberties Union, which has been instrumental in fighting to reveal the mere fact that the NYPD possesses devices like the Stingray, hailed the bill: Public awareness of how the NYPD conducts intrusive surveillance, especially the impacts on vulnerable New Yorkers, is critical to democracy. For too long the NYPD has been using technology that spies on cellphones, sees through buildings and follows your car under a shroud of secrecy, and the bill is a significant step out of the dark ages. 
It’s unclear whether the bill would apply to products that have both powerful surveillance and non-surveillance functionality, a la Palantir, but the legislation’s definition of “surveillance technology” is sufficiently broad: The term “surveillance technology” means equipment, software, or system capable of, or used or designed for, collecting, retaining, processing, or sharing audio, video, location, thermal, biometric, or similar information, that is operated by or at the direction of the department. Though the bill might do little to curb the use of such technologies, it would at least give those on the sidewalk a better idea of how and when they’re being watched, if not why. The NYPD did not immediately return a request for comment. By Sam Biddle https://theintercept.com/2017/03/01/new-bill-would-force-nypd-to-disclose-its-surveillance-tech-playbook/
  11. The Tor Project, responsible for software that enables anonymous Internet use and communication, is launching a new mobile app to detect internet censorship and surveillance around the world. The app, called “OONIProbe,” alerts users to the blocking of websites, censorship and surveillance systems and the speed of networks. Slowing internet speeds down to a crawl is one way governments censor internet content they deem illegal. The app also spells out how users might be able to circumvent the blockage. Operating under the Tor Project umbrella, the Open Observatory of Network Interference (OONI) is a global observation network that has been watching online censorship since 2012. Data from OONI has detected censorship in countries including Iran, Saudi Arabia, Turkey, South Korea, Greece, China, Russia, India, Indonesia and Sudan. The project watches over 100 countries and serves as a resource to journalists, lawyers, activists, researchers and people on the ground in countries where censorship is prevalent. In 2016, internet censorship was used in countries like the African nation of Gabon during highly contested elections and subsequent protests. To stop citizens from sharing videos of election irregularities, the country’s internet was down for four days. Earlier in 2016, Uganda engaged in similar widespread censorship. Both countries at times denied their actions, making tools like OONI ever more valuable. “What Signal did for end-to-end encryption, OONI did for unmasking censorship,” Moses Karanja, a Kenyan researcher on the politics of information controls at Strathmore University’s CIPIT, said in a statement. “Most Africans rely on mobile phones as their primary means of accessing the internet and OONI’s mobile app allows for decentralized efforts in unmasking the nature of censorship and internet performance. The possibilities are exciting for researchers, business and the human rights community around the world. 
We look forward to interesting days ahead.” Internet freedom declined for the sixth year in a row in 2016, according to a report from Freedom House, making censorship and surveillance transparency a high priority for activists looking to turn back that momentum. Twenty-four governments blocked access to social media sites and communication services in 2016, compared with 15 the previous year, according to Freedom House. Internet freedom fell most precipitously in Uganda, Bangladesh, Cambodia, Ecuador and Libya. Several countries, including Egypt and the United Arab Emirates, reportedly tried to block Signal, the increasingly popular encrypted messenger developed in the United States. That’s part of a global trend that’s seen governments go after apps like WhatsApp and Telegram in an effort to stymie secure communications. “Never before has it been so easy to uncover evidence of internet censorship,” Arturo Filastò, OONI’s project lead and core developer, said in a statement. “By simply owning a smartphone (and running ooniprobe), you can now play an active role in increasing transparency around internet controls.” The app will be available on the Google Play and iOS app stores this week, according to Tor Project spokeswoman Kate Krauss. Article source
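Censorship tests like OONI's ultimately boil down to comparing a measurement taken inside a network against a control measurement taken from an uncensored vantage point. A minimal sketch of that idea in Python (the helper function and sample addresses are illustrative, not OONI's actual implementation):

```python
# Toy illustration of the comparison at the heart of a censorship probe:
# resolve a domain from the network under test and compare the answers
# against a control measurement. Everything below is hypothetical sample
# data, not OONI's code.

def dns_looks_tampered(measured_ips, control_ips):
    """Flag a DNS measurement that shares no addresses with the control.

    Real probes such as ooniprobe also compare HTTP bodies, TLS
    certificates, and known blockpage fingerprints before concluding
    anything, since disjoint DNS answers alone can be a false positive
    (CDNs often return region-specific addresses).
    """
    return not set(measured_ips) & set(control_ips)

control = ["93.184.216.34"]  # answer seen from the uncensored control network

print(dns_looks_tampered(["93.184.216.34"], control))  # consistent answer
print(dns_looks_tampered(["10.10.34.36"], control))    # possible blockpage redirect
```

The design point is that no single vantage point can detect censorship on its own; it is the disagreement between the probe and the control that is informative, which is why a decentralized network of phones running the app is so valuable.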
  12. Four in Five Britons Fearful Trump Will Abuse their Data More than three-quarters of Britons believe incoming US President Donald Trump will use his surveillance powers for personal gain, and a similar number want reassurances from the government that data collected by GCHQ will be safeguarded against such misuse. These are the headline findings from a new Privacy International poll of over 1600 Brits on the day Trump is inaugurated as the 45th President of the most powerful nation on earth. With that role comes sweeping surveillance powers – the extent of which was only revealed after NSA whistleblower Edward Snowden went public in 2013. There are many now concerned that Trump, an eccentric reality TV star and gregarious property mogul, could abuse such powers for personal gain. That’s what 78% of UK adults polled by Privacy International believe, and 54% said they had no trust that Trump would use surveillance for legitimate purposes. Perhaps more important for those living in the United Kingdom is the extent of the information sharing partnership between the US and the UK. Some 73% of respondents said they wanted the government to explain what safeguards exist to ensure any data swept up by their domestic secret services doesn’t end up being abused by the new US administration. That fear has become even more marked since the passage of the Investigatory Powers Act or 'Snoopers’ Charter', which granted the British authorities unprecedented mass surveillance and hacking powers, as well as forcing ISPs to retain all web records for up to 12 months. Privacy International claimed that although it has privately been presented with documents detailing the info sharing partnership between the two nations, Downing Street has so far refused to make the information public. 
The rights group and nine others are currently appealing to the European Court of Human Rights to overturn a decision by the Investigatory Powers Tribunal (IPT) not to release information about the rules governing the US-UK agreement. “UK and the US spies have enjoyed a cosy secret relationship for a long time, sharing sensitive intelligence data with each other, without parliament knowing anything about it, and without any public consent. Slowly, we’re learning more about the staggering scale of this cooperation and a dangerous lack of sufficient oversight,” argued Privacy International research officer, Edin Omanovic. “Today, a new President will take charge of US intelligence agencies – a President whose appetite for surveillance powers and how they’re used put him at odds with British values, security, and its people… Given that our intelligence agencies are giving him unfettered access to massive troves of personal data, including potentially about British people, it is essential that the details behind all this are taken out of the shadows.” Source
  13. Mozilla: The Internet Is Unhealthy And Urgently Needs Your Help Mozilla argues that the internet's decentralized design is under threat by a few key players, including Google, Facebook, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce, and search. Can the internet as we know it survive the many efforts to dominate and control it, asks Firefox maker Mozilla. Much of the internet is in a perilous state, and we, its citizens, all need to help save it, says Mark Surman, executive director of Firefox maker the Mozilla Foundation. We may be in awe of the web's rise over the past 30 years, but Surman highlights numerous signs that the internet is dangerously unhealthy, from last year's Mirai botnet attacks, to market concentration, government surveillance and censorship, data breaches, and policies that smother innovation. "I wonder whether this precious public resource can remain safe, secure and dependable. Can it survive?" Surman asks. "These questions are even more critical now that we move into an age where the internet starts to wrap around us, quite literally," he adds, pointing to the Internet of Things, autonomous systems, and artificial intelligence. In this world, we don't use a computer, "we live inside it", he adds. "How [the internet] works -- and whether it's healthy -- has a direct impact on our happiness, our privacy, our pocketbooks, our economies and democracies." Surman's call to action coincides with nonprofit Mozilla's first 'prototype' of the Internet Health Report, which looks at healthy and unhealthy trends that are shaping the internet. Its five key areas include open innovation, digital inclusion, decentralization, privacy and security, and web literacy. Mozilla will launch the first report after October, once it has incorporated feedback on the prototype. That there are over 1.1 billion websites today, running on mostly open-source software, is a positive sign for open innovation. 
However, Mozilla says the internet is "constantly dodging bullets" from bad policy, such as outdated copyright laws, secretly negotiated trade agreements, and restrictive digital-rights management. Similarly, while mobile has helped put more than three billion people online today, there were 56 internet shutdowns last year, up from 15 shutdowns in 2015, it notes. Mozilla fears the internet's decentralized design, while flourishing and protected by laws, is under threat by a few key players, including Facebook, Google, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce and search. "While these companies provide hugely valuable services to billions of people, they are also consolidating control over human communication and wealth at a level never before seen in history," it says. Mozilla approves of the wider adoption of encryption today on the web and in communications but highlights the emergence of new surveillance laws, such as the UK's so-called Snooper's Charter. It also cites as a concern the Mirai malware behind last year's DDoS attacks, which abused unsecured webcams and other IoT devices, and is calling for safety standards, rules and accountability measures. The report also draws attention to the policy focus on web literacy in the context of learning how to code or use a computer, which ignores other literacy skills, such as the ability to spot fake news, and separate ads from search results. Source Alternate Source - 1: Mozilla’s First Internet Health Report Tackles Security, Privacy Alternate Source - 2: Mozilla Wants Infosec Activism To Be The Next Green Movement
  14. Chinese Citizens Can Be Tracked In Real Time A group of researchers have revealed that the Chinese government is collecting data on its citizens to an extent where their movements can even be tracked in real time using their mobile devices. This discovery was made by the Citizen Lab at the University of Toronto's Munk School of Global Affairs, which specializes in studying the ways in which information technology affects both personal and human rights worldwide. It has been known for some time that the Chinese government employs a number of invasive tactics to be fully aware of the lives of its citizens. Citizen Lab has now discovered, however, that the government has begun to monitor its populace using apps and services designed and run by the private sector. The discovery was made when the researchers began exploring Tencent's popular chat app WeChat, which is installed on the devices of almost every Chinese citizen and has 800 million active users each month. Citizen Lab found that not only does the app help the government censor chats between users but that it is also being used as a state surveillance tool. WeChat's restrictions even remain active for Chinese students studying abroad. Ronald Deibert, a researcher at Citizen Lab, offered further insight on the team's discovery, saying: "What the government has managed to do, I think quite successfully, is download the controls to the private sector, to make it incumbent upon them to police their own networks". To make matters worse, the data collected by WeChat and other Chinese apps and services is currently being sold online. The Guangzhou Southern Metropolis Daily led an investigation that found that large amounts of personal data on nearly anyone could be purchased online for a little over a hundred US dollars. The newspaper also found another service that offered the ability to track users in real time via their mobile devices. 
Users traveling to China anytime soon should be extra cautious about their online activities and should think twice before installing WeChat during their stay. Published under license from ITProPortal.com, a Future plc Publication. All rights reserved. Source
  15. After Spying Webcams, Welcome the Spy Toys “My Friend Cayla and I-Que” Privacy advocates claim both toys pose a security and privacy threat to children and parents. Internet-connected toys are currently all the rage among parents and kids alike, but few are aware of the security dangers that come with using smart toys. The Center for Digital Democracy has acknowledged that smart toys pose grave privacy, security and similar risks to children. There are certain privacy and security flaws in a pair of smart toys that have been designed to engage with kids. Last year, we reported how the “Hello Barbie” toy spies on kids by talking to them, recording their conversations and sending the recordings to the company’s servers, where they are analyzed and then stored on another cloud server. Now, the dolls My Friend Cayla and I-Que Intelligent Robot, which are marketed to both boys and girls, are the objects of security concern. In fact, child advocacy, consumer and privacy groups have filed a complaint [PDF] with the Federal Trade Commission against these dolls. It is suspected that the dolls violate the Children’s Online Privacy Protection Act (COPPA) as well as FTC rules because they collect and use personal data gathered while communicating with kids. The complaint terms this feature of the dolls a deceptive practice by the makers. The FTC has been asked in the complaint to investigate the matter and take action against Genesis Toys, the manufacturer of the dolls, as well as Nuance Communications, the provider of the third-party voice recognition software used by My Friend Cayla and I-Que. The complaints have been filed by the Campaign for a Commercial-Free Childhood (CCFC), Consumers Union, the Center for Digital Democracy (CDD) and the Electronic Privacy Information Center (EPIC). According to the complainants, these dolls already look creepy, and the fact that they gather information makes them even creepier. 
Both these toys use voice recognition technology, coupled with internet connectivity and Bluetooth, to engage with kids by answering questions and making conversation. However, according to the CDD, this is done in a very insecure and invasive manner. Genesis Toys claims on its website that while “most of Cayla’s conversational features can be accessed offline,” searching for information requires internet connectivity. The promotional video for the Cayla doll also focuses on the toy’s ability to communicate with the kid, stating: “ask Cayla almost anything.” To work, these dolls require mobile apps, though some questions may be asked directly. The toys keep a Bluetooth connection enabled constantly so that the dolls can react to actions in the app and identify objects when the kid taps on the screen. Some of the questions asked are recorded and sent to Nuance’s servers for parsing, but it is unclear how much of the information is kept private. The toys’ manufacturer maintains that complete anonymity is observed. The toys were released in late 2015 but are still selling like hot cakes. As per the researchers’ statement in the FTC complaint, “by connecting one phone to the doll through the insecure Bluetooth connection and calling that phone with a second phone, they were able to both converse with and covertly listen to conversations collected through the My Friend Cayla and i-Que toys.” This means anyone can use their smartphone to communicate with the child, using the doll as the gateway. Watch this ad to see how Cayla works. Watch this video to understand how anyone can spy on your child with Cayla and i-Que. If you own a smart toy, keep an eye on the conversations it has with your kid. Courtesy: CDD Source
  16. Snowden Leaks Reveal NSA Snooped On In-Flight Mobile Calls NSA, GCHQ intercepted signals as they were sent from satellites to ground stations. GCHQ and the NSA have spied on air passengers using in-flight GSM mobile services for years, newly-published documents originally obtained by Edward Snowden reveal. Technology from UK company AeroMobile and SitaOnAir is used by dozens of airlines to provide in-flight connectivity, including British Airways, Virgin Atlantic, Lufthansa, and many Arab and Asian companies. Passengers connect to on-board GSM servers, which then communicate with satellites operated by British firm Inmarsat. "The use of GSM in-flight analysis can help identify the travel of a target—not to mention the other mobile devices (and potentially individuals) onboard the same plane with them," says a 2010 NSA newsletter. A presentation, made available by the Intercept, contains details of GCHQ's so-called "Thieving Magpie" programme. GCHQ and the NSA intercepted the signals as they were sent from the satellites to the ground stations that hooked into the terrestrial GSM network. Initially, coverage was restricted to flights in Europe, the Middle East, and Africa, but the surveillance programme was expected to go global at the time the presentation was made. Ars has asked these three companies to comment on the extent to which they were aware of the spying, and whether they are able to improve security for their users to mitigate its effects, but had yet to receive replies from Inmarsat or AeroMobile at time of publication. A SitaOnAir spokesperson responded to Ars by e-mail. The Thieving Magpie presentation explains that it is not necessary for calls to be made, or data to be sent, for surveillance to take place. 
If the phone is switched on, and registers with the in-flight GSM service, it can be tracked provided the plane is flying high enough that ground stations are out of reach. The data, we're told, was collected in "near real time," thus enabling "surveillance or arrest teams to be put in place in advance" to meet the plane when it lands. Using this system, aircraft can be tracked every two minutes while in flight. If data is sent via the GSM network, GCHQ's presentation says that e-mail addresses, Facebook IDs, and Skype addresses can all be gathered. Online services observed by GCHQ using its airborne surveillance include Twitter, Google Maps, VoIP, and BitTorrent. Meanwhile, Le Monde reported that "GCHQ could even, remotely, interfere with the working of the phone; as a result the user was forced to redial using his or her access codes." No source is given for that information, which presumably is found in other Snowden documents, not yet published. As the French newspaper also points out, judging by the information provided by Snowden, the NSA seemed to have something of a fixation with Air France flights. Apparently that was because "the CIA considered that Air France and Air Mexico flights were potential targets for terrorists." GCHQ shared that focus: the Thieving Magpie presentation uses aircraft bearing Air France livery to illustrate how in-flight GSM services work. Ars asked the UK's spies to comment on the latest revelations, and received the usual boilerplate response from a GCHQ spokesperson: It is longstanding policy that we do not comment on intelligence matters. So that's OK, then. Source
  17. Uber Knows Where You Go, Even After Ride Is Over Uber's iOS popup asking for new surveillance permissions. “We do this to improve pickups, drop-offs, customer service, and to enhance safety.” As promised, Uber is now tracking you even when your ride is over. The ride-hailing service said the surveillance—even when riders close the app—will improve its service. The company now tracks customers from when they request a ride until five minutes after the ride has ended. According to Uber, the move will help drivers locate riders without having to call them, and it will also allow Uber to analyze whether people are being dropped off and picked up properly—like on the correct side of the street. "We do this to improve pickups, drop-offs, customer service, and to enhance safety," Uber said in a statement. Uber announced last year that it would make the change to allow surveillance in the app's background, prompting a Federal Trade Commission complaint. (PDF) The Electronic Privacy Information Center said at the time that "this collection of user's information far exceeds what customers expect from the transportation service. Users would not expect the company to collect location information when customers are not actively using the app." The complaint went nowhere. However, users must consent to the new surveillance. A popup—like the one shown at the top of this story—asks users to approve the tracking. Uber says on its site that riders "can disable location services through your device settings" and manually enter a pickup address. Uber and the New York Attorney General's office in January entered into an agreement to help protect users' location data. The deal requires Uber to encrypt location data and to protect it with multi-factor authentication. Source
  18. Encrypted Email Sign-Ups Instantly Double In Wake of Trump Victory ProtonMail suggests fear of the Donald prompting lockdown "ProtonMail follows the Swiss policy of neutrality. We do not take any position for or against Trump," the Swiss company's CEO stated on Monday, before revealing that new user sign-ups immediately doubled following Trump's election victory. ProtonMail has published figures showing that as soon as the election results rolled in, the public began to seek out privacy-focused services such as its own. CEO Andy Yen said that, in communicating with these new users, the company found people apprehensive about the decisions that President Trump might take and what they would mean considering the surveillance activities of the National Security Agency. "Given Trump's campaign rhetoric against journalists, political enemies, immigrants, and Muslims, there is concern that Trump could use the new tools at his disposal to target certain groups," Yen said. "As the NSA currently operates completely out of the public eye with very little legal oversight, all of this could be done in secret." ProtonMail was launched back in May 2014 by scientists who had met at CERN and MIT. In response to the Snowden revelations regarding collusion between the NSA and other email providers such as Google, they created a government-resistant, end-to-end encrypted email service. The service was so popular that it was "forced to institute a waiting list for new accounts after signups exceeded 10,000 per day" within the first three days of opening, the CEO previously told The Register when ProtonMail reopened free registration to all earlier this year. Yen said his service was now "seeing an influx of liberal users" despite its popularity on both sides of the political spectrum. 
"ProtonMail has also long been popular with the political right, who were truly worried about big government spying, and the Obama administration having access to their communications. Now the tables have turned," Yen noted. "One of the problems with having a technological infrastructure that can be abused for mass surveillance purposes is that governments can and do change, quite regularly in fact. "The only way to protect our freedom is to build technologies, such as end-to-end encryption, which cannot be abused for mass surveillance," Yen added. "Governments can change, but the laws of mathematics upon which encryption is based are much harder to change." Source
  19. In Germany, journalists uncovered that the browser add-on Web of Trust (WOT) saves users' browsing histories and sells this data. While the company claims that the data being sold is anonymized, the journalists were able to identify several users, among them journalists, judges, police officers and politicians of the German government. The politicians reacted with shock when they were confronted with the journalists' findings. The data contained all the websites people had visited, for instance travel information or porn websites. In one case the journalists could even access banking details and a copy of an identification card, all stored in an unencrypted online storage service. This opens the door for blackmail and identity theft The German politician Valerie Wilms (member of the Bundestag) was shocked when confronted with the data. It contained information such as journey routes and tax data, as well as ideas about her political work. The politician said that this kind of data “can be very harmful. It can open the door for blackmail”. She would feel “naked”. Other politicians called for laws against such data mining if the companies mining the data cannot be trusted. How does it work? The journalists explained that the data they received contained information collected by the browser plugin Web of Trust. This plugin verifies whether each website a person visits can be trusted. To do so, the plugin sends information about every visited website to its server. This data is stored and a profile of the user is created. While the company claims that it only sells the data in an anonymized form, the journalists said it was rather easy to figure out who the person in question was. For instance, the data contained information such as email addresses or login names that made it easy to deduce the user's name. Mass surveillance should be illegal. The politicians reacted with shock when they were confronted with data showing which websites they had visited. 
Their statements proved one thing: The politicians being monitored did not feel secure. And they all agreed on one thing: That such surveillance should be illegal. We at Tutanota agree completely. This is why we encrypt all user data end-to-end. We want to thank the investigative journalists at NDR for their great research. We hope that journalists - and politicians! - will more and more understand what the consequences of all-round surveillance are. Whenever there is surveillance the data can - and will - find its way into the wrong hands. We have to stop any form of monitoring in the first place. We can win the battle for privacy. When politicians start fighting alongside us, we can win this battle and take back what belongs to us: Our personal data. Because no one is allowed to accumulate our data and sell it. For now, we can be smarter than the data miners when using the internet: Encrypt as much information as possible. Use only very few browser plugins and make sure they do not collect your data. Use privacy-friendly services that do not collect and sell your data. Pay for your online services, instead of paying with your data! Article source
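The deanonymization the journalists describe needs no sophistication: "anonymized" URL histories routinely embed identifiers directly in paths and query strings. A minimal sketch of the idea in Python, using an invented sample log (the URLs, domains and parameter names below are hypothetical, not the journalists' actual data or method):

```python
# Sketch: scan an "anonymous" browsing history for tokens that identify
# the person behind it. Sample data below is entirely made up.
import re
from urllib.parse import urlparse, parse_qs

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extract_identifiers(urls):
    """Collect e-mail addresses and user/login parameters from URLs."""
    found = set()
    for url in urls:
        found.update(EMAIL.findall(url))          # e-mails embedded anywhere
        params = parse_qs(urlparse(url).query)    # identifying query params
        for key in ("user", "login", "username"):
            found.update(params.get(key, []))
    return found

history = [  # one "anonymized" user's log (hypothetical)
    "https://webmail.example/inbox?user=v.wilms",
    "https://bank.example/login?login=valerie.wilms",
    "https://news.example/article?to=wilms@bundestag.example",
]

print(extract_identifiers(history))
```

A single such token ties the entire history back to a named person, which is why selling "anonymized" clickstreams is so dangerous: the anonymity rests on the data itself, not on any technical guarantee.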
  20. New Reports Show How Vague Laws Can Pave the Way for Human Rights Violations We're proud to announce today's release of “Unblinking Eyes: The State of Communications Surveillance in Latin America,” a project that analyzes surveillance laws and practices in Latin America. On this day, let’s take a minute to reflect on the horrific consequences of unchecked surveillance. The Terror Archive In December 1992, following a hastily-drawn sketch of a map given to him by a whistleblower, the Paraguayan lawyer Martin Almada drove to an obscure police station in the suburb of Lambaré, near Asunción. Behind the police offices, in a run-down office building, he discovered a cache of 700,000 documents, piled nearly to the ceiling. This was the “Terror Archive,” an almost complete record of the interrogations, torture, and surveillance conducted by the Paraguayan military dictatorship of Alfredo Stroessner. The files reported details of “Operation Condor,” a clandestine program between the military dictatorships in Argentina, Chile, Paraguay, Bolivia, Uruguay, and Brazil between the 1970s and 1980s. The military governments of those nations agreed to cooperate in sending teams into other countries to track, monitor, and kill their political opponents. The files listed more than 50,000 deaths and 400,000 political prisoners throughout Argentina, Bolivia, Brazil, Chile, Paraguay, Uruguay, Colombia, Peru, and Venezuela. Stroessner's secret police used informants, telephoto cameras, and wiretaps to build a paper database on everyone that was viewed as a threat, plus their friends and associates. The Terror Archive shows how far a country's government might sink when unchecked by judicial authorities, public oversight bodies, and the knowledge of the general public. That was a quarter century ago. A modern Operation Condor would have far more powerful tools at hand than just ring-binders, cameras, and wiretapped phones. 
Today's digital surveillance technology leaves the techniques documented in the Terror Archive in the dust. Twentieth-century surveillance law contemplates the simple wiretapping of a single phone line, and offers no guidance on how to apply its regulations to our growing menagerie of spying capabilities. When new surveillance or cyber-security laws are passed, they are written to paper over existing practice, or to widen existing powers, such as data retention laws that force phone and Internet companies to log and retain even more data for state use. Each of these new powers is a ticking time-bomb waiting for abuse. One way to stop these powers from being turned against the public is to create robust and detailed modern privacy law that constrains their use, an independent judiciary that will enforce those limits, and a public oversight mechanism that lets the general public know what its country's most secretive government agents are up to in its name. Unfortunately, legislators and judges within Latin America and beyond have little insight into how existing surveillance law is flawed or how it might be fixed. To assist in that daunting task, EFF has released “Unblinking Eyes: The State of Communications Surveillance in Latin America.” For over a year, we have worked with partner organizations across Latin America (Red en Defensa de los Derechos Digitales, Fundación Karisma, TEDIC, Hiperderecho, Centro de Estudios en Libertad de Expresión y Acceso a la Información, Derechos Digitales, InternetLab, Fundación Acceso) to shed light on the current state of surveillance in the region, both in law and in practice. We've carefully documented existing laws in 13 countries and gathered evidence of the misapplication of those laws. Our aim is to understand the legal situation in each country and contrast it with existing human rights standards. 
For this work, we analyzed publicly available laws and practices in Argentina, Brazil, Chile, Colombia, El Salvador, Guatemala, Honduras, Peru, Mexico, Nicaragua, Paraguay, Uruguay, and the United States, and published individual reports documenting the state of communications surveillance in each of these countries. Then we took that research and produced a broader report that compares surveillance laws and practices throughout the entire region. Our project was not limited to legal research, however. We mixed our legal and policy work with on-site training throughout the region for digital rights activists, traditional human rights lawyers, investigative journalists, activists, and policy makers. We explained how surveillance technologies work and how governments must apply international human rights standards to their laws and practices in order to appropriately limit those legal powers. We also mixed our legal and policy workshops with technical advice on how our partners in the region can protect themselves against government surveillance. What have we learned? Given the deeply rooted culture of secrecy surrounding surveillance, it is hard to judge the extent to which states comply with their own published legal norms. Ensuring that law not only complies with human rights standards but also genuinely governs and describes the state's real-world behavior is an ongoing challenge. Even so, we identified deficiencies that are widespread throughout the region and in need of special and immediate action. Here are our recommendations:

- The culture of secrecy surrounding communications surveillance must stop. We need to ensure that civil society, companies, and policy makers understand the importance of transparency in the context of surveillance, and why transparency reporting from companies and the state is crucial to preventing abuses of power.
- State officials and civil society must ensure that written norms are translated into consistent practice and that any failure to uphold the law is discovered and remedied. Judicial guidance from impartial, independent, and knowledgeable judges is needed.
- States should have dedicated communications surveillance laws rather than a jigsaw puzzle of provisions spread throughout various pieces of legislation, and these laws should be necessary, proportionate, and adequate.
- The region should commit to implementing public oversight mechanisms whose resources and authority are carefully matched to those who wield these powers.
- Individuals must be granted due process and a right to be notified about a surveillance decision, with enough time and information to challenge that decision or seek other remedies whenever possible; innocent individuals affected by surveillance need avenues for redress.
- Lastly, we need a strong civil society coalition working on these issues.

With the help of watchful and informed judges and legislators, we hope that digital technology will be used wisely to protect, not violate, human rights. We must ensure that we build a world where the Terror Archive remains a grim record of past failings, not a low-tech harbinger of an even darker future. Read our reports, and learn about the situation of surveillance in Latin America. Join us to defend our rights and those of the future. Article source
  21. Yahoo's Spying Billboard: It Would ID You, Watch And Listen To Your Reactions To Ads Yahoo's idea is for the billboard's ad content to be based on real-time information about a crowd of people, who could be commuters on a train platform. Yahoo is exploring a smart billboard that would use microphones, cameras, and other sensors to bring targeted advertising to outdoor displays. The hacked web giant, still in damage control from this week's claims that it helped the government spy on its email users, has filed a patent application for the ultimate ad-targeting system: a billboard that uses sensors to watch, listen, and capture biometric data from the passing public. The billboards would have cameras, microphones, motion-proximity sensors, and biometric sensors, such as fingerprint or retinal scanners, or facial recognition, according to the patent, which was filed last year but published on Thursday. The sensors would be used to measure the engagement of passers-by. "For example, image data or motion-proximity sensor data may be processed to determine whether any members of the audience paused or slowed down near the advertising content, from which it may be inferred that the pause or slowing was in response to the advertising content (eg, a measurement of 'dwell time')," Yahoo writes. It could also use image or video data to determine whether any individuals looked directly at the advertising content. Alternatively, "Audio data captured by one or more microphones may be processed using speech-recognition techniques to identify keywords relating to the advertising that are spoken by members of the audience." As Yahoo explains, the ability to personalize ads for smartphones has made mobile the most efficient place to spend marketing budgets, whereas digital displays in public spaces, which still attract ad dollars, remain stuck on old technology. 
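The "dwell time" measurement quoted above is straightforward to sketch. The following is a hypothetical illustration only, not code from Yahoo's filing; the sensor reading format, the three-metre threshold, and the function name are all invented for the example:

```python
def dwell_time_seconds(readings, near_threshold_m=3.0):
    """Estimate how long a passer-by lingered near the display.

    `readings` is a chronological list of (timestamp_s, distance_m)
    tuples from a hypothetical motion-proximity sensor. Any interval
    that ends with the person inside the threshold counts as dwelling.
    """
    total = 0.0
    prev_t = None
    for t, distance_m in readings:
        if prev_t is not None and distance_m <= near_threshold_m:
            total += t - prev_t
        prev_t = t
    return total

# A passer-by approaches, lingers for about three seconds, then walks away:
dwell = dwell_time_seconds([(0.0, 5.0), (1.0, 2.0), (3.0, 2.5), (4.0, 6.0)])
```

A long dwell time near the display would then be read as engagement with the ad, which is exactly the inference the patent describes.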
But instead of individualizing ads, Yahoo's idea would be to 'grouplize': ad content would be based on real-time information about a crowd of people, who could be commuters on a train platform or cars passing a freeway billboard. In the freeway scenario, the billboard would be placed near traffic sensors that detect the number of passing vehicles, their speed, and the time of day. It might also use video to capture images of vehicles, and image recognition to determine their make and model in order to distill demographic data. The billboard may also use cell-tower data, mobile app location data, or image data to "identify specific individuals in the target audience, the demographic data (eg, as obtained from a marketing or user database) which can then be aggregated to represent all or a portion of the target audience". Alternatively, it could use vehicle GPS systems to identify specific vehicles and vehicle owners. "Those of skill in the art will appreciate from the diversity of these examples the great variety of ways in which an aggregate audience profile may be determined or generated using real-time information representing the context of the electronic public advertising display and/or additional information from a wide variety of sources," Yahoo notes. It sees potential for the system to be integrated with existing online ad exchanges, allowing advertisers to reach across devices with the same ads. It also envisages extending the online ad model of auctioning billboard space to the highest bidder, with content determined by the group's characteristics. If the smart billboards did their job of "grouplizing" and detected a group of young adult males, they might display a risqué dating-site ad, Yahoo says. That approach might be acceptable to some on a phone, but dangerous on the freeway. 
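The "grouplizing" step, aggregating individual demographics into one audience profile and then auctioning the slot to the best-matching bid, might look roughly like this. This is a minimal sketch with invented field names, keys, and data; nothing here comes from the actual patent or a real ad exchange:

```python
from collections import Counter

def aggregate_profile(individuals):
    """Collapse per-person demographics (e.g. from a marketing database)
    into a single group profile by taking the most common value of each
    field. Field names are illustrative."""
    profile = {}
    for field in ("age_band", "gender"):
        counts = Counter(person[field] for person in individuals)
        profile[field] = counts.most_common(1)[0][0]
    return profile

def pick_ad(profile, bids):
    """`bids` maps (age_band, gender) targeting keys to (advertiser, bid)
    pairs; return the highest bid whose targeting matches the profile."""
    key = (profile["age_band"], profile["gender"])
    matching = [offer for target, offer in bids.items() if target == key]
    return max(matching, key=lambda offer: offer[1]) if matching else None

# Three young men and one other commuter on the platform:
crowd = [{"age_band": "18-24", "gender": "m"}] * 3 + [{"age_band": "25-34", "gender": "f"}]
bids = {("18-24", "m"): ("dating_site", 2.50), ("25-34", "f"): ("suv", 3.00)}
winner = pick_ad(aggregate_profile(crowd), bids)
```

In Yahoo's young-adult-males example, this is the step that would surface the risqué dating-site ad; a real system would presumably also filter the candidate pool by context, such as removing video ads on a freeway.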
Yahoo says it has an answer for the distraction issue: "Any advertising content including video could, for example, be eliminated from the pool of available content or modified to remove video components." In May, New York Senator Charles Schumer called on the Federal Trade Commission to investigate the use of 'spying billboards', which he described as popping up in cities across the country. He warned that such technology may violate privacy rights, because of the way it tracks individuals' cell phone data, and may constitute a deceptive trade practice. Source
  22. Thanks to the power of algorithms, machine learning, and open source data sets. Back in March 2015, the CIA chief began setting up a new office, the Directorate of Digital Innovation, to integrate the latest tech into the agency's data-gathering workflow and boost the country's cyber defense. According to its director, the department has helped the CIA as a whole improve its "anticipatory intelligence." Speaking at the Next Tech event yesterday, Deputy Director for Digital Innovation Andrew Hallman noted that, in some instances, they have been able to forecast social unrest and societal instability in other countries by as much as three to five days out. That "anticipatory intelligence" has been boosted through a combination of algorithms and analytics that predict the flow of illicit goods or extremists, according to Defense One. Deep learning and machine learning make sense of seemingly disparate data, helping analysts see patterns to anticipate national security threats and apply those insights in the real world. "What we're trying to do within a unit of my directorate is leverage what we know from social sciences on the development of instability, coups and financial instability, and take what we know from the past six or seven decades and leverage what is becoming the instrumentation of the globe," Hallman said during yesterday's event. The analysts don't just pore through the intelligence community's own proprietary information, either. The Digital Innovation directorate has been using more and more open source data sets, with specialists who can combine public and agency information to draw more nuanced conclusions, which CIA director John Brennan called a tremendous advantage. Combined with its increasing surveillance of social media, the agency is clearly looking to gobble up as much information as possible. With tech's best data-parsing tools, it hopes to get days of lead time to prepare for riots and social decay across the globe. 
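As a very loose illustration of how open-source indicators might be folded into an "anticipatory" score, here is a toy logistic model. The indicator names, weights, and bias are entirely invented for this sketch; the CIA's actual models and features are not public:

```python
import math

# Illustrative indicator weights, invented for this sketch.
WEIGHTS = {
    "protest_mentions_per_1k_posts": 0.8,  # social media chatter
    "food_price_change_pct": 0.5,          # economic stress
    "currency_drop_pct": 0.4,              # financial instability
}
BIAS = -3.0  # baseline: unrest is rare

def unrest_probability(indicators):
    """Toy logistic model combining open-source indicators into a
    probability of unrest within the next few days."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in indicators.items())
    return 1.0 / (1.0 + math.exp(-z))

calm = unrest_probability({"protest_mentions_per_1k_posts": 0.0,
                           "food_price_change_pct": 0.0,
                           "currency_drop_pct": 0.0})
tense = unrest_probability({"protest_mentions_per_1k_posts": 5.0,
                            "food_price_change_pct": 2.0,
                            "currency_drop_pct": 1.0})
```

A real system would learn its weights from decades of historical data, as Hallman describes, rather than hard-coding them, and would draw on far richer features than three scalars.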
But how successful the agency is, and how far ahead it can accurately anticipate unrest, remains uncertain. First posted on Defense One. Source: https://www.engadget.com/2016/10/05/cia-claims-it-can-predict-some-social-unrest-up-to-5-days-ahead/
  23. Swiss Vote to Give Their Government More Spying Powers

Swiss approve new surveillance law with 66.5% majority

Last year, the country's parliament passed a law granting its secret service, the FIS (Federal Intelligence Service), more powers to snoop on emails, tap phones, or use hidden cameras and microphones. Such technologies and investigative procedures are common practice in other countries, but they had been outlawed under Switzerland's strict privacy rules.

New surveillance law passed in 2015, implementation delayed

The law, which the government argued was needed after the devastating Paris ISIS attacks, was contested by privacy groups and Swiss leftist political parties, which delayed its implementation and forced a country-wide referendum that took place this Sunday. The Swiss population made their voice heard over the weekend and, concerned by the ever-increasing threat from terrorist groups, voted to sacrifice some of their privacy for the sake of security. Switzerland, next to Germany and the northern Scandinavian countries, has some of the strictest privacy laws in Europe. So much so that it took Google years to get permission to map out the country via its Street View service.

Swiss secret service will need special authorization on a per-case basis

The FIS, which handles both internal and external cyber-espionage operations, will need special authorization from a court, the defense ministry, and the cabinet before launching internal surveillance operations. According to SwissInfo, opponents of the law struggled to win over the older generation, who mostly voted for the new surveillance powers. The publication also noted how little attention the campaign got in the media, with most coverage focusing on another topic in the three-vote referendum, a proposed 10 percent boost to the country's old age pension fund. 
The population voted against the pension-fund increase because it would put extra strain on the state's budget. The third issue concerned expanding Switzerland's green economy, which citizens also voted down. Source