Showing results for tags 'face recognition'.




Found 5 results

  1. California became the largest state to ban police from using body cameras with facial recognition software after lawmakers passed a three-year moratorium Thursday, Reuters reported. This landmark bill comes as legislators nationwide struggle to keep regulation apace with the fast-growing, and notoriously inaccurate, technology.

     The bill, AB-1215, bars state and local law enforcement from using facial recognition technology on body camera footage, both in real time and upon later review. There is one exception, though: if police release these videos to the public, they can use the software to blur faces for privacy reasons, per Reuters. An earlier draft proposed a seven-year ban but, according to Reuters, this was shortened to three years over “concerns that the technology might greatly improve.” The legislation secured the state Senate’s support earlier this week and will go into effect in 2020, pending approval from Governor Gavin Newsom.

     Critics argue the technology stands to seriously threaten Americans’ civil rights and civil liberties if used by police in its current state, particularly for people of color, since it is demonstrably less accurate on darker skin tones. At this point, facial recognition software often can’t even tell the difference between legislators and suspected criminals. U.S. law enforcement agencies have also already proven that a new gadget isn’t going to buck their track record of systemic abuse. Several departments have been caught using facial recognition technology in highly questionable and occasionally outright dumb ways, such as running edited photos, forensic sketches, or celebrity lookalikes through the software to search for suspects.

     All this is precisely why many digital rights organizations, including the 15-million-member-strong coalition Fight for the Future, joined together earlier this month to petition for a nationwide ban, calling the technology “unreliable, biased, and a threat to basic rights and safety.”

     California’s new bill follows similar legislation in Oregon and New Hampshire. Several cities, including San Francisco and Somerville, Massachusetts, as well as Axon, the largest manufacturer of police body cameras, have also followed suit. And Europe may not be too far behind. While it’s too early to tell whether the question of regulating facial recognition technology will reach the federal level any time soon, California’s decision will no doubt lend the issue serious weight, hopefully enough to land it on the agenda of national legislators.

     Source
  2. (Reuters) - Facebook Inc (FB.O) said on Tuesday that its face recognition technology will now be available to all users, with an option to opt out, and that it is discontinuing a related feature called ‘Tag Suggestions’.

     Face recognition, which has been available to some Facebook users since December 2017, notifies an account holder if their profile photo is used by someone else or if they appear in photos in which they have not been tagged. Tag Suggestions, which used face recognition only to suggest friends a user might tag in photos, has been at the center of a privacy lawsuit since 2015.

     The lawsuit, brought by Illinois users, accused the social media company of violating the state’s Biometric Information Privacy Act, claiming it illegally collected and stored biometric data of millions of users without their consent. Last month, a federal appeals court rejected Facebook’s effort to undo the class-action status of the lawsuit.

     “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” Facebook said last month. The company said it continues to engage with privacy experts, academics, regulators and its users on how it uses face recognition and the options users have to control it.

     Source
  3. European citizens may soon have protections most Americans lack: control over the use of their face recognition data. Facial recognition tech remains largely unregulated, and we’ve seen what police and other government operators can get away with in the absence of any meaningful rules. A few cities in the U.S. have outright banned its use by city agencies, but globally it remains a wild west of unbridled surveillance.

     The European Commission reportedly intends to counter this unjust reality. The EU has regulation in the works that would give citizens more power over how their facial recognition data is used, senior officials told the Financial Times. The plan would reportedly limit “the indiscriminate use of facial recognition technology” by companies and public authorities alike, and would give citizens the right to know when their facial recognition data is being used, according to a source who spoke with the Financial Times.

     According to the report, the restrictions on face recognition tech are part of a broader plan to address the use of artificial intelligence and to “foster public trust and acceptance” of the technology. A document obtained by the Financial Times stated that the intention is to “set a world-standard for AI regulation” with “clear, predictable and uniform rules … which adequately protect individuals.” “AI applications can pose significant risks to fundamental rights,” the document reportedly states. “Unregulated AI systems may take decisions affecting citizens without explanation, possibility of recourse or even a responsible interlocutor.”

     There have been numerous reports of facial recognition tech being inaccurate, misused, and abused, in ways that range from the tremendously dumb to the unsettlingly disturbing and, in some cases, life-endangering, which is often the point. The reported EU plan to target “indiscriminate” use of facial recognition in public areas would extend to both public and private entities. Officials and private companies have largely been able to keep deploying this technology in flawed and unethical ways because there are very few explicit laws or transparency requirements that would effectively limit or end its use.

     The EU’s plans are reportedly still in their early stages, so it’s unclear what the exact parameters of the regulation will be. Still, sweeping legislation drafted to address a technology that, unfortunately, is already being irresponsibly deployed is progress. The U.S. has yet to consider such a far-reaching plan; we have seen progress here as well, however, with three cities instituting bans on the technology and others considering similar prohibitions.

     It’s also unclear whether a massive surveillance system can coexist with individual rights to privacy and data. Granting individuals the right to know exactly how their biometric data is being used is an important step toward transparency, but if they don’t like what they learn, what recourse will they have? And while some regulation is better than none, some human rights advocates and ethical technologists might argue that merely putting limitations on a technology that can easily target vulnerable communities is not enough, and that we should instead ban it outright.

     Source
  4. Face recognition will be used to harm citizens if given to governments or police, writes Brian Brackeen, CEO of the face recognition and AI startup Kairos, in an op-ed published by TechCrunch today. Last week, news broke that bodycam maker Axon had requested a partnership with Kairos to explore face recognition. Brackeen declined, and writes today that “using commercial facial recognition in law enforcement is irresponsible and dangerous.”

     “As the Black chief executive of a software company developing facial recognition services, I have a personal connection to the technology both culturally, and socially,” Brackeen writes.

     Face recognition is one of the most contentious areas in privacy and surveillance studies because of issues of both privacy and race. A study by MIT computer scientist Joy Buolamwini published earlier this year found that face recognition is routinely less accurate on darker-skinned faces than on lighter-skinned faces. A serious problem, Brackeen reasons, is that as law enforcement relies more and more on face recognition, this racial disparity in accuracy will lead to consequences for people of color. “The more images of people of color it sees, the more likely it is to properly identify them,” he writes. “The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them. And misidentification could lead to wrongful conviction, or far worse.”

     Law enforcement agencies in the U.S. have increasingly relied on face recognition, celebrating the tech as a public safety service. Just last week, Amazon employees rallied against the use of Rekognition, the company’s face recognition technology, by police. Face scans at the Orlando airport, once optional for U.S. citizens, are now mandatory for all international travelers. And CBP has moved to institute face recognition at the Mexican border. In areas where identifying yourself is tied to physical safety, any inaccuracies or anomalies could lead to secondary searches and more interactions with law enforcement. If non-white faces are already more heavily scrutinized in high-security spaces, face recognition could only add to that.

     “Any company in this space that willingly hands this software over to a government, be it America or another nation’s, is willfully endangering people’s lives,” concludes Brackeen. “We need movement from the top of every single company in this space to put a stop to these kinds of sales.”

     More on this at [TechCrunch]

     Source