Showing results for tags 'facial recognition'.
Found 24 results

  1. How is it that our brains – the original face recognition program – can recognize somebody we know, even when they’re far away? As in, how do we recognize those we know in spite of their faces appearing to flatten out the further they are from us?

Cognitive experts say we do it by learning a face’s configuration – the specific pattern of feature-to-feature measurements. Then, even as our friends’ faces get optically distorted by being closer or further away, our brains employ a mechanism called perceptual constancy that optically “corrects” face shape… at least, it does when we’re already familiar with how far apart our friends’ features are.

But according to Dr. Eilidh Noyes, who lectures in Cognitive Psychology at the University of Huddersfield in the UK, the ease of accurately identifying people’s faces – enabled by our image-being-tweaked-in-the-wetware perceptual constancy – falls off when we don’t know somebody.

This also means that there’s a serious flaw in facial recognition systems that use what’s called anthropometry: the measurement of facial features from images. Given that the distance between the features of a face varies with camera-to-subject distance, anthropometry just isn’t a reliable method of identification, Dr. Noyes says. (A toy model of the distance effect appears in the sketch after this item.)

In the abstract of a paper published in the journal Cognition – reporting research by Noyes and the University of York’s Dr. Rob Jenkins on the effect of camera-to-subject distance on face recognition performance – the researchers write that identification of familiar faces was accurate, thanks to perceptual constancy. But they found that changing the distance between a camera and a subject – from 0.32m to 2.70m – impaired perceptual matching of unfamiliar faces, even though the images were presented at the same size.

In order to reduce the face-matching errors that stem from this flaw in anthropometry before migrating to real-world use cases – such as facial recognition being used in passport control or to create national IDs – industry has to take the distance piece of the puzzle into account, she says.

Or here’s a thought: perhaps this new finding can be used by lawyers working on behalf of people imprisoned after their faces were matched with those of suspects in grainy, low-quality photos – people like Willie Lynch, who was imprisoned even though an algorithm expressed only one star of confidence that it had generated the correct match.

Noyes, a specialist in the field, was one of 20 global academic experts invited to attend a recent conference at the University of New South Wales, which is home to the Unfamiliar Face Identification Group (UFIG). The conference’s title was Evaluating face recognition expertise: Turning theory into best practice. The University of Huddersfield said that the 20 world-leading experts in the science of face recognition assembled in Australia for a workshop and the conference, which were designed to lead to policy recommendations that will aid police, governments, the legal system and border control agencies.

Source: Huge flaw found in how facial features are measured from images (via Naked Security by Sophos)
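To make the distance effect concrete, here is a minimal pinhole-camera sketch in Python (our illustration, not the researchers’ code; the face geometry and focal length are assumed round numbers). Features sit at different depths, so their projected sizes scale differently as the subject moves away, and any ratio computed from them drifts with distance:

def projected_width(real_width_m, depth_m, focal_px=1000.0):
    # Perspective projection: apparent width in pixels is w = f * W / Z.
    return focal_px * real_width_m / depth_m

for subject_distance in (0.32, 2.70):  # the two distances used in the study, in metres
    eye_w = projected_width(0.063, subject_distance)         # eyes sit near the front of the face
    ear_w = projected_width(0.150, subject_distance + 0.10)  # ears sit ~10 cm further from the camera
    print(f"{subject_distance:.2f} m: eye-span / ear-span ratio = {eye_w / ear_w:.2f}")

At 0.32m the eye-to-ear ratio comes out around 0.55; at 2.70m it is around 0.44. An anthropometric matcher comparing such raw ratios would treat the same face photographed at the two distances as two different configurations, which is exactly the unreliability Dr. Noyes describes.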
  2. The complaint claims Google “failed to obtain consent from anyone” when it introduced facial recognition to its cloud service for storing and sharing photos. The lawsuit comes in the wake of a proposed $550 million settlement that Facebook Inc. reached in a BIPA class action.

Google has been hit with a new lawsuit accusing the company of violating an Illinois biometric privacy law by compiling faceprints.

“Unbeknownst to the average consumer ... Google’s proprietary facial recognition technology scans each and every photo uploaded to the cloud-based Google Photos for faces, extracts geometric data relating to the unique points and contours (i.e., biometric identifiers) of each face, and then uses that data to create and store a template of each face -- all without ever informing anyone of this practice,” Illinois resident Brandon Molander alleges in a class-action complaint filed Thursday in U.S. District Court for the Northern District of California.

This new complaint comes one week after Facebook said it had agreed to pay $550 million to settle a similar lawsuit.

Molander, who says he has had a Google Photos account for five years, claims the company is violating the Illinois Biometric Information Privacy Act. That law requires companies to obtain consumers’ written consent before collecting or storing scans of their facial geometry.

“Molander’s Google Photos account contains dozens of photos depicting Plaintiff Molander that were taken with his smart phone and automatically uploaded in Illinois to Google Photos,” his complaint alleges. “Google analyzed these photos by automatically locating and scanning Plaintiff Molander’s face, and by extracting geometric data relating to the contours of his face and the distances between his eyes, nose, and ears -- data which Google then used to create a unique template of Plaintiff Molander’s face.”

The complaint appears similar to one filed against Google in 2016 in federal district court in Illinois. That case, brought by Illinois residents Lindabeth Rivera and Joseph Weiss, was decided in Google’s favor last year by U.S. District Court Judge Edmond Chang, who ruled that the company’s alleged practices didn’t cause the kind of concrete injury that warrants a lawsuit. Chang said in his ruling that faces are “public” information, and that Google didn’t violate people’s privacy by using facial recognition technology on photos of faces. “All that Google did was to create a face template based on otherwise public information -- plaintiffs’ faces,” he wrote.

But federal judges in California came to the opposite conclusion in a lawsuit accusing Facebook of violating the Illinois privacy law. In that matter, a trial judge and a three-judge panel of the 9th Circuit Court of Appeals rejected Facebook’s argument that its alleged faceprint collection didn’t cause the kind of concrete injury that would support a lawsuit.

Source
  3. Face masks are mandatory in at least two provinces in China, including in the city of Wuhan. In an effort to contain the coronavirus strain that has caused nearly 500 deaths, the government is insisting that millions of residents wear protective face coverings when they go out in public.

As millions don masks across the country, the Chinese are discovering an unexpected consequence of covering their faces. It turns out that face masks trip up facial recognition-based functions, a technology necessary for many routine transactions in China. Suddenly, certain mobile phones, condominium doors, and bank accounts won’t unlock with a glance.

Complaints are plentiful on the popular Chinese blogging platform Weibo, reports Abacus, the Hong Kong-based technology news outlet. “[I’ve] been wearing a mask everyday recently and I just want to throw away this phone with face unlock,” laments one user. “Fingerprint payment is still better,” writes another. “All I want is to pay and quickly run.”

Most complaints are about unlocking mobile devices. Apple confirmed to Quartz that an unobstructed view of a user’s eyes, nose, and mouth is needed for FaceID to work properly. Similarly, Huawei says that its efforts to develop a feature that recognizes partially covered faces have fallen short. “There are too few feature points for the eyes and the head so it’s impossible to ensure security,” explained Huawei vice president Bruce Lee in a Jan 21 post on Weibo. “We gave up on facial unlock for mask or scarf wearing [users].”

Subverting surveillance cameras

Biometrics, including facial recognition, are essential to daily life in China on a scale beyond other nations. The technology is used for everything from ordering fast food meals to scheduling medical appointments to boarding a plane in more than 200 airports across the country. Facial recognition is even used in restrooms to prevent an occupant from taking too much toilet paper. And beyond quotidian transactions, the technology is a linchpin in the Chinese government’s scheme to police its 1.4 billion citizens. Last December, the government passed a new law that forces anyone registering a new mobile phone SIM card to undergo a face scan, in the stated interest of protecting “the legitimate rights and interest of citizens in cyberspace,” as the Chinese Ministry of Industry and Information Technology puts it. The technology is also used in some schools, where a camera records student attendance and can offer predictions about behavior and level of engagement.

Hong Kong’s government, incidentally, has been trying to impose a “mask ban” on protestors participating in anti-government rallies. The anonymity afforded by surgical masks, gas masks, and respirators has emboldened both police and demonstrators to act aggressively, without fear of being caught on camera.

Facial recognition technology that can “see through” disguises already exists, but it’s far from perfect. Researchers at the University of Cambridge and India’s National Institute of Technology, for instance, demonstrated one method that could identify a person wearing a mask with around 55% accuracy. In 2018, Panasonic introduced commercially available software that can ID people wearing surgical masks if the camera captures images at a certain angle.

Despite its widespread adoption across China, facial recognition technology in general has been found to be less reliable when processing non-white faces, observes Jessica Helfand, author of the new book Face: A Visual Odyssey. “The fact that surveillance is increasingly flawed with regard to facial recognition and Asian faces is a paradox made even more bizarre by the face mask thing,” Helfand says. A recent landmark study by the US National Institute of Standards and Technology revealed a racial bias in algorithms sold by Intel, Microsoft, Toshiba, Tencent, and DiDi Chuxing. It showed that African Americans, Native Americans, and Asians were 10 to 100 times more likely to be misidentified than a Caucasian subject.

Source
  4. Meanwhile, Europe wants to ban the technology for up to five years.

London’s Metropolitan Police Service has begun using live facial recognition (LFR) technology. At key areas throughout the city, signposted cameras will scan the faces of passing people, alerting officers to potential matches with wanted criminals. According to the Met, “this will help tackle serious crime, including serious violence, gun and knife crime, child sexual exploitation and help protect the vulnerable”.

In a tweet, the Met assured the public that any images that don’t trigger a potential alert are deleted immediately -- and that it’s up to officers whether or not they stop someone based on an alert. The technology operates as a standalone system and isn’t linked to any other imaging platform, such as CCTV or bodycams.

Despite the Met’s insistence that the technology can be used for good, however, some critics have lambasted LFR as ineffectual and, in some cases, unlawful. In April 2019, for example, a report from the University of Essex found that the Met’s LFR technology has an inaccuracy rate of 81 percent. The previous year, technology used by police in South Wales mistakenly identified 2,300 innocent people as potential criminals.

The Met’s new endeavor launches at a tumultuous time for facial recognition technology. Just last week the European Commission revealed it’s considering a ban on the use of LFR in public areas for up to five years, while regulators figure out how to prevent the tech from being abused. Meanwhile, privacy campaign group Big Brother Watch -- supported by more than 18 UK politicians and 25 additional campaign groups -- has called for a halt to adoption, citing concerns about implementation without proper scrutiny.

Source
  5. Every few days, China finds a new way to introduce facial recognition into people’s daily lives. According to a report from the South China Morning Post, Shanghai is testing facial recognition terminals at pharmacies to catch people attempting to buy controlled substances in substantial quantities, likely for resale.

The report notes that buyers of drugs containing sedatives and psychotropic substances will have to verify their identity through the terminal. The system scans both pharmacists and buyers to prevent wrongdoing on either side of the counter. The move is also intended to stop people from obtaining medicines that contain the raw materials for illegal drugs. For instance, ephedrine or pseudoephedrine, found in common cold medicines, is a key ingredient in producing crystal meth.

The system has been adopted by 31 healthcare institutions and has performed over 300 scans so far. The Shanghai city administration aims to cover the whole city with these terminals by the first half of 2021.

China has previously experimented with facial recognition for subway rides, payments, catching criminals, and SIM card purchases. While some of those applications may sound dystopian, this new system is comparatively well-intentioned, as it could help curb drug abuse.

Source
  6. Efforts to check the spread of face-scanning technology are meeting resistance on both sides of the Atlantic.

Face-scanning technology is inspiring a wave of privacy fears as the software creeps into every corner of life in the United States and Europe -- at border crossings, on police vehicles and in stadiums, airports and high schools. But efforts to check its spread are hitting a wall of resistance on both sides of the Atlantic.

One big reason: Western governments are embracing this technology for their own use, valuing security and data collection over privacy and civil liberties. And in Washington, U.S. President Donald Trump’s impeachment and the death of a key civil rights and privacy champion have snarled expectations for a congressional drive to enact restrictions.

The result is an impasse that has left tech companies largely in control of where and how to deploy facial recognition, which they have sold to police agencies and embedded in consumers’ apps and smartphones. The stalemate has persisted even in Europe’s most privacy-minded countries, such as Germany, and despite a bipartisan U.S. alliance of civil-libertarian Democrats and Republicans.

Advocates for tighter regulations point to China as an example of the technology’s nightmare potential, amid reports authorities are using it to indiscriminately track citizens in public, identify pro-democracy protesters in Hong Kong and oppress millions of Muslim Uighurs in Xinjiang. Current implementations of the software also perpetuate racial bias by misidentifying people of color far more frequently than white people, according to a U.S. government study released just before Congress left town for Christmas.

“Facial recognition needs to be stopped before a fait accompli is established,” Patrick Breyer, a member of the European Parliament for the Pirate Party Germany, told POLITICO. “The use of facial recognition technology poses a staggering threat to Americans’ privacy,” Democratic Senator Ed Markey, who is prepping legislation to crack down on the software, said in June.

But police and security forces across the West are still rapidly testing or rolling out the technology, adopting it as an inexpensive way to keep tabs on large groups of people. Cameras and artificial intelligence that can identify people based on their facial features are also showing up at border crossings, on police vehicles, at the entrances to stadiums and even in some high schools in the U.S. and Europe, where they are used to identify students. Such examples far outnumber the facial recognition bans enacted in San Francisco and some other U.S. cities.

In Washington, a once-promising bipartisan push in the House of Representatives to limit the federal government’s use of facial recognition has stalled for unrelated reasons, including the death of former House Oversight Chairman Elijah Cummings and acrimony over impeachment. And in the Senate, more limited proposals to rein in federal agencies’ use of the technology have been slow to pick up support.

But privacy activists are drawing a broader lesson from governments’ failure to check the technology’s spread, saying it is eroding the differences between the way Western governments and China approach public surveillance. “There is growing evidence that the U.S. is increasingly using AI in oppressive and harmful ways that echo China’s use,” AI Now, a research group at New York University, wrote in a report published this month that underscored the spread of “invasive” artificial intelligence technology.
‘Fundamental rights’

In both the U.S. and Europe, the stuttering progress of efforts to regulate facial recognition stems from a blend of reluctance by security-obsessed governments and setbacks that have stymied lawmakers’ focus. The latter obstacles have included the death in October of Cummings, whose panel had seemed poised to craft bipartisan legislation restricting face-scanning by federal agencies. Several top Democrats and Republicans on the committee have also been embroiled in the monthslong dispute over Trump’s impeachment and upcoming Senate trial. Democratic and Republican lawmakers alike acknowledged in interviews with POLITICO that the effort has stalled.

“Unfortunately impeachment has sucked all the energy out of the room,” Democratic Representative Stephen Lynch, who chairs the Oversight national security subcommittee, said this month. “There hasn’t been anything that’s happened right now. … Elijah’s death put that on hold,” said Representative Mark Meadows of North Carolina, the top Republican on the Oversight government operations subcommittee.

In the European Union, meanwhile, calls by top leaders for quick action on regulating artificial intelligence aren’t guaranteed to result in any binding EU-wide restrictions. Even the strict limits on gathering of “sensitive data” in the EU’s premier privacy rule, the General Data Protection Regulation, contain a broad carve-out for public authorities, which can collect sensitive biometric data if they can justify it. That loophole has allowed facial recognition technology to pop up in locations such as a major Berlin train station, where an experimental project by the authorities has scanned tens of thousands of passersby.

Even so, Breyer said he is confident Europe will ultimately end up with stricter limits on facial recognition than the United States. The lawmaker argues that the EU’s Charter of Fundamental Rights, which grants every European citizen “the right to the protection of personal data concerning him or her,” will protect Europeans from indiscriminate use of facial recognition, while the U.S. Constitution says nothing to that effect.

“In the U.S., if you’re moving around public spaces, there essentially is no right to privacy,” said Breyer, who was trained as a lawyer. “Here, it’s the other way around: There is a basic right to data protection and informational self-determination, which means that every piece of data that’s collected and processed about us means an intrusion into our basic rights, and [law enforcement agencies] can only do that on a legal basis, and after being granted permission.”

Tech industry leaders are meanwhile seizing on the opportunity to shape any global rules that may emerge. In December 2018, Microsoft President Brad Smith unveiled principles for facial recognition regulation in a rare call to action from a leading industry figure. Amazon Web Services’ public policy chief, Michael Punke, urged lawmakers to enact legislation that would both “protect civil rights while also allowing for continued innovation and practical application of the technology.” For now, though, face-scanning tools are rapidly becoming commonplace, embraced by the public and private sector alike.

Say goodbye to anonymity

At the same time, companies like Facebook, Apple and Google have built facial recognition into their most popular devices, for instance as a means of unlocking phones or automatically tagging friends in photos.
Amazon has emerged as a top supplier of easy-to-use facial recognition systems, whose customers have included police departments and U.S. government agencies. Among public authorities, the appetite for facial recognition systems seems to know no bounds. Across the United States, federal, state and local agencies have been conducting so-called experiments for years, with the Transportation Security Administration and U.S. Customs and Border Protection both using facial recognition at select points of entry. On the state and local levels, police departments in Florida, Colorado and Oregon have begun to adopt the technology, with others exploring its use.

35,000 cameras are being installed in Budapest

The situation is no different in Europe. In the U.K., two police forces have been testing facial recognition technology to identify passersby in real time with street cameras. The French government has rolled out a facial-recognition-enabled ID card over protests from digital rights groups. Meanwhile, Hungary’s interior ministry is installing 35,000 cameras across its capital, Budapest, and the rest of the country to capture both facial images and license plates. Facial recognition is already up and running at some European airports from Lisbon to Prague, where passengers from the EU and Switzerland can skip long lines by opting for an automated border control system that scans their faces.

“That’s worrying,” said Matthias Monroy, a civil liberties activist who works for the leftist Die Linke party in the German parliament. Paired with the power of artificial intelligence and high-powered computing, such technology may soon “make it impossible to move around anonymously in a public space,” he added.

‘Virtually unrestricted’

In the U.S., the debate around facial recognition has focused mainly on government’s use of the technology. After the American Civil Liberties Union issued a report on Amazon’s sale of its Rekognition software to police departments, San Francisco became the first city in the world to ban local agencies from using the technology, in a blueprint that activists hope will be replicated elsewhere. That cause has also found a following in Washington, where a coalition of congressional Democrats and libertarian-leaning Republicans, worried about the implications of unchecked government surveillance, wants to restrict the use of facial recognition by federal agencies like the FBI, TSA and Immigration and Customs Enforcement.

Leaders of the House Oversight Committee have explored a bill that would block funding for any new or expanded use of the technology by the federal government, the panel’s top Republican told POLITICO in August. “We don’t want any more money being used, no money used to expand what we have or to purchase any new ability to impact or use this technology,” Republican Representative Jim Jordan told POLITICO at the time.
A “vast majority” of House Oversight Republicans still favor a federal timeout, Jordan said more recently, though he conceded that “I’ve been all focused on impeachment.” Before his death, then-committee chairman Cummings warned at a hearing last spring that under current law, the government can use the technology to “monitor you without your knowledge and enter your face into a database that could be used in virtually unrestricted ways.” He later called the issue “one of the few things that everybody, Democrats and Republicans, seem to agree on.”

The committee’s new chairwoman, Democratic Representative Carolyn Maloney, has expressed a desire to revisit the issue but has not committed to pushing for a moratorium on federal use. And even if such a proposal were to advance in the Democratic-controlled House, it would likely face opposition in the GOP-controlled Senate from moderate Republicans, many of whom see the emerging technology as a crucial law enforcement tool.

“I think that is unlikely to gain support in the Senate,” Senator Chris Coons told POLITICO this month when asked about the prospects of a moratorium. Coons in November partnered with Senator Mike Lee of Utah, a libertarian-leaning Republican, to introduce legislation requiring that federal agencies obtain a warrant before using facial recognition for targeted surveillance. But even narrower proposals such as that bill, which has yet to pick up any additional co-sponsors, have so far failed to gain traction in the chamber.

In Brussels, new European Commission leader Ursula von der Leyen has promised to take on facial recognition as part of a pledge to write binding rules for artificial intelligence during the first 100 days of her term in office, which started on December 1. That call coincides with a series of rulings from data protection authorities in various EU states, which have called for caution in deploying mainly “live” facial recognition and urged public authorities to draft legislation around its use. In the past few weeks, privacy watchdogs in Sweden, France and the United Kingdom have all issued position papers on the subject, with the French and Swedish regulators halting efforts to deploy facial recognition at the entrances to high schools.

However, the Commission’s road map for AI does not necessarily mean any EU-wide rules on facial recognition will ever see the light of day. Despite promises to encode accepted practices in law, European officials have voiced a cacophony of opinions on what exactly the law on AI should contain, with high-ranking bureaucrats hinting that it may skew toward guiding principles instead of hard rules.

That is why it’s likely that courts and data protection watchdogs will write those rules in Europe. In late August, a British court rejected the first major attempt to curtail police use of facial recognition, saying security benefits outweighed the risk to privacy and individual freedoms. Legal experts expect European rules for facial recognition in law enforcement to come in the form of what’s known as “ex post” regulation: first, law enforcement will take advantage of existing legal gray zones and start deploying the technology -- until either a judge or a data protection authority restricts or stops that use, or rules that it conforms with existing law. Cases that could set such precedent are underway.

Source
  7. Allowing facial recognition technology to spread without understanding its impact could have serious consequences.

In the last few years facial recognition has been gradually introduced across a range of different technologies. Some of these are relatively modest and useful; thanks to facial recognition software you can open your smartphone just by looking at it, and log into your PC without a password. You can even use your face to get cash out of an ATM, and increasingly it’s becoming a standard part of the journey through the airport.

And facial recognition is still getting smarter. Increasingly it’s not just faces that can be recognised, but emotional states too, if only with limited success right now. Soon it won’t be too hard for a camera not only to recognise who you are, but also to make a pretty good guess at how you are feeling.

But one of the biggest potential applications of facial recognition on the near horizon is, of course, law and order. It is already being used by private companies to deter persistent shoplifters and pickpockets. In the UK and other countries, police have been testing facial recognition in a number of situations, with varying results.

There’s a bigger issue here, as the UK’s Information Commissioner Elizabeth Denham notes: “How far should we, as a society, consent to police forces reducing our privacy in order to keep us safe?” She warns that when it comes to live facial recognition, “never before have we seen technologies with the potential for such widespread invasiveness,” and has called for police, government and tech companies to work together to eliminate bias in the algorithms used, particularly that associated with ethnicity. She is not the only one raising questions about the use of facial recognition by police; similar questions are being asked in the US, and rightly so.

There is always a trade-off between privacy and security, and deciding where to draw the line between the two is key. But we also have to make that decision clearly and explicitly. At the moment there is a great risk that, as the use of facial recognition technology by government and business spreads, the decision will be taken away from us.

In the UK we’ve already built up plenty of the infrastructure you’d need if you were looking to build a total surveillance state. There are probably somewhere around two million private and local government security cameras in the UK, a number that is rising rapidly as we add our own smart doorbells and other web-connected security cameras to watch over our homes and businesses. In many cases it will be very easy to add AI-powered facial recognition analysis to all those video streams.

I can easily see a scenario where we achieve an almost-accidental surveillance state, through small steps, each of which makes sense on its own terms but which together combine to hugely reduce our privacy and freedoms, all in the name of security and efficiency. It is much easier to have legitimate concerns about privacy addressed before facial recognition is a ubiquitous feature of society. And the same applies to other related technologies, like gait recognition or other biometric systems that can recognise us from afar. New technology rolled out in the name of security is all but impossible to roll back.

For sure, these technologies can have many benefits, from making it quicker to unlock your phone to recognising criminals in the street. But allowing these technologies to become pervasive without rigorous debate about the need for them, their effectiveness and their broader impact on society is deeply unwise, and could leave us facing much bigger problems ahead.

Source: We must stop smiling our way towards a surveillance state (via ZDNet)
  8. A university lecturer in east China is suing a wildlife park for breach of contract after it replaced its fingerprint-based entry system with one that uses facial recognition, according to a local newspaper report.

Guo Bing, an associate law professor at Zhejiang Sci-tech University, bought an annual pass to Hangzhou Safari Park for 1,360 yuan (RM803) in April, Southern Metropolis Daily reported on Sunday. But when he was told last month about the introduction of the new system, he became concerned it might be used to “steal” his identity and asked for a refund, the report said.

The park declined to return his money, so Guo filed a civil lawsuit last week at a district court in Fuyang, Hangzhou, the capital of Zhejiang province. The report said the court had accepted the case, in which Guo is demanding 1,360 yuan (RM803) in compensation plus costs. “The purpose of the lawsuit is not to get compensation but to fight the abuse of facial recognition,” he was quoted as saying.

Guo said that when he bought the ticket – which offers 12 months’ unlimited visits to the park for himself and a family member – he was required to provide his name, phone number and fingerprints. He complied with the request and said he had visited the park on several occasions since. However, when the attraction upgraded its admission system, all annual pass-holders were asked to update their records – including having their photograph taken – before Oct 17, or they would no longer be allowed to enter, the report said. Guo said he believed the change was an infringement of his consumer rights.

Zhao Zhanling, a lawyer at the Beijing Zhilin Law Firm, said it was possible the park had breached the conditions of the original contract. “The plaintiff’s concern is totally understandable,” he said. Facial identities were “highly sensitive”, Zhao said, and the authorities “should strictly supervise how data is collected, used, stored and transferred”.

A manager at the park, who gave her name only as Yuan, said the upgrade was designed to improve efficiency at the entrance gates. Since the launch of the central government’s “smart tourism” initiative in 2015, more than 270 tourist attractions around the country have introduced facial recognition systems, Xinhua reported earlier.

Source: Chinese professor sues wildlife park after it introduces facial recognition entry system (via The Star Online)
  9. The UK's Information Commissioner's Office challenges the interpretation of a court ruling that gave the green light for using facial recognition on the public.

Police forces should be subject to a code of practice if they want to use live facial recognition technology on the public, according to the UK's Information Commissioner's Office (ICO). ICO commissioner Elizabeth Denham has released her opinion on police use of live facial recognition on the public, in response to a recent High Court ruling that South Wales Police didn't violate human rights or UK law by deploying the technology in a public space.

Denham argues facial recognition should be restricted to targeted deployments that are informed by intelligence and time-limited, rather than ongoing. She also reckons the High Court's decision "should not be seen as a blanket authorisation for police forces to use [live facial recognition] systems in all circumstances".

The case concerned police using live CCTV feeds to extract individuals' facial biometric information and match it against a watchlist of people of interest to police. Large-scale trials of facial recognition tech by South Wales Police and the Metropolitan Police Service (Met) for public safety have irked some people, who fear a dystopian future of mass surveillance combined with automated identification. The ICO kicked off an investigation in August over the use of surveillance cameras to track commuters and passersby in London. Denham raised concerns over people being identified in public without their consent.

Surveillance cameras themselves make some people uncomfortable, but technology that automatically identifies people raises new questions for privacy in public spaces. The Met began trialling the tech on shoppers in London last Christmas. Denham said live facial recognition was a significant change in policing techniques that raises "serious concerns". "Never before have we seen technologies with the potential for such widespread invasiveness. The results of that investigation raise serious concerns about the use of a technology that relies on huge amounts of sensitive personal information," she said.

Denham argues the UK needs "a statutory and binding code of practice" for the technology's deployment, due to a failure of current laws to manage the risks it poses. The privacy watchdog will be pushing the idea of a code of practice with the UK's chief surveillance bodies, including policing bodies, the Home Office and the Investigatory Powers Commissioner.

Denham argues in her opinion statement that for police to use facial recognition, they need to meet the threshold of "strict necessity" and also consider proportionality. She believes this threshold is likelier to be met in small-scale operations, such as when "police have specific intelligence showing that suspects are likely to be present at a particular location at a particular time". Another example is at airports, where live facial recognition supports "tailored security measures".

Source: Facial recognition could be most invasive policing technology ever, warns watchdog (via ZDNet)
  10. The Pixel 4's facial recognition will unlock your phone with your eyes closed.

Google announced its Pixel 4 series of handsets earlier this week, and one thing that's different this time around is that there's no fingerprint sensor on the back. Instead, the phones use a new face unlock feature. Unfortunately, it turns out that with face unlock, you don't actually have to be looking at the phone.

On Google's face unlock support page, the company confirmed that your Pixel 4 can be "unlocked by someone else if it’s held up to your face, even if your eyes are closed". The firm also noted that looking at the phone can unlock it when you don't mean to, and that it can be unlocked by someone who "looks a lot like you".

The only other mainstream flagship smartphones that have facial recognition and no fingerprint sensor are Apple's lineup of iPhones. And even when Apple's Face ID was first introduced with the iPhone X two years ago, it required not only that your eyes be open, but that you actually be looking at the device.

Sadly, there's no option on the Pixel to require your eyes to be open, although it wouldn't be surprising if Google adds one at some point. If you're worried about someone using your face to unlock your phone while you're sleeping, your only option is to turn off the feature completely.

Source: The Pixel 4's facial recognition will unlock your phone with your eyes closed (Neowin)
  11. California will block police officers from including facial recognition technology in their body cameras, joining the two other states that have created similar laws: Oregon and New Hampshire. On Tuesday Gov. Gavin Newsom signed AB1215 into law, barring law enforcement from using any “biometric surveillance system” in their body cameras, and enabling people to take legal action against officers who violate the law.

As the San Francisco Chronicle points out, state legislators were encouraged to pass the bill following an ACLU demonstration in which Amazon’s Rekognition software misidentified 26 lawmakers, incorrectly deeming them criminal suspects. “We wanted to run this as a demonstration about how this software is absolutely not ready for prime time,” said the bill’s author, Assemblymember Phil Ting, at a press conference following the test. “While we can laugh about it as legislators, it’s no laughing matter if you are an individual who is trying to get a job, if you are an individual trying to get a home, if you get falsely accused of an arrest, what happens, it could impact your ability to get employment, it absolutely impacts your ability to get housing.”

Yet even when the technology advances and is ready for “prime time”, it will only raise more ethical concerns. “When you’re talking about an AI tool on a body camera, then these are extra-human abilities,” Brian Brackeen, CEO of AI startup Kairos, told Gizmodo last year for a story about racial bias in face recognition technology. A month earlier Brackeen had revealed that Kairos turned down a contract with Axon, a manufacturer of body cameras. “Let’s say an officer can identify 30 images an hour. If you were to ask a police department if they were willing to limit [recognition] to 30 recognitions an hour they would say no,” Brackeen told Gizmodo. “Because it’s not really about the time of the officer. It’s really about a superhuman ability to identify people, which changes the social contract.”

According to the Chronicle, the California Peace Officers’ Association claims that no California law enforcement agencies currently use facial recognition technology in body cameras, but the newspaper reports that some agencies have considered adopting the technology. The original bill would have created a permanent ban, but Ting compromised in the face of protests from the Peace Officers’ Association and other police advocacy groups. As passed, the law expires in 2023.

Source
  12. Welsh cops' use of facial recognition is legal, High Court rules

The use of facial recognition by South Wales Police has been deemed lawful in a ruling handed down on Wednesday by the High Court in London following a judicial review. Civil rights group Liberty and Cardiff resident Ed Bridges had challenged the deployment of facial recognition in the first legal challenge to UK police use of the technology.

The technology was first used by South Wales Police in a trial during the Champions League Final at Cardiff's Millennium Stadium in June 2017. In total, South Wales Police is believed to have scanned the faces of more than 500,000 members of the public. Bridges claimed that he had been scanned at least twice: on Queen Street in Cardiff in December 2017, and at a protest against the arms trade in March 2018. The Metropolitan Police has also trialled facial recognition, with less than convincing results.

Liberty had claimed in court that facial recognition systems were little different from police fingerprinting or obtaining DNA, around which tight legal safeguards exist. However, the court ruled that while facial recognition might infringe upon people's privacy rights, it isn't unlawful per se. The court declared that the current legal framework governing facial recognition is adequate, but ought to be subject to periodic review.

Liberty, though, is campaigning for a complete ban on what it describes as an "authoritarian surveillance tool". Liberty lawyer Megan Goulding said: "This disappointing judgment does not reflect the very serious threat that facial recognition poses to our rights and freedoms.

"Facial recognition is a highly intrusive surveillance technology that allows the police to monitor and track us all. It is time that the government recognised the danger this dystopian technology presents to our democratic values and banned its use. Facial recognition has no place on our streets."

Police use of facial recognition, Liberty added, involves the processing of sensitive, personal data of everyone who is scanned, not just those on a watchlist. The organisation has vowed to appeal.

South Wales Police typically deploys facial recognition via cameras attached to vans. These scan people's faces and compute a biometric map of each face, which is then run against a database of facial biometric maps. When a positive match is made, the image is flagged for a manual review. (A toy sketch of this match-and-flag pipeline follows this item.) UK police have around 20 million mugshots in various databases. South Wales Police is also planning to put the technology onto police mobile phones, which will make its use even more widespread.

Source
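For readers curious how such a system fits together, here is a minimal Python sketch of the match-and-flag pipeline the article describes. It is an illustration only: the embedding size, the cosine-similarity measure and the 0.8 threshold are assumptions for the example, not details of the South Wales Police system.

import numpy as np

MATCH_THRESHOLD = 0.8  # assumed operating point; real deployments tune this trade-off

def cosine_similarity(a, b):
    # Similarity between two face templates ("biometric maps"), in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_face(probe, watchlist):
    # Return watchlist IDs whose templates exceed the threshold; these are
    # flagged for manual review, not treated as positive identifications.
    return [pid for pid, template in watchlist.items()
            if cosine_similarity(probe, template) >= MATCH_THRESHOLD]

# Toy usage: random 128-dimensional vectors stand in for real face templates.
rng = np.random.default_rng(0)
watchlist = {f"person_{i}": rng.normal(size=128) for i in range(100)}
probe = watchlist["person_7"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
print(screen_face(probe, watchlist))  # -> ['person_7'] is flagged for an officer to review

The design point worth noticing is the threshold: set it too low and many innocent passers-by get flagged; set it too high and wanted faces slip through. That trade-off is why the final decision is left to a human reviewer.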
  13. (Reuters) - A federal appeals court on Thursday rejected Facebook Inc’s (FB.O) effort to undo a class action lawsuit claiming that it illegally collected and stored biometric data for millions of users without their consent.

The 3-0 decision from the 9th U.S. Circuit Court of Appeals in San Francisco over Facebook’s facial recognition technology exposes the company to billions of dollars in potential damages to the Illinois users who brought the case. It came as the social media company faces broad criticism from lawmakers and regulators over its privacy practices. Last month, Facebook agreed to pay a record $5 billion fine to settle a Federal Trade Commission data privacy probe.

“This biometric data is so sensitive that if it is compromised, there is simply no recourse,” Shawn Williams, a lawyer for plaintiffs in the class action, said in an interview. “It’s not like a Social Security card or credit card number where you can change the number. You can’t change your face.”

Facebook said it plans to appeal. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email. Google, a unit of Alphabet Inc (GOOGL.O), won the dismissal of a similar lawsuit in Chicago last December.

The lawsuit began in 2015, when Illinois users accused Facebook of violating that state’s Biometric Information Privacy Act in collecting biometric data. Facebook allegedly accomplished this through its “Tag Suggestions” feature, which allowed users to recognize their Facebook friends from previously uploaded photos.

Writing for the appeals court, Circuit Judge Sandra Ikuta said the Illinois users could sue as a group, rejecting Facebook’s argument that their claims were unique and required individual lawsuits. She also said the 2008 Illinois law was intended to protect individuals’ “concrete interests in privacy,” and Facebook’s alleged unauthorized use of a face template “invades an individual’s private affairs and concrete interests.”

The court returned the case to U.S. District Judge James Donato in San Francisco, who had certified a class action in April 2018, for a possible trial. Illinois’ biometric privacy law provides for damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation. Williams, a partner at Robbins Geller Rudman & Dowd, said the class could include 7 million Facebook users.

The FTC probe arose from the discovery that Facebook had let British consulting firm Cambridge Analytica harvest users’ personal information. Facebook’s $5 billion payout still requires U.S. Department of Justice approval.

The case is Patel et al v Facebook Inc, 9th U.S. Circuit Court of Appeals, No. 19-15982.

Source
  14. Don’t look now: why you should be worried about machines reading your emotions

Machines can now allegedly identify anger, fear, disgust and sadness. ‘Emotion detection’ has grown from a research project to a $20bn industry.

Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short.

While developing the program, the TSA consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception.

But when the program was rolled out in 2007, it was beset with problems. Officers were referring passengers for interrogation more or less at random, and the small number of arrests that came about were on charges unrelated to terrorism. Even more concerning was the fact that the program was allegedly used to justify racial profiling. Ekman tried to distance himself from Spot, claiming his method was being misapplied. But others suggested that the program’s failure was due to an outdated scientific theory that underpinned Ekman’s method; namely, that emotions can be deduced objectively through analysis of the face.

In recent years, technology companies have started using Ekman’s method to train algorithms to detect emotion from facial expressions. Some developers claim that automatic emotion detection systems will not only be better than humans at discovering true emotions by analyzing the face, but that these algorithms will become attuned to our innermost feelings, vastly improving interaction with our devices. But many experts studying the science of emotion are concerned that these algorithms will fail once again, making high-stakes decisions about our lives based on faulty science.

Your face: a $20bn industry

Emotion detection technology requires two techniques: computer vision, to precisely identify facial expressions, and machine learning algorithms to analyze and interpret the emotional content of those facial features. Typically, the second step employs a technique called supervised learning, a process by which an algorithm is trained to recognize things it has seen before. The basic idea is that if you show the algorithm thousands and thousands of images of happy faces with the label “happy”, then when it sees a new picture of a happy face, it will, again, identify it as “happy”. (A toy version of this training step appears in the sketch at the end of this item.)

A graduate student, Rana el Kaliouby, was one of the first people to start experimenting with this approach. In 2001, after moving from Egypt to Cambridge University to undertake a PhD in computer science, she found that she was spending more time with her computer than with other people. She figured that if she could teach the computer to recognize and react to her emotional state, her time spent far away from family and friends would be less lonely.
Kaliouby dedicated the rest of her doctoral studies to this problem, eventually developing a device that helped children with Asperger syndrome read and respond to facial expressions. She called it the “emotional hearing aid”.

In 2006, Kaliouby joined the Affective Computing lab at the Massachusetts Institute of Technology, where together with the lab’s director, Rosalind Picard, she continued to improve and refine the technology. Then, in 2009, they co-founded a startup called Affectiva, the first business to market “artificial emotional intelligence”.

At first, Affectiva sold their emotion detection technology as a market research product, offering real-time emotional reactions to ads and products. They landed clients such as Mars, Kellogg’s and CBS. Picard left Affectiva in 2013 and became involved in a different biometrics startup, but the business continued to grow, as did the industry around it. Amazon, Microsoft and IBM now advertise “emotion analysis” as one of their facial recognition products, and a number of smaller firms, such as Kairos and Eyeris, have cropped up, offering similar services to Affectiva. Beyond market research, emotion detection technology is now being used to monitor and detect driver impairment, test user experience for video games and help medical professionals assess the wellbeing of patients.

Kaliouby, who has watched emotion detection grow from a research project into a $20bn industry, feels confident that this growth will continue. She predicts a time in the not too distant future when this technology will be ubiquitous and integrated in all of our devices, able to “tap into our visceral, subconscious, moment by moment responses”.

A database of 7.5m faces from 87 countries

As with most machine learning applications, progress in emotion detection depends on accessing more high-quality data. According to Affectiva’s website, they have the largest emotion data repository in the world, with over 7.5m faces from 87 countries, most of it collected from opt-in recordings of people watching TV or driving their daily commute. These videos are sorted through by 35 labelers based in Affectiva’s office in Cairo, who watch the footage and translate facial expressions to corresponding emotions – if they see lowered brows, tight-pressed lips and bulging eyes, for instance, they attach the label “anger”. This labeled data set of human emotions is then used to train Affectiva’s algorithm, which learns how to associate scowling faces with anger, smiling faces with happiness, and so on.

This labelling method, which is considered by many in the emotion detection industry to be the gold standard for measuring emotion, is derived from a system called the Emotion Facial Action Coding System (Emfacs) that Paul Ekman and Wallace V Friesen developed during the 1980s. The scientific roots of this system can be traced back to the 1960s, when Ekman and two colleagues hypothesized that there are six universal emotions – anger, disgust, fear, happiness, sadness and surprise – that are hardwired into us and can be detected across all cultures by analyzing muscle movements in the face. To test the hypothesis, they showed diverse population groups around the world photographs of faces, asking them to identify what emotion they saw.
They found that despite enormous cultural differences, humans would match the same facial expressions with the same emotions. A face with lowered brows, tight-pressed lips and bulging eyes meant “anger” to a banker in the United States and a semi-nomadic hunter in Papua New Guinea. Over the next two decades, Ekman drew on his findings to develop his method for identifying facial features and mapping them to emotions. The underlying premise was that if a universal emotion was triggered in a person, then an associated facial movement would automatically show up on the face. Even if that person tried to mask their emotion, the true, instinctive feeling would “leak through”, and could therefore be perceived by someone who knew what to look for.

Throughout the second half of the 20th century, this theory – referred to as the classical theory of emotions – came to dominate the science of emotions. Ekman made his emotion detection method proprietary and began selling it as a training program to the CIA, FBI, Customs and Border Protection and the TSA. The idea of true emotions being readable on the face even seeped into popular culture, forming the basis of the show Lie to Me. And yet, many scientists and psychologists researching the nature of emotion have questioned the classical theory and Ekman’s associated emotion detection methods.

In recent years, a particularly powerful and persistent critique has been put forward by Lisa Feldman Barrett, professor of psychology at Northeastern University. Barrett first came across the classical theory as a graduate student. She needed a method to measure emotion objectively and came across Ekman’s methods. On reviewing the literature, she began to worry that the underlying research methodology was flawed – specifically, she thought that by providing people with preselected emotion labels to match to photographs, Ekman had unintentionally “primed” them to give certain answers. She and a group of colleagues tested the hypothesis by re-running Ekman’s tests without providing labels, allowing subjects to freely describe the emotion in the image as they saw it. The correlation between specific facial expressions and specific emotions plummeted.

Since then, Barrett has developed her own theory of emotions, which is laid out in her book How Emotions Are Made: The Secret Life of the Brain. She argues there are no universal emotions located in the brain that are triggered by external stimuli. Rather, each experience of emotion is constructed out of more basic parts. “They emerge as a combination of the physical properties of your body, a flexible brain that wires itself to whatever environment it develops in, and your culture and upbringing, which provide that environment,” she writes. “Emotions are real, but not in the objective sense that molecules or neurons are real. They are real in the same sense that money is real – that is, hardly an illusion, but a product of human agreement.”

Barrett explains that it doesn’t make sense to talk of mapping facial expressions directly on to emotions across all cultures and contexts. While one person might scowl when they’re angry, another might smile politely while plotting their enemy’s downfall. For this reason, assessing emotion is best understood as a dynamic practice that involves automatic cognitive processes, person-to-person interactions, embodied experiences, and cultural competency.
“That sounds like a lot of work, and it is,” she says. “Emotions are complicated.”

Kaliouby agrees – emotions are complex, which is why she and her team at Affectiva are constantly trying to improve the richness and complexity of their data. As well as using video instead of still images to train their algorithms, they are experimenting with capturing more contextual data, such as voice, gait and tiny changes in the face that take place beyond human perception. She is confident that better data will mean more accurate results. Some studies even claim that machines are already outperforming humans in emotion detection.

But according to Barrett, it’s not only about data, but how data is labeled. The labelling process that Affectiva and other emotion detection companies use to train algorithms can only identify what Barrett calls “emotional stereotypes”, which are like emojis, symbols that fit a well-known theme of emotion within our culture.

According to Meredith Whittaker, co-director of the New York University-based research institute AI Now, building machine learning applications based on Ekman’s outdated science is not just bad practice, it translates to real social harms. “You’re already seeing recruitment companies using these techniques to gauge whether a candidate is a good hire or not. You’re also seeing experimental techniques being proposed in school environments to see whether a student is engaged or bored or angry in class,” she says. “This information could be used in ways that stop people from getting jobs or shape how they are treated and assessed at school, and if the analysis isn’t extremely accurate, that’s a concrete material harm.”

Kaliouby says that she is aware of the ways that emotion detection can be misused and takes the ethics of her work seriously. “Having a dialogue with the public around how this all works and where to apply and where not to apply it is critical,” she told me.

Having worn a headscarf in the past, Kaliouby is also keenly aware of the importance of building diverse data sets. “We make sure that when we train any of these algorithms the training data is diverse,” she says. “We need representation of Caucasians, Asians, darker skin tones, even people wearing the hijab.” This is why Affectiva collects data from 87 countries. Through this process, they have noticed that in different countries, emotional expression seems to take on different intensities and nuances. Brazilians, for example, use broad and long smiles to convey happiness, Kaliouby says, while in Japan there is a smile that does not indicate happiness, but politeness. Affectiva have accounted for this cultural nuance by adding another layer of analysis to the system, compiling what Kaliouby calls “ethnically based benchmarks”, or codified assumptions about how an emotion is expressed within different ethnic cultures.

But it is precisely this type of algorithmic judgment based on markers like ethnicity that worries Whittaker most about emotion detection technology, suggesting a future of automated physiognomy. In fact, there are already companies offering predictions for how likely someone is to become a terrorist or pedophile, as well as researchers claiming to have algorithms that can detect sexuality from the face alone. Several studies have also recently shown that facial recognition technologies reproduce biases that are more likely to harm minority communities.
One study, published in December last year, shows that emotion detection technology assigns more negative emotions to black men’s faces than to their white counterparts.

When I brought up these concerns with Kaliouby, she told me that Affectiva’s system does have an “ethnicity classifier”, but that they are not using it right now. Instead, they use geography as a proxy for identifying where someone is from. This means they compare Brazilian smiles against Brazilian smiles, and Japanese smiles against Japanese smiles.

“What if there was a Japanese person in Brazil?” I asked. “Wouldn’t the system think they were Brazilian, and miss the nuance of the politeness smile?”

“At this stage,” she conceded, “the technology is not 100% foolproof.”

https://www.theguardian.com/technology/2019/mar/06/facial-recognition-software-emotional-science
  15. Toronto police used facial recognition technology in an attempt to identify people in 2,591 searches since March of last year, according to a report by Chief Mark Saunders which revealed the force’s use of the technology, the Toronto Star reports.

A report submitted to the Toronto police services board shows that images from public and private cameras are matched against an internal database of 1.5 million mugshots, and that the system’s use so far has cost CAD $451,718 (just over US$335,000). According to Saunders, the system was purchased to help police more quickly and efficiently identify suspects, including violent offenders. A provincial grant for police modernization funded the purchase.

New Democratic Party Member of Parliament Charlie Angus told the Star that no legislative oversight is in place, and therefore “we need to hit the pause button.” Angus is also part of the House of Commons Standing Committee on Access to Information, Privacy and Ethics, which is currently studying artificial intelligence ethics. Angus told Mobile Syrup that his office is considering legislative changes to restrict facial recognition. Facial recognition is limited in Canada by the Personal Information Protection and Electronic Documents Act (PIPEDA), which requires consent for the collection, use, or transmission of personal information, but does not specifically deal with AI or facial recognition. “I think we really need to look at putting limits on facial recognition technology and lay the ground rules before it gets widely implemented,” Angus says.

Toronto police’s implementation generated matches for roughly 60 percent of 1,516 searches between March and December of 2018, and about 80 percent of those matches led to the identification of criminal offenders, according to the report to the police services board. The report also states that facial recognition is not used as a sole basis for arrests, unlike fingerprint identification, but rather to produce potential candidates for further investigation. Information provided by facial recognition helped to solve four homicides, multiple sexual assaults, armed robberies, and gang-related crimes.

The Star asked Toronto Police Services about false positive rates overall and for different ethnic groups, and was told the technology is not used to make a positive identification. A representative of the police also said the force has no plans to extend its database beyond the existing mug-shot collection, and that it does not use real-time facial recognition and does not have legal authorization to do so. Only six FBI-trained officers have access to the system, and while body camera images can be used when a criminal offence is captured on camera, court permission is still required.

Saunders’ report notes that the police began a year-long pilot project in September 2014, and conducted a Privacy Impact Assessment in 2017. “The fact that there has been very little — virtually no — public conversation about the fact that this is happening, despite the fact that they’ve been looking into it for at least the past five years … raises questions for me,” the Canadian Civil Liberties Association’s Privacy, Technology and Surveillance Project Director Brenda McPhail told the Star. Calgary is the only other Canadian city where police are known to use facial recognition technology.

No federal legislation is imminent, but the House committee is trying to position the issue to move forward in the next sitting of parliament, according to Angus.
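Working backward from the report's rounded percentages gives a sense of scale. A minimal sketch of the arithmetic (approximate, since the report publishes only rounded rates):

```python
# Back-of-the-envelope arithmetic from the rounded figures in the report.
searches = 1516        # searches between March and December 2018
match_rate = 0.60      # ~60% of searches generated a candidate match
ident_rate = 0.80      # ~80% of those matches identified an offender

matches = searches * match_rate         # ~910 searches with matches
identifications = matches * ident_rate  # ~728 identifications

print(f"~{matches:.0f} matches, ~{identifications:.0f} identifications")
print(f"overall hit rate: ~{identifications / searches:.0%} of all searches")
```

In other words, under the report's rounded rates, roughly half of all searches in that period ultimately pointed investigators at a named candidate.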
Liberal MP and committee co-chair Nathaniel Erskine-Smith told Mobile Syrup that there are benefits to some AI technologies, but acknowledged that negative outcomes could necessitate a ban. “Where it is not mitigated and in the case of San Francisco, if there is clear evidence that employing the algorithm leads to racial profiling then governments, be it local or national, should prohibit the use of that technology,” Erskine-Smith says.

Canada’s government recently set up an Advisory Council on Artificial Intelligence.

Source
  16. Facial Recognition ‘Consent’ Doesn’t Exist, Threatpost Poll Finds

Half of Threatpost readers surveyed in a recent poll don’t believe that consent realistically exists when it comes to facial recognition.

Half of respondents in a recent Threatpost poll said that they don’t believe consent realistically exists when it comes to real-life facial recognition. The poll of 170 readers comes as facial recognition applications continue to pop up in the real world – from airports to police forces. While biometrics certainly has advantages – such as making identification more efficient – what counts as consent from people whose biometrics are being captured remains an open question, with 53 percent of respondents saying they don’t believe that consent exists or is possible in real-life facial recognition applications.

Another 32 percent of respondents said that consent amounts to notifying people that an area is using facial recognition; only 10 percent said consent means the ability to opt out of facial recognition applications.

The issue of biometrics consent came to the forefront again in December when the Department of Homeland Security unveiled a facial-recognition pilot program for monitoring public areas surrounding the White House. When asked about consent, the department said that the public cannot opt out of the pilot, except by avoiding the areas that will be filmed as part of the program. “A very weak form of protection is if the government or a business [that uses biometrics for] surveillance, they notify people,” Adam Schwartz, senior staff attorney with the Electronic Frontier Foundation’s civil liberties team, told Threatpost. “We think this is not consent – real consent is where they don’t aim a camera at you.”

Beyond consent, more than half of poll respondents said that they have negative feelings toward facial recognition due to issues related to privacy and security – while another 30 percent said they have “mixed” feelings, understanding both the benefits and the privacy concerns. When asked what concerns them the most about real-world facial recognition applications, 55 percent of those surveyed pointed to privacy and surveillance issues, while 29 percent pointed to the security of biometric information and how the data is shared.

Despite these concerns, biometrics continues to gain traction, with the EU last week approving a massive biometrics database for both EU and non-EU citizens. The EU’s approval of the database, called the “Common Identity Repository,” will aim to connect the systems used by border control, migration and law-enforcement agencies.

As biometrics use continues to grow, meanwhile, fully 85 percent of respondents said that they think facial recognition should be regulated in the future. Such laws exist or are being discussed as they relate to consent: an Illinois law, for instance, regulates collection of biometric information (including for facial recognition) without consent. However, that law only applies to businesses, not law enforcement. Meanwhile, a new bill introduced in the Senate in March, the “Commercial Facial Recognition Privacy Act,” would bar businesses that use facial recognition from harvesting and sharing user data without consent.

“The time to regulate and restrict the use of facial recognition technology is now, before it becomes embedded in our everyday lives,” said Jason Kelly, digital strategist with EFF, in a recent post.
“Government agencies and airlines have ignored years of warnings from privacy groups and Senators that using face recognition technology on travelers would massively violate their privacy. Now, the passengers are in revolt as well, and they’re demanding answers.” Source
  17. Facial recognition isn't ready to spot terrorists on the road.

New York's bid to identify road-going terrorists with facial recognition isn't going very smoothly so far. The Wall Street Journal has obtained a Metropolitan Transportation Authority email showing that a 2018 technology test on New York City's Robert F. Kennedy Bridge not only failed, but failed spectacularly -- it couldn't detect a single face "within acceptable parameters." An MTA spokesperson said the pilot program would continue at RFK as well as other bridges and tunnels, but it's not an auspicious start.

The problem may be inherent to the early state of facial recognition at these speeds. Oak Ridge National Laboratory achieved more than 80 percent accuracy in a study that spotted faces through windshields, but that was at low speed. It might not be ready for someone barrelling down a bridge.

Not that privacy advocates will necessarily mind. Facial recognition is already a contentious issue, let alone when it's being used to peep into cars. Whether or not you see it as an Orwellian intrusion that could lead to abuses of power, there are accuracy problems at the best of times. It sometimes has trouble recognizing non-white people and women, and it assumes a culprit won't wear a mask or another disguise. While no terrorist detection system is foolproof, there are real concerns that current approaches could generate false positives or let suspects slip through the cracks.

Source
  18. MoviePass founder wants to use facial recognition to score you free movies

Facial recognition is the linchpin to PreShow, an app that will earn you free movie tickets for watching ads -- so long as you're OK with your phone watching you back.

Everything free comes with a price. But PreShow, a new company from a founder of MoviePass, wants advertising to be the only price of a movie ticket. Launching a campaign on Kickstarter Thursday, PreShow is developing an app to earn you free movie tickets -- to any film in any theater -- if you watch 15 to 20 minutes of high-end advertising. Like a sponsored session of ad-free Spotify that you unlock with a special commercial, PreShow wants to make going to the movies feel free.

But PreShow hinges on what some may consider a cost and others consider a bargain: facial recognition. "If it weren't for facial recognition, I don't think we could still do it," Stacy Spikes, the founder and chief executive of PreShow, said in an interview last week. "If not, they could game this all day long."

Forgoing a password, PreShow's app will only unlock with your phone's facial recognition technology. And while you're watching the ads to earn that free ticket, your phone's camera monitors your level of attention. Walk away or even obscure part of your face? The ad will pause after five seconds.

Facial recognition is already playing an ever-growing role in your life, for good and for ill. It's the technology that helps you find all the snapshots of a particular friend in Google Photos; it's central to how some smart-home technologies aim to make your life simpler. But it also could scan your face at an airport or concert without your knowledge. As face recognition advances and spreads, it opens up a host of privacy and security worries. But without facial recognition, PreShow wouldn't be possible.

"We had two problems to solve: We didn't want people creating dummy accounts, and we're dealing with real currency at the end of the day, so we needed to uniquely lock it," Spikes said. "Facial recognition at the phone level is just a year and a half old. You couldn't do this company two years ago."

Watching a phone watching you

PreShow uses facial recognition for identification and verification -- it needs to make sure you're the only person who can open the app (and that you only have one account) and that you're actually watching those ads while they play. The unlocking mechanism is built upon your phone's existing face recognition capabilities. In a demo of PreShow, Spikes unlocked the app via his iPhone 7's front-facing camera.

The viewing accountability part was harder. PreShow built its app so that when you're watching an ad, a green border glows around the edge of the video. So long as the front-facing camera can recognize that your face is looking at the screen, the border stays green. Walk away, direct your head away from the camera or even obscure part of your face, and the outline turns red in the demo. After five seconds of red, the commercial automatically pauses. (A sketch of this logic appears below.)

Like the app itself, PreShow's privacy specifics are still in flux. A private beta launch of the PreShow app is slated to start in July, with Kickstarter patrons being the first to use and share it. The company hasn't specified when it will be made available to the public at large.
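PreShow hasn't published how its attention check is built, but the behavior described in the demo -- a green border while a face is detected, a pause after five seconds without one -- maps onto a simple timer-based loop. Here is a minimal sketch of that logic; face_visible() and the player object are hypothetical stand-ins, not PreShow's actual API:

```python
import time

PAUSE_AFTER_SECONDS = 5  # per the demo: five seconds of "red" pauses the ad

def run_attention_gate(player, face_visible):
    """Keep an ad playing only while the viewer's face is detected.

    Hypothetical stand-ins: face_visible() would wrap the phone's
    front-camera face detector; player is any object with play()/pause(),
    a `border` attribute and a `finished` flag.
    """
    last_seen = time.monotonic()
    while not player.finished:
        if face_visible():
            last_seen = time.monotonic()
            player.border = "green"
            player.play()
        else:
            player.border = "red"
            # Grace period: pause only after 5 continuous seconds without a face.
            if time.monotonic() - last_seen >= PAUSE_AFTER_SECONDS:
                player.pause()
        time.sleep(0.1)  # poll the detector about ten times a second
```

The grace period is what keeps a brief glance away from immediately stopping the ad.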
Among those still-unsettled specifics: PreShow hasn't finalized an end user license agreement yet. Yes, that's the "fine print" digital contract you always sign, probably without reading, whenever you sign up for an online service. But it would be where the company officially spells out how it plans to use its facial recognition. The beta testing will be key, PreShow says, as it will be working on terms and conditions during the beta as it learns from user feedback. Once PreShow starts letting people into the beta test, it will be sending out more details about the user experience overall, including the facial recognition technology, it said.

Among the specifics PreShow would define today: The app will not be recording anyone as they watch, and it won't be sharing personally identifiable data with third parties. For its advertising partners, it will provide aggregated and anonymized data -- as Spikes put it, "they will not get anything other than a 30,000-foot view" of your activity.

That 30,000-foot view will be based on basic demographic data you volunteer in the signup process. PreShow plans to ask you to identify your age, geography and gender so it can provide the aggregated demographic details that advertisers want. Will PreShow use facial recognition to verify that you were honest in identifying yourself as a man or a woman, for example? Facial recognition raises questions that PreShow will need to address in black-and-white disclosures.

Spikes notes that in other ways, this kind of advertising is more sensitive to your privacy. "What's happening today in media is the brand will buy a bunch of data, and then it will trade that data and it will follow you around and it will embed cookies -- it's a covert art," he said. "This is much more out in the open."

A MoviePass second act

Spikes was a co-founder of MoviePass in 2011, but he parted ways with the company after Helios and Matheson Analytics took control of it and introduced the $10-a-month, unlimited-daily-ticket deal. That unlimited deal was beloved, and new members subscribed in droves -- until it became notorious. The initiative drove MoviePass to the edge of insolvency, pitching the company into a farce of constantly changing prices and benefits.

Before MoviePass unraveled, Spikes (as a PreShow representative put it) "was let go." "You'd have to ask MoviePass about the specific rationale for the decision, but Stacy didn't agree with the direction the company was going. That likely had something to do with it," the spokeswoman said. "He started working on PreShow soon thereafter."

Like MoviePass, Spikes hopes that PreShow can help pull cinema into new ways of doing business. "If the innovation is there in a big way -- that is, universal from the consumer standpoint -- it helps cinema to leap forward. And hopefully, making moviegoing possibly free will radically do that again," he said.

But unlike MoviePass, where the cost of movie admissions for a single member could far outstrip the revenue their subscription brings in, PreShow won't be a business that operates in the red, Spikes said. The consumer pays his or her own way by virtue of watching the ads, whether the company's user base is 3,000 or 30 million. Thursday's Kickstarter launch will fund the company's initial free-ticket campaigns.
In addition, PreShow has funding from an unnamed angel investor, a former wireless industry executive with an interest in cinema. Spikes also raised the possibility that points earned in PreShow could be applied to buy products, like a pair of tennis shoes, in addition to movie tickets.

The advertising in the app will be video made by brands with integrations in movies -- think Dodge putting its cars in the Fast and the Furious franchise. It would also include behind-the-scenes-style videos involving brands and films -- imagine a featurette about Brie Larson's training regimen for Captain Marvel that's sponsored by a fitness company. Demos of PreShow's design opened the app to splashy full-screen image ads that would lead into video advertising when tapped.

Not all advertising in PreShow will earn you credit. While an ad campaign is newly released, it will run as a promotion that earns credit, but once that inventory is used up, the video will move to a library where it's still viewable without any rewards. For example, when a new Marvel film is coming out, the advertising for it may drop into PreShow two weeks in advance, and the company earns enough ad dollars for, say, 100,000 tickets. Once the promotional inventory "sells out," the credits are gone but the content will remain. "There will be scarcity in the ticket, but the advertising will have an afterlife," Spikes said.

But the price of a PreShow free movie ticket isn't just advertising, it's facial recognition too. It'll be up to you to decide whether that price is a steal.

Source
  19. With facial recognition, shoplifting may get you banned in places you've never been

There are hundreds of stores using facial recognition -- and no rules or standards to prevent abuse.

At my bodega down the block, photos of shoplifters sometimes litter the windows, a warning to would-be thieves that they're being watched. Those unofficial wanted posters come and go, as incidents fade from the owner's memory. But with facial recognition, getting caught in one store could mean a digital record of your face is shared across the country. Stores are already using the technology for security purposes and can share that data -- meaning that if one store considers you a threat, every business in that network could come to the same conclusion. One mistake could mean never being able to shop again.

While that may be good news for shopkeepers, it raises concerns about potential overreach. It's just one example of how facial recognition straddles the line between being a force for good and being a possible violation of personal privacy. Privacy advocates fear that regulations can't keep up with the technology -- found everywhere from your phone to selfie stations -- leading to devastating consequences.

"Unless we really rein in this technology, there's a risk that what we enjoy every day -- the ability to walk around anonymous, without fearing that you're being tracked and identified -- could be a thing of the past," said Neema Singh Guliani, the American Civil Liberties Union's senior legislative counsel.

The technology is appearing in more places every day. Taylor Swift uses it at her concerts to spot potential stalkers, with cameras hidden in kiosks for selfies. It's being used in schools in Sweden to mark attendance and at airports in Australia for passengers checking in. Supermarkets in the UK are using it to determine whether customers are old enough to buy beer. Millions of photos uploaded onto social media are being used to train facial recognition without people's consent. Revenue from facial recognition is expected to reach $10 billion by 2025, more than double the market's total in 2018.

But despite that forecast for rapid growth, there's no nationwide regulation on the technology in the US. The gap in standards means that the technology being used at US borders could have the same accuracy rate as facial recognition used to take selfies at a concert. Accuracy rates matter -- they're the difference between facial recognition determining you're a threat or an innocent bystander -- but there's no standard on how precise the technology needs to be.

Without any legal restrictions, companies can use facial recognition without limits. That means being able to log people's faces without telling customers their data is being collected. Two facial recognition providers told CNET that they don't check on their customers to make sure they're using the data properly. There are no laws requiring them to.
"So far, we haven't been able to convince our legislators that this is a big problem and will be an even larger problem in the future," said Jennifer Lynch, surveillance litigation director at the Electronic Frontier Foundation. "The time is now to regulate this technology before it becomes embedded in our everyday lives." Faced everywhere At the International Security Conference in New York last November, I walked past booths with hundreds of surveillance cameras. Many of them used facial recognition to log my gaze. These companies want this technology to be part of our daily routines -- in stores, in offices and in apartment buildings. One company, Kogniz, boasted it was capable of automatically enrolling people as they enter a camera's view. "Preemptively catalogues everyone ever seen by the camera so they can be placed on a watchlist," Kogniz's business card says. Kogniz's business card, boasting that it could create automatic watchlists. Alfred Ng / CNET This technology is available and advertised as a benefit to stores without any privacy concerns in mind. As more stores adopt this dragnet approach to facial recognition, data on your appearance could be logged anywhere you go. California-based video surveillance startup Kogniz launched in 2016 and now has about 30 retail and commercial customers, with thousands of security cameras using its facial recognition technology. Stores use Kogniz's facial recognition to identify known shoplifters. If a logged person tries entering the store, Kogniz's facial recognition will be able to detect that and flag security, Daniel Putterman, the company's co-founder and director, said in an interview. And it's not just for that one location. "We are a cloud system, so we're inherently multi-location," Putterman said. If someone is barred from one store because of facial recognition, that person could potentially be prevented from visiting another branch of that same store ever again. Kogniz also offers a feature called "collaborative security," which lets clients opt in to share facial recognition data with other customers and share potential threats across locations. That would mean facial recognition could detect you in a store you've never even visited to before. Putterman said none of Kogniz's customers has opted into that program yet, but it's still available. Recognition without regulation People don't have to be convicted of a crime to be placed on a private business' watch list. There aren't any rules or standards governing how companies use facial recognition technology. "The clients use it at their own discretion for their own purposes," Putterman said. "If someone is falsely accused of being a shoplifter, that's beyond our control." Amazon provides Rekognition, its facial recognition software, to law enforcement agencies like the Washington County sheriff's office in Oregon. The tech giant recommends that police use Rekognition with a 99 percent confidence threshold for the most accurate results. But the sheriff's office told Gizmodo in January that it doesn't use any established standards when employing facial recognition. Security cameras with facial recognition tech inside Gemalto, a digital security company, has been providing facial recognition to the Department of Homeland Security, which uses it at US exits to log foreign visitors leaving the country. The company also works with local police departments on facial recognition. 
As with Amazon's Rekognition, Gemalto's customers aren't beholden to any standards when using its facial recognition. "Once the customer has it, they're going to operate with the standards that they define," said Neville Pattinson, Gemalto's senior vice president for government programs. "It's not our responsibility to have involvement past the point of delivery."

The lack of standards across the entire industry leads to a massive potential for abuse, privacy advocates say. One store that uses Kogniz shares its login information with its local police department, Putterman said. He declined to disclose which store, but the police are automatically alerted when the store's facial recognition detects a flagged face -- even if the person is not committing a crime. "The retailer has given them permission to log in and see what's going on," Putterman said. "That local police department can look at the live footage and decide whether or not they want to act on it."

That practice comes with legal concerns, Guliani said. Police need a warrant to track a specific person's whereabouts, as the Supreme Court decided last June regarding phone location data. But with facial recognition, authorities can get around this limitation. "That means without a warrant, without any oversight, what law enforcement can do is track your movements any time you walk into a store," Guliani said.

And as facial recognition expands into more places, privacy advocates worry that more businesses will provide that access to law enforcement agencies with no limits. "If they're using it for when you're shopping, or driving through the Holland Tunnel for your daily commute, what happens when law enforcement wants to tap into all of those systems and use them for recording?" said Jake Laperruque, a senior counsel at the Constitution Project.

'We need clear rules'

The federal government hasn't taken action on facial recognition yet, but local governments are starting to limit how the technology can be used. In late January, San Francisco supervisor Aaron Peskin proposed legislation to completely ban the city's government agencies from using facial recognition. State lawmakers across the US have suggested similar legislation, as in Washington and Massachusetts. Last Thursday, senators proposed a bill that would stop businesses from using the technology without notifying customers and prevent them from sharing that data without people's consent. If passed, it would be the first federal law protecting your privacy from businesses using the technology.

Facial recognition providers balked at the proposed regulations, arguing that the technology's benefits outweigh the privacy concerns. "I would hate to see the technology end up with reactionary restrictions on the basis of concerns on privacy," Pattinson said.

Putterman said Kogniz understands the potential for abuse, and said his company does not sell facial recognition to government agencies. He hopes regulations will arrive so that the technology can be used responsibly.

As it finds its way into every corner of our daily lives, facial recognition offers benefits -- but without regulations, it could grow into an uncontrollable privacy violation, advocates argue. Before it does, many are calling for lawmakers to protect your privacy. "This isn't something that needs to be completely banned or cut off, but we need clear rules," Laperruque said.
Source
  20. Facial recognition: Apple, Amazon, Google and the race for your face

Facial recognition technology is both innovative and worrisome. Here's how it works and what you need to know.

Facial recognition is a blossoming field of technology that is at once exciting and problematic. If you've ever unlocked your iPhone by looking at it, or asked Facebook or Google to go through an unsorted album and show you pictures of your kids, you've seen facial recognition in action. Whether you want it to or not, facial recognition (sometimes called simply "face recognition") is poised to play an ever-growing role in your life. Your face could be scanned at airports or concerts with or without your knowledge. You could be targeted by personalized ads thanks to cameras at shopping malls.

Facial recognition has plenty of upside. The tech could help smart home gadgets get smarter, sending you notifications based on who it sees and offering more convenient access to friends and family. But at the very least, facial recognition raises questions of privacy. Experts have concerns ranging from the overreach of law enforcement, to systems with hidden racial biases, to hackers gaining access to your secure information.

Over the next few weeks, CNET will be diving into facial recognition with in-depth pieces on a wide variety of topics, including the science that allows it to work and the implications, both positive and negative, for many of its applications. To get you up to speed, here's a brief overview including what facial recognition is, how it works, where you'll find it in use today, as well as a few of the implications of this rapidly expanding corner of technology.

What is facial recognition?

Facial recognition is a form of biometric authentication, which uses body measurements to verify your identity. Facial recognition is the subset of biometrics that identifies people by measuring the unique shape and structure of their faces. Different systems use different techniques, but at its core, facial recognition uses the same principles as other biometric authentication techniques, such as fingerprint scanners and voice recognition.

How does facial recognition work?

All facial recognition systems capture either a two- or three-dimensional image of a subject's face, and then compare key information from that image to a database of known images. For law enforcement, that database could be collected from mugshots. For smart home cameras, the data likely comes from pictures of people you've identified as relatives or friends via the accompanying app.

Woodrow "Woody" Bledsoe first developed facial recognition software at a firm called Panoramic Research back in the 1960s using two-dimensional images, with funding for the research coming from an unnamed intelligence agency. Even now, most facial recognition systems rely on 2D images, either because the camera doesn't have the ability to capture depth information -- such as the length of your nose or the depth of your eye socket -- or because the reference database consists of 2D images such as mugshots or passport photos.

2D facial recognition primarily uses landmarks such as the nose, mouth and eyes to identify a face, gauging both the width and shape of the features, and the distance between the various features of the face. (A toy sketch of this landmark-distance approach follows.)
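To make that landmark-distance idea concrete, here is a minimal sketch: it reduces a face to scale-normalized pairwise distances between a few 2D landmark points -- the numerical code the article goes on to call a faceprint -- and compares two such vectors. The landmark coordinates are invented for illustration; real systems use far more measurements and learned features:

```python
from itertools import combinations
import math

# Hypothetical 2D landmarks (x, y in pixels) from a face detector:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
face_a = [(110, 120), (190, 118), (150, 170), (125, 210), (178, 208)]
face_b = [(112, 121), (188, 119), (151, 172), (126, 209), (177, 207)]

def faceprint(landmarks):
    """Reduce landmarks to a scale-normalized vector of pairwise distances."""
    dists = [math.dist(p, q) for p, q in combinations(landmarks, 2)]
    scale = max(dists)  # normalize so image size and camera distance drop out
    return [d / scale for d in dists]

def difference(fp1, fp2):
    """Euclidean distance between two faceprints; smaller means more similar."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp1, fp2)))

score = difference(faceprint(face_a), faceprint(face_b))
print(f"difference score: {score:.4f}")  # a real system compares this to a tuned threshold
```

As the article notes next, raw 2D distances like these change with head pose and lighting, which is precisely the weakness that 3D systems try to address.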
Those measurements are converted to a numerical code by facial recognition software, which is used to find matches. This code is called a faceprint. This geometric system can struggle with different angles and lighting. A straight-on shot of a face will show a different distance from nose to eyes, for instance, than a shot of a face turned to the side. The problem can be somewhat mitigated by mapping the 2D image onto a 3D model and undoing the rotation.

Adding a third dimension

3D facial recognition software isn't as easily fooled by angles and light, and doesn't rely on average head size to guess at a faceprint. With cameras that sense depth, the faceprint can include the contours and curve of the face as well as the depth of the eyes and distances from points like the tip of your nose. Most cameras gauge this depth by projecting invisible spectrums of light onto a face and using sensors to capture the distance of various points of this light from the camera itself. Even though these 3D sensors can capture much more detail than a 2D version, the basis of the technology remains the same -- turning the various shapes, distances and depths of a face into a numerical code and matching that code to a database. If that database consists of 2D images, software needs to convert the 3D faceprint back to a 2D faceprint to get a match.

Apple's Face ID uses 30,000 infrared dots that map the contours of your face. The iPhone then remembers the relative location of those dots the next time you try to unlock your phone. Even these more advanced systems can be defeated by something as simple as different facial expressions, or glasses or scarves that obscure parts of your face. Apple's Face ID can struggle to match your tired, squinting, just-woke-up face to your made-up, caffeinated, ready-for-the-day face.

Reading your pores

A more recent development, called skin texture analysis, could help future applications overcome all of these challenges. Developed by Identix, a tech company focused on developing secure means of identification, skin texture analysis differentiates itself by functioning at a much smaller scale. Instead of measuring the distance between your nose and your eyes, it measures the distance between your pores, then converts those numbers into a mathematical code. This code is called a skinprint. This method could theoretically be so precise that it can tell the difference between twins. Currently, Identix is working to integrate skin texture analysis into facial recognition systems alongside a more conventional 3D face map. The company claims the tech increases accuracy by 25 percent.

Where is facial recognition being used?

While Bledsoe laid the groundwork for the tech, modern facial recognition began in earnest in the 1980s and '90s thanks to mathematicians at MIT. Since then, facial recognition has been integrated into all manner of commercial and institutional applications with varying degrees of success.

The Chinese government uses facial recognition for large-scale surveillance via public CCTV cameras, both to catch criminals and to monitor the behavior of individuals with the intent of turning the data into a score. Seemingly harmless offenses like buying too many video games or jaywalking can lower your score. China uses that score for a sort of "social credit" system that determines whether the individual should be allowed to get a loan, buy a house or even do much simpler things like board a plane or access the internet.
The London Metropolitan Police also use it as a tool when narrowing their search for criminals, though their system reportedly isn't very accurate -- with incorrect matches reported in a whopping 98 percent of cases. In the US, police departments in Oregon and Florida are teaming up with Amazon to install facial recognition into government-owned cameras. Facial recognition is undergoing trials at airports to help move people through security more quickly. The Secret Service is testing facial recognition systems around the White House. Taylor Swift even used it to help identify stalkers at one of her concerts.

Facial recognition famously led to the arrest of the Capital Gazette shooter in 2018 by matching a picture of the suspect to an image repository of mugshots and pictures from driver's licenses. The upcoming 2020 Olympics in Tokyo will be the first to use facial recognition to help improve security.

Facial recognition could have large implications for retail outlets and marketers as well, beyond simply watching for thieves. At CES 2019, consumer goods giant Procter & Gamble showed a concept store where cameras could recognize your face and make personalized shopping recommendations.

Bringing facial recognition home

Aside from large-scale installations, facial recognition has several uses in consumer products. Beyond iPhones, some phones with Google's Android operating system, like the Google Pixel 2 and the Samsung Galaxy S9, are capable of facial recognition, but the technology on Android isn't yet secure enough to verify mobile payments. The next version of Android is expected to get a more secure facial recognition system closer to Apple's Face ID, although Samsung did not incorporate any facial recognition into its newest phone, the Galaxy S10, as many industry watchers had expected.

Facebook has used facial recognition for years to suggest tags for pictures. Other photo applications, such as Google Photos, are getting better at doing the same. In the smart home, after starting as a niche feature in connected cams such as the Netatmo Welcome, facial recognition is now built into several popular models, including the Nest Hello video doorbell. We saw a bunch of new gadgets with the tech on display at CES 2019.

Connected cams compare faces with others they've seen before so you can customize notifications based on who the camera sees. All the models we've tested take a while to learn faces, as they need to be able to recognize the members of your household at various angles and in various outfits. Once the cameras learn, you can use facial recognition to make your connected security system that much smarter by making your notifications more relevant to what you actually want to know. Beyond the security uses in the home, even robots like Lovot and Sony's Aibo robot dog can recognize faces. Aibo and others learn faces not to track who comes and goes, but to adapt to the specific preferences of different people over time.

What are the implications?

Unlike other forms of biometric authentication, cameras can gather information about your face with or without your knowledge or consent. If you're a privacy-minded person, you could potentially be exposing your data in a public place without knowing it. Because the technology is so new, there aren't any laws in the US limiting what companies can do with images of your face after they capture it.
A bipartisan bill was recently introduced in the Senate to rectify the lack of regulation. The American Civil Liberties Union delivered a petition to Amazon last year asking it to stop giving its facial recognition technology to law enforcement agencies and the government, calling the prospect "a user manual for authoritarian surveillance."

According to a report by Buzzfeed, the US Customs and Border Protection agency plans to implement facial recognition to verify the identity of passengers on international flights in airports across the country. The Electronic Privacy Information Center shared documents with Buzzfeed suggesting that the CBP skipped over gathering public feedback before starting to implement these systems, that the systems have a questionable accuracy rate, and that few privacy regulations are established as far as what the airlines can do with this facial data after they collect it.

NBC News reported that the databases of pictures used to improve facial recognition often come from social media sites without the consent of the subject or photographer. Companies like IBM have the stated goal of using these images to try to improve the accuracy of facial recognition, particularly among people of color. Theoretically, by ingesting the data from a large catalog of faces, a system can fine-tune its algorithms to account for a larger variety of facial structures. The Electronic Frontier Foundation notes that current facial recognition systems tend to produce a disproportionately high number of false positives when identifying minorities. NBC's story also details how it can be tedious to impossible for private citizens to opt out of having their pictures used in these databases.

Facebook faces a class action lawsuit over its own facial recognition technology, called DeepFace, which identified people in photos without their consent. Smart home company Ring, an Amazon subsidiary, also came under fire last year for filing patents based on facial recognition technology that could have violated civil rights. Ring's video doorbells would have monitored neighborhoods for known sex offenders and those on "most wanted" lists and could then have automatically notified law enforcement. The idea was criticized as likely to target those unfairly deemed a threat, and potentially even political activists.

The science behind facial recognition is certainly exciting, and the tech could lead to a safer and more personal smart home, but facial recognition could easily result in a loss of privacy, unjust profiling and violations of personal rights. While the impact of facial recognition is still being determined and debated, it's important to recognize that facial recognition is no longer some distant concept reserved for science fiction. For better or worse, facial recognition is here now and spreading quickly. Check back throughout the month as CNET dives deeper into the implications of this developing technology.

Source
  21. A Chinese subway is experimenting with facial recognition to pay for fares

The fare would be automatically deducted from a linked payment method.

Scanning your face on a screen to get into the subway might not be that far off in the future. In China's tech capital, Shenzhen, a local subway operator is testing facial recognition subway access, powered by a 5G network, as spotted by the South China Morning Post. The trial is limited to a single station thus far, and it's not immediately clear how this will work for twins or lookalikes.

People entering the station can scan their faces on the screen where they would normally have tapped their phones or subway cards. Their fare then gets automatically deducted from their linked accounts. They will need to have registered their facial data beforehand and linked a payment method to their subway account.

There are some advantages to the system. For example, riders won't have to worry about forgetting their subway card or a low balance, but at the same time, it likely means that their every journey into the subway will be tracked down to the pixels of their faces. It's not clear if that's any more tracking than what's already being done. Many major Chinese cities have extensive surveillance camera systems that log citizens' faces, ages, genders, and how long they've been staying in the area.

The algorithms for the facial recognition tech were designed in a lab overseen by Shenzhen Metro and phone maker Huawei. Shenzhen Metro hasn't given a timeline for when facial recognition could reach all of its stations and subway lines. We've reached out to Huawei for comment.

PEOPLE COULD ALREADY BUY FRIED CHICKEN BY SCANNING THEIR FACES

Using facial recognition for payments isn't new, although using it on subways is. At KFC stores across China, people have been able to scan their face to buy fried chicken since 2017. China is ahead of the US when it comes to mobile payments, as nearly half of the country used their phones to make payments in 2018. Payments made through WeChat Pay or Alipay were so popular that China's central bank had to warn stores last year not to reject cash or face unspecified penalties. Still, on one occasion when I was in China last year, a shop could not accept cash because it didn't have enough bills in its coffers to make proper change.
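Mechanically, the flow described above -- match the face at the gate against pre-registered riders, then deduct the fare from the linked account -- is straightforward to sketch. Everything here (embeddings, thresholds, fares, accounts) is an invented stand-in, not Shenzhen Metro's system:

```python
FARE = 5.0         # flat fare in yuan (illustrative)
THRESHOLD = 0.90   # minimum similarity to accept a face match

# Riders who registered their facial data and linked a payment method.
registered = {
    "rider-001": {"embedding": [0.12, 0.87, 0.33], "balance": 50.0},
    "rider-002": {"embedding": [0.55, 0.10, 0.71], "balance": 3.0},
}

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def fare_gate(probe):
    """Find the best-matching registered rider and deduct the fare."""
    rider_id, best = max(
        ((rid, similarity(probe, r["embedding"])) for rid, r in registered.items()),
        key=lambda pair: pair[1],
    )
    if best < THRESHOLD:
        return "no match -- fall back to card or phone"
    account = registered[rider_id]
    if account["balance"] < FARE:
        return f"{rider_id}: insufficient balance"
    account["balance"] -= FARE
    return f"{rider_id}: fare deducted, balance {account['balance']:.2f}"

print(fare_gate([0.13, 0.86, 0.34]))  # close to rider-001 -> fare deducted
```

Note that the gate simply accepts the single best match above a threshold, which is exactly why twins and lookalikes are the awkward case the article flags.

Source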
  22. Samsung Galaxy S10 facial recognition fooled by a video of the phone owner

There's a reason why Samsung tells users to avoid relying on facial recognition screen locking on Galaxy S10 smartphones.

Experts have proven once again that facial recognition on modern devices remains hilariously insecure and can be bypassed using simple tricks such as showing an image or a video in front of a device's camera. The latest device to fall victim to such attacks is the Samsung Galaxy S10, Samsung's latest top-tier phone and considered one of the world's most advanced smartphones to date.

Unfortunately, the Galaxy S10's facial recognition feature remains just as weak as the one supported in previous versions or on competitors' devices, according to Lewis Hilsenteger, a smartphone reviewer better known as Unbox Therapy on YouTube. Hilsenteger showed in a demo video uploaded to his YouTube channel last week how putting up a video of the phone owner in front of the Galaxy S10's front camera would trick the facial recognition system into unlocking the device.

Similarly, an Italian journalist from SmartWorld.it also unlocked a Galaxy S10 device using nothing but a photo, which would be much easier for an attacker to obtain than a front-facing video of the device owner. However, this method didn't always yield the same result when others tried to replicate it -- unlike Hilsenteger's approach, which seemed to work almost every time.

Hearing that users have cracked the facial recognition screen lock feature in one of the world's top phones didn't trigger the same shock-and-awe reaction that it would have a few years back. That's because in the past few years, security researchers and regular users alike have bypassed the facial recognition feature on a plethora of devices. For example, users bypassed the facial recognition on a Samsung S8 using a photo; they bypassed Apple's Face ID feature on an iPhone X with a $150 mask; they broke into many top-tier Android phones using a 3D-printed head; and they used the same 3D-printed-head method to gain access to a Windows 10 device protected by the Windows Hello biometrics solution. In fact, the issue is quite widespread. A study by a Dutch non-profit last year found that investigators could bypass face-unlock-type features on 42 of the 110 smartphones they tested.

The issue with all these facial recognition systems implemented in current commercial products is that they don't perform any type of 3D depth scan of the tested face, but merely look at the position of the eyes, nose, or mouth to authorize a person and unlock a device -- hence the reason most of them can be bypassed by flashing photos or videos in front of their cameras.
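That last point is worth making concrete. In a toy sketch with invented numbers, a printed photo reproduces the enrolled face's 2D landmark geometry exactly, so a matcher that only checks 2D positions accepts it, while even a crude depth check would not:

```python
# Toy illustration (invented numbers): a flat photo preserves 2D landmark
# positions, so a 2D-only matcher accepts it. Depth does not survive printing.

enrolled_2d = [(110, 120), (190, 118), (150, 170)]  # eyes + nose tip, pixels
enrolled_depth = [48.0, 48.5, 41.0]                 # cm from camera: the nose sits closer

photo_2d = [(110, 120), (190, 118), (150, 170)]     # the photo reproduces this exactly
photo_depth = [50.0, 50.0, 50.0]                    # but a photo is flat

def matches_2d(a, b, tol=3.0):
    """Accept if every landmark is within `tol` pixels of the enrolled one."""
    return all(abs(ax - bx) <= tol and abs(ay - by) <= tol
               for (ax, ay), (bx, by) in zip(a, b))

def has_relief(depths, min_range=2.0):
    """Crude liveness check: a real face has centimeters of depth variation."""
    return max(depths) - min(depths) >= min_range

print(matches_2d(enrolled_2d, photo_2d))                              # True -> spoofed
print(has_relief(enrolled_depth))                                     # True -> a live face passes
print(matches_2d(enrolled_2d, photo_2d) and has_relief(photo_depth))  # False -> photo rejected
```

Source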
  23. It's a novel way of raising funds in Brexit Britain

Good to know police won't be abusing new technology

GOOD NEWS! THE MET POLICE'S controversial facial recognition trial has earned the public purse £90 it wouldn't have otherwise had. The bad news is that the way the money was earned should really make everyone stop and have a long hard think about where society is going.

This particular chilling anecdote comes from campaign group Big Brother Watch, and describes a man who saw the warning of automatic facial recognition cameras and took steps to avoid them. "He simply pulled up the top of his jumper over the bottom of his face, put his head down and walked past," explained Big Brother Watch director Silkie Carlo to The Independent. "There was nothing suspicious about him at all … you have the right to avoid [the cameras], you have the right to cover your face. I think he was exercising his rights."

Carlo explained that this was enough to trigger suspicions, and the man was followed and eventually accosted by officers who "pulled him over to one side" and demanded to see his ID, which he provided. It became heated, and the man told the officers to "piss off" - we think, anyway; the Independent has prissily censored the word, so it might be "pony." Probably not, though, as said words landed the man a £90 fine for swearing, as a public order offence. "He was really angry," Carlo added, although in the circumstances we think that's understandable.

The Metropolitan Police had previously put out a statement saying that "anyone who declines to be scanned will not necessarily be viewed as suspicious." It looks like the word "necessarily" is doing an awful lot of heavy lifting in that sentence.

Source
  24. If you’ve been thinking about trying your hand at social media’s 10 Year Challenge and are concerned about your privacy, you may want to take a moment to see why some are saying the trend may not be so harmless. Like many fads in the social realm, this one could come with some unintended consequences.

First, for those who are catching up on the 10-year craze: the challenge, otherwise known as #2009vs.2019, the #HowHardDidAgingHitYouChallenge and the #GloUpChallenge, involves posting two photos of yourself – one from 2009 and one from 2019. In lieu of that, 2008 and 2018, or some other decade or substantial length of time. On Facebook, people shared their first profile picture alongside their current picture. In all cases, the idea is to show how you’ve changed (or stayed the same, like Reese Witherspoon) over that period.

Celebrities ranging from Janet Jackson to Snooki, Kevin Smith, Fat Joe and Tyra Banks have taken up the challenge. Some, like Smith and Fat Joe, showed off a considerable slimdown, while others just had fun looking back 10 years. (Or 50, like Samuel L. Jackson.)

What could go wrong? “Y’all posting these #2009v2019 comparison photos and that’s how you get your identity stolen,” tweeted Desus Nice of the upcoming Showtime series “Desus vs. Mero” on Sunday.

Writer Kate O’Neill raised a more specific worry. “Imagine that you wanted to train a facial recognition algorithm on age-related characteristics, and, more specifically, on age progression (e.g. how people are likely to look as they get older),” she says. “Ideally, you’d want a broad and rigorous data set with lots of people’s pictures. It would help if you knew they were taken a fixed number of years apart – say, 10 years.”

It’s not that Facebook or Twitter or Instagram didn’t already have photos of you, she says. It’s just that they weren’t clearly organized in a specific, labeled progression, she explains. The date you posted a profile picture doesn’t necessarily mean that’s when it was taken. So with this trend, we are providing more detailed data by denoting when each photo was taken. “In other words, thanks to this meme, there’s now a very large data set of carefully curated photos of people from roughly 10 years ago and now,” O’Neill says.

If you’re OK with that, by all means, proceed with showing off your glo-up. But know this: “Age progression could someday factor into insurance assessment and healthcare,” O’Neill says, allowing the lighthearted trend a dystopian ending. “For example, if you seem to be aging faster than your cohorts, perhaps you’re not a very good insurance risk. You may pay more or be denied coverage.”

And law enforcement could use facial recognition technology to track people – she notes that Amazon sold its facial recognition services to police departments. But O’Neill also says the technology can be used to find missing children. Ultimately, every digital footprint comes with a wide host of implications for how that information can be used. Of course, it’s up to you to decide what photos and information you want to share, even if you’re just doing it for the “likes.”

Source