Showing results for tags 'google'.



Found 886 results

  1. Its quantum computer can solve tasks that are otherwise unsolvable, a report says. A new quantum computer from Google can reportedly do the impossible. Google has reportedly built a quantum computer more powerful than the world's top supercomputers. A Google research paper was temporarily posted online this week, the Financial Times reported Friday, and said the quantum computer's processor allowed a calculation to be performed in just over 3 minutes. That calculation would take 10,000 years on IBM's Summit, the world's most powerful commercial computer, Google reportedly said. Google researchers are throwing around the term "quantum supremacy" as a result, the FT said, because their computer can solve tasks that can't otherwise be solved. "To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor," the research paper reportedly said. Google declined to comment on the FT's report. The tech giant unveiled its 72-qubit quantum computer chip Bristlecone in March 2018, saying at the time that it was "cautiously optimistic that quantum supremacy can be achieved with Bristlecone." Quantum computing, which can simultaneously evaluate multiple possibilities, will likely be used for physics and chemistry simulations that aren't possible with classical computers, which can't simulate complex chemicals. Quantum computers could also help create new drugs and solar panels, help develop artificial intelligence and self-driving cars, and even manage investment portfolios. Earlier this week, IBM unveiled its 14th quantum computer, which has 53 qubits. It'll be available for quantum computing customers in October. AT&T also recently said it's working on quantum networking, or the technology to link quantum computers. Source
  2. Deep integration ASPIRING ROBO-DOCTOR Google has been given clearance to begin processing NHS data records, as it officially swallows its DeepMind AI lab. Some campaigners have suggested that this has reneged on a promise made by the health service not to allow any data into the hands of the US company. Remember this quote from 2016? "DeepMind operates autonomously from Google, and we've been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services." Google argues that by removing the containerisation of DeepMind, it will offer additional resources that will allow its initiatives to help patients worldwide. The flagship medical scheme 'Streams', aimed at kidney patients, has already been criticised for giving Google access to too much data without permission. Of the existing initiatives between DeepMind and NHS Trusts, only one, Yeovil District Hospital, has declined to transfer to Google Health, citing reasons of need, rather than concerns over privacy. At present, it's not clear who will monitor DeepMind's initiatives from outside Google, nor what the future holds for Health division boss and co-founder Mustafa Suleyman. He's already on 'long-term leave', suggesting that perhaps he was in some way opposed to the merger or trying to prevent it. He has hinted that his future role may be more as a consultant than an employee. Indeed, it's not even clear if DeepMind's headquarters will remain at the Kings Cross, London building it has been calling home. Although concerns over health records are at the forefront of discussion over the merger, it's worth remembering that DeepMind has its fingers in many other pies, including the legendary AlphaGo computer, which was trained to beat the world champion at the ancient board game 'Go'. The seeming inevitability of the move may leave some users concerned, but Google says it remains committed to the confidentiality of partner records. Source
  3. Google’s next flagship smartphone launches October 15 We'll see the Pixel 4 and probably a bunch of other Google hardware. Google has sent out invites for the latest "Made by Google" hardware event. On October 15, the company will officially launch the Pixel 4 and probably a slew of other hardware. The livestream is already registered on YouTube, with a launch time of 10am ET. The Pixel 4 is the worst-kept secret of the year. In addition to leaks from the usual suspects and Google's own public announcement of device features, any semblance of secrecy was killed last week when several Vietnamese YouTubers got hold of a Pixel 4 prototype and started posting full video reviews. Key features of the Pixel 4 include a 90Hz OLED display (just like the OnePlus 7 Pro), two rear cameras and a time-of-flight sensor (the first multi-rear camera setup for a Pixel phone), and a thick top bezel packed with sensors for things like face recognition (apparently the only supported form of biometrics) and air gestures. Air gestures—which require you to wave your hand above the phone screen to control it—have been tried on phones before, usually with poor results. For the Pixel 4 though, Google is using a "Soli" radar sensor that it developed in house, which will hopefully make the feature more useful. As usual, we're expecting two phones, this year packing 5.7-inch and 6.3-inch OLED displays. The specs are a bit behind the competition, with the Snapdragon 855 instead of the faster Snapdragon 855+, only 6GB of RAM instead of the 8GB of RAM you get from other phones in this price range, and batteries that seem on the small side. Google's Pixel phones have always been about software, though. With this release, you'll get Android 10 and probably the "Next gen" Google Assistant that was announced at Google I/O 2019 as "coming to new Pixel phones."
There's also the Pixel's top-shelf camera setup, which this year includes a focus on astrophotography. Alongside the yearly smartphone launch, Google is also in the habit of launching a plethora of other hardware. This year, the expectations are a second-gen Google Home Mini (probably rebranded as the "Google Nest Mini") with better sound and an aux jack, and maybe even a Google Pixelbook 2, codenamed "Atlas," which popped up at the FCC in July. With Wi-Fi 6 ready for devices, it's also time for a new Google Wi-Fi, and various Chromium commits have been hinting at a next-gen version of Google's mesh router. With Google's track record for leaks, we'll probably hear even more about these devices in the time between now and the launch event. We are already signed up for the event, so we'll bring you the latest on whatever Google announces on October 15. Source: Google’s next flagship smartphone launches October 15 (Ars Technica)
  4. China Dull The wheels are coming off. As are the petals. GOOGLE HAS confirmed that it won't be able to provide its full Android offering to Huawei for the upcoming launch of its Mate 30 handset range. Huawei has been fighting a ban imposed by the Trump White House earlier this year, which placed the company on its "Entity List" of companies with which US firms cannot trade without a special licence. As yet, no such licences have been issued. Although Huawei has been working on its own operating system, HarmonyOS, it's not flagship ready, meaning that the Mate 30 will carry the open-source version of Android. That means no Play Store, no Gmail, no Google Maps, no YouTube… no point amiright? At present, the ban is not in full force, to allow partners of Huawei and other affected OEMs to service products that were available to the public before the crackdown. That temporary respite ends in November, but because neither the Mate 30 nor the Mate X folding phone is currently available, Huawei will have to release those products under the restrictions. For Chinese users, that's no biggy. Google hasn't operated in mainland China since 2009 and local users are used to third-party app stores, though they do represent the vast majority of malware infections on Android. But for the European market, it's a disaster. Huawei can't say that "Nothing's changed" anymore. That slogan, which has promoted the P30 range, was true because the P30 scraped in under the wire of the ban. Of course, for regular Android-heads, this is all a bit melodramatic, because adding Google services to a device (a Fire tablet for example) takes less than 10 minutes. But Huawei's ability to market the successor to the handsets that have made it a household name has been seriously impeded. And as for the Mate X being a rival to the Samsung Galaxy Fold? Fergetabowdit. Huawei relaunched the P30 range at IFA earlier this month, adding new colours and Android 10, which won't be affected by the ban. Source
  5. A labor watchdog says Googlers should be able to discuss working conditions, the WSJ reports. Google has agreed to a proposed settlement with the National Labor Relations Board to remind employees they can freely discuss workplace issues, which follows a directive from the company ordering Googlers to "avoid controversies that are disruptive to the workplace." The NLRB was responding to formal complaints claiming Google punishes people who speak out on those matters and political issues, according to the Wall Street Journal. An NLRB director approved a settlement, according to the report. It should be enacted after an appeals period. As such, Google will remind workers they can be part of a union, and that they have the right to share details such as working conditions and wages with each other or reporters. Engadget has contacted Google and the NLRB for comment. One of the complaints was from engineer Kevin Cernekee, who claims Google fired him for discussing his supposedly unpopular right-leaning political views on internal message boards. Google says it dismissed him for misuse of company equipment. He asked to be reinstated with back pay, but that won't happen under the settlement. However, Google did agree to revoke Cernekee's final warning letter. It reportedly said he violated a section of the code of conduct requiring employees to respect each other, following remarks he made on the message boards. The second complainant was an anonymous Google employee. He claims the company punished him for posting critical comments about a Google executive on Facebook. Attorneys for both complainants have objected to the settlement and claimed they deserve a hearing, the WSJ reported. In August, Google updated its internal community guidelines to remind its employees they are responsible for their words and said they would be held to account for what they say. 
It urged them to steer clear of topics that make their colleagues feel as though they don't belong and to not discuss potentially disruptive "controversies." The NLRB directive comes at a time when regulators have Google firmly under the microscope. Last week, it reached a settlement with the Federal Trade Commission and the New York attorney general's office related to reported violations of child privacy rules. On Monday, it emerged that 50 state attorneys general have opened a joint antitrust investigation into the company. The Department of Justice is conducting a similar probe. Update 9/12 2:35PM ET: A Google spokesperson said the proposed settlement made "absolutely no mention of political activity," and, as part of the deal, it will post a notice reminding workers of their rights and clarifying that they're allowed to talk about workplace issues. The spokesperson also characterized the WSJ article as "another misleading piece" about Cernekee, and said its Community Guidelines updates are "completely unrelated." You can read the full statement below. "There has been some misreporting this morning about Google's workplace, sparked by WSJ publishing another misleading piece about Mr. Cernekee. To be clear, we have agreed to a proposed settlement with the NLRB of Mr Cernekee's complaint. Under that settlement, we have agreed to post a notice to our employees reminding them of their rights under the National Labor Relations Act. As a part of that notice, we will also remind employees of the changes we made to our workplace policies back in 2016 and 2017 that clarified those policies do not prevent employees from discussing workplace issues. There is absolutely no mention of political activity in the proposed settlement, and the updates we made to our Community Guidelines are completely unrelated and unaffected." 
Update 9/13/2019 11:30AM ET: Updated to clarify the scope of the settlement and that Google agreed to it, rather than the NLRB ordering it to post the notice. Source
  6. It's been a long road to get Google to pay its taxes. After a four-year investigation, Google has agreed to pay almost €1 billion ($1.10 billion) to French authorities because it did not fully declare its tax activities in the country, as reported by Reuters. The payment covers a €500 million fine and additional taxes of €465 million. Google's tax status in the European Union has always been contentious. It pays very little tax in most European countries despite doing business on the continent, because a loophole allows it to avoid taxes by essentially running a shell company in Ireland. This well-known loophole is called the Double Irish arrangement and has been described as the largest tax avoidance tool in history. "We have now settled tax and related disputes in France that have persisted for many years. The settlements comprise a €500 million payment that was ordered today by a French court, as well as €465 million in additional taxes that we had agreed to pay, and that have been substantially reflected in our prior financial results," Google said in a statement. "We continue to believe that the best way to provide a clear framework for companies that operate around the world is co-ordinated reform of the international tax system." French officials had originally hoped to claim €1.6 billion ($1.76 billion) from the search giant, far more than the £130 million (about $185 million) accepted by the UK for similar tax issues there. The French authorities raided Google's Paris headquarters in 2016 as part of their investigation, but eventually a French court found in Google's favor and said the company didn't have to pay the fine. That wasn't the end of the issue, though. Together with Germany, France pushed for stricter tax regulations over major tech companies including Google, Apple, Facebook and Amazon. With the latest settlement achieved with Google, other tech companies may face similar action in France too.
Google has had other legal troubles in France as well. Earlier this year it was fined €50 million (about $57 million) by the French National Commission on Informatics and Liberty (CNIL) for not complying with the EU's General Data Protection Regulation rules about data consent. Source
  7. Fifty state attorneys general have banded together to launch an investigation into whether Google has stifled competitors in a way that harms users. The probe, announced on Monday from the steps of the Supreme Court in Washington, DC, is being led by Texas Attorney General Ken Paxton and will first focus on the company's advertising business. "This is a company that dominates all aspects of advertising on the internet and searching on the internet, as they dominate the buyer side, the seller side, the auction side, and even the video side with YouTube," Paxton said of Google on Monday. "This investigation is not a lawsuit. It is an investigation to determine the facts. Right now we're looking at advertising, but the facts will lead to where the facts lead." The group comprises attorneys general from 48 states, Washington, DC, and Puerto Rico. Notably, Alabama and California — where Google's headquarters is — are the only states that have not thrown their support behind the investigation. Paxton said that the attorneys general had already requested information from Google and that while the investigation would begin by looking into Google's advertising business, the group would consider examining other facets of the company "if there are other facts that demonstrate that we need to go in another direction."
In 2019, Google is on pace to own over 31% of the worldwide digital-advertising market, according to eMarketer estimates. Karl Racine, the attorney general for the District of Columbia, said it was too early in the investigation to speculate about penalties should Google be found to be in violation of the law. Politicians including Sen. Elizabeth Warren, a 2020 Democratic presidential candidate, have called on Google to "unwind" by divesting itself of major acquisitions like its Waze map service, its Nest smart-home hardware company, and its DoubleClick advertising platform. The announcement from the state attorneys general followed reports on Friday that the Department of Justice had begun its own antitrust investigation into Google. The search giant said in a Securities and Exchange Commission filing released last week that the DOJ had requested information about its previous antitrust probes in the US and abroad. A Google representative declined to comment for this story and instead pointed to last week's blog post from Kent Walker, its senior vice president of global affairs, acknowledging that the DOJ had requested information from Google and that the company expected similar questions from state attorneys general. On Friday, The Wall Street Journal also confirmed that state AGs led by Letitia James of New York were planning a separate investigation into Facebook to evaluate its grip on competitors and whether it mishandled user data. Facebook acknowledged in July that it was under investigation by the Federal Trade Commission over antitrust concerns. Google faced a federal antitrust investigation in 2013 by the FTC regarding its search and smartphone business practices. Google walked away without incurring any financial penalties, committing itself only to vague promises to change some of its business practices. 
European regulators, on the other hand, have taken a sharper stance with Google, fining the tech giant roughly $10 billion in recent years for various anticompetitive practices involving its advertising, search, and mobile businesses. Source
  8. Huawei’s standoff with the U.S. has taken another turn, with the Chinese tech giant hinting for the first time that it has developed a blacklist workaround to save its fast-growth international smartphone business from losing access to Google’s software and services. A workaround such as this will likely be viewed dimly in Washington. As I reported two weeks ago, Google has confirmed that the imminent Mate 30 Series will ship without the officially licensed version of Android, alongside its broad set of software and services, giving Huawei a major sales risk to deal with. Huawei's Mate X foldable phone, expected shortly afterwards, will also ship without official Google software onboard. But now, as reported by Android Authority, the company’s consumer head Richard Yu has told the media at Germany’s IFA tech show that Huawei “might have a workaround on-hand.” At stake is the Play Store, along with default apps like Google Maps and Gmail. And underpinning all of that are Google’s software and services that differentiate the full-blown version of Android from its open-source alternative. The launch date for the flagship new Mate 30 Series device has been confirmed for September 19, in Munich, Germany. "Rethink possibilities" was the tagline for the launch. But the only "possibility" being debated is a Huawei smartphone without Google's Android software and services onboard. Huawei has been preparing for life under a full U.S. blacklist for some time. There is an exemption in place for the support and maintenance of existing devices, but that temporary reprieve expires in November and it doesn’t help new devices, such as the Mate 30 Series, in any case. And even though the Chinese tech giant has launched its own operating system, HarmonyOS, that is not going to fill the Google gap. Huawei execs have maintained all along that they want to stick with Google for as long as they can, in any way they can. And so it’s little surprise that workarounds are being explored.
According to the specialist Android website, Yu said Huawei has been investigating the ability to let Mate 30 owners install Google apps on the AOSP (non-Google) version of Android. Doing so would maintain access to the world of applications that Android customers outside China now take for granted. Yu claimed the process would be “quite easy” and that “the open-source nature of Android enables ‘a lot of possibilities’”, while hinting that third-party developers have been working on such workarounds for some time, given that “Huawei itself is unable to provide Google Mobile Services on new products due to the ban.” I asked Huawei for an official statement regarding Yu’s comments, to be told that the official word from the Consumer Business Group is "we can't comment on that." It has always been feasible for open-source Android users to download non-default apps, including those from Google, through alternative app stores or side-loading. But what is being hinted at here is some form of more “formal” way that this might work, without such security concerns for everyday users. This is a fine line for Huawei and Google to tread, working around the ban without breaching its restrictions. The devil, as they say, will be in the detail. Despite the latest analyst forecasts suggesting that losing Google could wipe as much as 30% from Huawei’s international shipments, Yu said that Huawei “can still consolidate its top two position” for global shipments, holding off Apple and with just Samsung to catch. And so September remains a critical month for Huawei. There will be increasing news flow from Shenzhen as the new launch date approaches and we find out exactly how the company plans to maintain its brand under its new reality. In the meantime, we wait to see if there is any response from Google or U.S. regulators to this news. Source
  9. Own a rifle? Got a scope to go with it? The government might soon know who you are, where you live and how to reach you. That’s because Apple and Google have been ordered by the U.S. government to hand over names, phone numbers and other identifying data of at least 10,000 users of a single gun scope app, Forbes has discovered. It's an unprecedented move: never before has a case been disclosed in which American investigators demanded personal data of users of a single app from Apple and Google. And never has an order been made public where the feds have asked the Silicon Valley giants for info on so many thousands of people in one go. According to a court order filed by the Department of Justice (DOJ) on 5 September, investigators want information on users of Obsidian 4, a tool used to control rifle scopes made by night vision specialist American Technologies Network Corp. The app allows gun owners to get a live stream, take video and calibrate their gun scope from an Android or iPhone device. According to the Google Play page for Obsidian 4, it has more than 10,000 downloads. Apple doesn't provide download numbers, so it's unclear how many iPhone owners have been swept up in this latest government data grab. If Apple and Google decide to hand over the information, it could include data on thousands of innocent people who have nothing to do with the crimes being investigated, privacy activists warned. Edin Omanovic, lead on Privacy International's State Surveillance programme, said the order would set a dangerous precedent and scoop up “huge amounts of innocent people’s personal data.” “Such orders need to be based on suspicion and be particularized - this is neither,” Omanovic added. Neither Apple nor Google had responded to a request for comment at the time of publication. ATN, the scope maker, also hadn't responded. Why the data grab?
The Immigration and Customs Enforcement (ICE) department is seeking information as part of a broad investigation into possible breaches of weapons export regulations. It's looking into illegal exports of ATN's scope, though the company itself isn't under investigation, according to the order. As part of that, investigators are looking for a quick way to find out where the app is in use, as that will likely indicate where the hardware has been shipped. ICE has repeatedly intercepted illegal shipments of the scope, which is controlled under the International Traffic in Arms Regulation (ITAR), according to the government court filing. They included shipments to Canada, the Netherlands and Hong Kong where the necessary licenses hadn't been obtained. "This pattern of unlawful, attempted exports of this rifle scope in combination with the manner in which the ATN Obsidian 4 application is paired with this scope manufactured by Company A supports the conclusion that the information requested herein will assist the government in identifying networks engaged in the unlawful export of this rifle scope through identifying end users located in countries to which export of this item is restricted," the government order reads. (The order was supposed to have been sealed, but Forbes obtained it before the document was hidden from public view). It's unclear just whom ICE is investigating. No public charges have been filed related to the company or resellers of its weapons tools. Reports online have claimed ATN scopes were being used by the Taliban. Apple and Google have been told to hand over not just the names of anyone who downloaded the scope app from August 1 2017 to the current date, but their telephone numbers and IP addresses too, which could be used to determine the location of the user. The government also wants to know when users were operating the app. 
Innocents ensnared The request is undeniably broad and would likely include all users of the app within America, not just users abroad who might indicate illegal shipments of the gun appendage. Tor Ekeland, a privacy-focused lawyer, said it amounted to a "fishing expedition." (The DOJ hadn’t responded to a request for comment at the time of publication). "The danger is the government will go on this fishing expedition and they'll see information unrelated to what they were looking for and go after someone for something else," Ekeland said. He said there's a long history of that kind of behavior from the U.S. government. And he warned that the government could apply this demand to other types of app, such as dating or health apps. "There's a more profound issue here with the government able to vacuum up a vast amount of data on people they have no reason to suspect have committed any crime. They don't have any probable cause to investigate but they're getting access to data on them," Ekeland added. Even those who've worked in government surveillance were stunned by the order. "The idea that this data will only be used for pursuing ITAR violations is almost laughable," warned Jake Williams, a former NSA analyst and now a cybersecurity consultant at Rendition Infosec. "Google and Apple should definitely fight these requests as they represent a very slippery slope. This type of bulk data grab is seriously concerning for a number of reasons, not the least of which is that the download of an application does not automatically imply the 'intended use' of the application. For instance, researchers often bulk download applications looking for interesting vulnerabilities." He said that if the request was granted it may also have a "serious chilling effect on how people use the Google and Android app stores."
"The idea that Google could be compelled to turn over, in secret, all of my identifiers and session data in its possession because I downloaded an application for research is such a broad overreach it's ridiculous." Source
  10. Tech investor John Borthwick doesn’t mince words when it comes to how he perceives smart speakers from the likes of Google and Amazon. The founder of venture capital firm Betaworks and former Time Warner and AOL executive believes the information-gathering performed by such devices is tantamount to surveillance. “I would say that there's two or three layers sort of problematic layers with these new smart speakers, smart earphones that are in market now,” Borthwick told Yahoo Finance Editor-in-Chief Andy Serwer during an interview for his series “Influencers.” “And so the first is, from a consumer standpoint, user standpoint, is that these, these devices are being used for what's — it's hard to call it anything but surveillance,” Borthwick said. The way forward? Some form of regulation that gives users more control over their own data. “I personally believe that you, as a user and as somebody who likes technology, who wants to use technology, that you should have far more rights about your data usage than we have today,” Borthwick said. Smart assistants face a reckoning The venture capitalist’s comments follow a string of controversies surrounding smart assistants including Google’s Assistant, Amazon’s Alexa, and Apple’s (AAPL) Siri, in which each company admitted that human workers listen to users’ queries as a means of improving their digital assistants’ voice recognition capabilities. “They've gone to those devices and they've said, ‘Give us data when people passively act upon the device.’ So in other words, I walk over to that light switch,” Borthwick said. “I turn it off, turn it on, it's now giving data back to the smart speaker.” It’s important to note that smart speakers from every major company are only activated when you use their appropriate wake word. To activate your Echo speaker, for instance, you need to say “Alexa” followed by your command. The same thing goes for Google’s Assistant and Apple’s Siri.
The uproar surrounding smart speakers and their assistants began when Bloomberg reported in April that Amazon used a global team of employees and contractors to listen to users’ voice commands to Alexa to improve its speech recognition. That was followed by a similar report by Belgian-based VRT News about Google employees listening to users’ voice commands for Google Assistant. The Guardian then published a third piece about Apple employees listening to users’ Siri commands. Facebook was also pulled into the controversy when Bloomberg reported it had employees listen to users’ voice commands made through its Messenger app. Google and Apple have since apologized, with Google halting the practice, and Apple announcing that it will automatically opt users out of voice sample collection. Users instead will have to opt in if they want to provide voice samples to improve Siri’s voice recognition. Amazon, for its part, has allowed users to opt out of having their voice samples shared with employees, while Facebook said it has paused the practice. The use of voice samples has helped improve the voice recognition features of digital assistants, ensuring they are better able to understand what users say and the context in which they say it. At issue is whether users were aware that employees of these companies were listening. There’s also the matter of accidental activations, which can result in employees hearing conversations or snippets of conversations they were never meant to hear. As for how such issues can be dealt with in the future, Borthwick points to some form of tech regulation. Though he doesn’t offer specifics, the VC says that users need to be able to understand how their data is being used, and be able to take control of it. “I think generally, it's about giving the users a lot more power over the decisions that are being made. I think that's one piece of it,” he said. Source
  11. Google will be the target of an antitrust investigation by a broad coalition of state attorneys general set to be announced as early as next week, The Washington Post reported. According to the Post, more than half of the nation’s state attorneys general will be participating in the Google investigation, though it’s unclear which states are involved. An investigation has been rumored for months amid the growing federal scrutiny that Silicon Valley is facing over potential antitrust violations. A small group of state law enforcers met with Makan Delrahim, head of the Department of Justice’s (DOJ) Antitrust Division, earlier this summer to discuss competition issues in Silicon Valley. "Google's services help people every day, create more choice for consumers, and support thousands of jobs and small businesses across the country,” a Google spokesman said in a statement. “We continue to work constructively with regulators, including attorneys general, in answering questions about our business and the dynamic technology sector." The DOJ is also exploring tech giants’ market power for potential antitrust issues, but it’s unclear if the department will be collaborating with the states on their investigation. The Senate Judiciary subcommittee on antitrust also announced on Tuesday that it would hold a hearing later this month on competition concerns surrounding acquisitions by tech giants. Source
  12. Google Wants to Help Tech Companies Know Less About You

By releasing its homegrown differential privacy tool, Google will make it easier for any company to boost its privacy bona fides.

As a data-driven advertising company, Google's business model hinges on knowing as much about its users as possible. But as the public has increasingly awakened to its privacy rights, this imperative has generated more friction. One protection Google has invested in is the field of data science known as "differential privacy," which strategically adds random noise to user information stored in databases so that companies can still analyze it without being able to single people out. And now the company is releasing a tool to help other developers achieve that same level of differential privacy defense.

Today Google is announcing a new set of open source differential privacy libraries that not only offer the equations and models needed to set boundaries and constraints on identifying data, but also include an interface to make it easier for more developers to actually implement the protections. The idea is to make it possible for companies to mine and analyze their database information without invasive identity profiles or tracking. The measures can also help mitigate the fallout of a data breach, because user data is stored with other confounding noise.

"It’s really all about data protection and about limiting the consequences of releasing data," says Bryant Gipson, an engineering manager at Google. "This way, companies can still get insights about data that are valuable and useful to everybody without doing something to harm those users."

Google currently uses differential privacy libraries to protect all different types of information, like location data, generated by its Google Fi mobile customers. 
And the techniques also crop up in features like the Google Maps meters that tell you how busy different businesses are throughout the day. Google intentionally built its differential privacy libraries to be flexible and applicable to as many database features and products as possible. Differential privacy is similar to cryptography in the sense that it's extremely complicated and difficult to do right. And as with encryption, experts strongly discourage developers from attempting to "roll your own" differential privacy scheme, or design one from scratch. Google hopes that its open source tool will be easy enough to use that it can be a one-stop shop for developers who might otherwise get themselves into trouble. "The underlying differential privacy noisemaking code is very, very general," says Lea Kissner, chief privacy officer of the workplace behavior startup Humu and Google’s former global lead of privacy technology. Kissner oversaw the differential privacy project until her departure in January. "The interface that’s put on the front of it is also quite general, but it’s specific to the use case of somebody making queries to a database. And that interface matters. If you want people to use it right you need to put an interface on it that is actually usable by actual human beings who don’t have a PhD in the area." (Which Kissner does.) Developers could use Google’s tools to protect all sorts of database queries. For example, with differential privacy in place, employees at a scooter share company could analyze drop-offs and pickups at different times without also specifically knowing who rode which scooter where. And differential privacy also has protections to keep aggregate data from revealing too much. Take average scooter ride length: even if one user’s data is added or removed, it won’t change the average ride number enough to blow that user’s mathematical cover. 
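The scooter-average protection described above can be sketched in a few lines of Python. This is a toy illustration of the Laplace mechanism that underlies tools like Google's, not code from the library itself; the dataset and the epsilon value are made up for the example.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing any one
    # person's record changes the true answer by at most 1, so noise
    # with scale 1/epsilon masks each individual's contribution.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# e.g. "how many rides were long?" -- the noisy answer is useful in
# aggregate but doesn't reveal whether any single rider is in the data.
rides = ["short", "long", "short", "short", "long"]
noisy_answer = private_count(rides, lambda r: r == "long", epsilon=0.5)
```

Smaller epsilon values mean more noise and stronger privacy; real libraries layer budget accounting, bounded per-user contributions, and careful floating-point handling on top of this basic idea.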
And differential privacy builds in many such protections to preserve larger conclusions about trends no matter how granular someone makes their database queries.

Part of the reason it's so difficult to roll your own differential privacy is that these tools, like encryption schemes, need to be vetted by as many people as possible to catch all the flaws and conceptual issues that could otherwise go unnoticed. Google's Gipson says this is why it was such a priority to make the tool open source; he hopes that academic and technical communities around the world will offer feedback and suggestions about improving Google's offering. Uber similarly released an open source differential privacy tool in 2017 in collaboration with researchers at the University of California, Berkeley and updated it in 2018. Apple, meanwhile, uses a proprietary differential privacy scheme. The company has been on the forefront of implementing the technology, but independent researchers have found that the approach may not offer the privacy guarantees Apple claims.

Google says that one novel thing its solution offers is that it doesn't assume any individual in a database is only associated with one record at most, the way most other schemes do. This is true in a census or medical records database, but often doesn't apply to a data set about people visiting particular locations or using their mobile phones in various places around the world. Everyone gets surveyed once for the census, but people often visit the same restaurant or use the same cell tower many times. So Google's tool allows for the possibility that a person can contribute multiple records to a database over time, a feature which helps to maintain privacy guarantees in a broader array of situations.

Along with the tool itself, Google is also offering a testing methodology that lets developers run audits of their differential privacy implementation and see if it is actually working as intended. 
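One way such an audit can work, sketched here as a simplified statistical check (an illustration of the general idea, not Google's actual methodology): run the mechanism many times on two databases that differ by a single record, and verify that no outcome is disproportionately more likely on one than the other.

```python
import math
import random

def noisy_count(db, epsilon: float) -> float:
    # Laplace mechanism for a count query (sensitivity 1).
    u = random.random() - 0.5
    noise = -math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)) / epsilon
    return len(db) + noise

def audit(epsilon: float, trials: int = 200_000) -> bool:
    # Neighboring databases: identical except for one extra record.
    db_a, db_b = [0] * 10, [0] * 11
    # Estimate how often the noisy answer lands in some test region.
    hits_a = sum(noisy_count(db_a, epsilon) < 10.5 for _ in range(trials))
    hits_b = sum(noisy_count(db_b, epsilon) < 10.5 for _ in range(trials))
    ratio = (hits_a + 1) / (hits_b + 1)  # smoothed empirical ratio
    # Differential privacy requires this ratio to stay below e^epsilon;
    # allow a little slack for sampling error.
    return ratio <= math.exp(epsilon) * 1.05

passed = audit(epsilon=0.5)
```

A real harness would sweep many test regions and database pairs; a mechanism that fails any such check is, as Gipson puts it, not differentially private.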
"From our perspective, the more people that are doing differential privacy, inside of Google or outside, the better," Google's Gipson says. "Getting this out into the broader world is the real value here, because even with lots of eyes on a thing you can still miss glaring security holes. And 99.9 percent differentially private is not differentially private." As with any technique, differential privacy isn't a panacea for all of big tech's security ailments. But given how many problems there are to fix, it's worth having as many researchers as possible chasing that last tenth of a percent. Source: Google Wants to Help Tech Companies Know Less About You
  13. Microsoft and Twitter were reportedly present as well. Both intelligence agencies and tech companies are gearing up to secure the 2020 US election, and that apparently includes some heart-to-heart conversations between the two. Bloomberg sources have learned that Facebook, Google, Microsoft and Twitter are meeting members of the FBI, Homeland Security and the Office of the Director of National Intelligence to discuss the industry's security strategy. This reportedly includes plans for tighter coordination between tech and government, as well as curbing disinformation campaigns. We've asked the companies in question for comment. Microsoft confirmed to Engadget that it "is participating in this meeting." In a statement, Twitter said it "always welcome the opportunity" to meet with government agencies and fellow companies to discuss securing the 2020 election, and said there was a "joint effort in response to a shared threat." The meeting shows that both sides want to coordinate on election security in a way they didn't in 2016. Tech firms have been more proactive this time around -- Facebook has been operating "war rooms" to monitor elections, for instance, while Google has instituted measures to protect high-risk hacking targets. The question, as always, is whether or not these measures will be enough. Security improvements didn't stop Russia and others from targeting the 2018 midterms, and it's doubtful they'll back off just because they face a more united opposition. Update 9/4 7:30PM ET: Facebook has also confirmed the meeting in a detailed response, outlining how companies and government bodies were finding ways to share data and coordinate responses. You can read the full statement below. 
"Today security teams from Facebook and a number of technology companies, including Google, Microsoft, and Twitter, met at Facebook's headquarters in Menlo Park, CA with representatives from the Federal Bureau of Investigation, the Office of the Director of National Intelligence, and the Department of Homeland Security. The purpose was to build on previous discussions and further strengthen strategic collaboration regarding the security of the 2020 U.S. state, federal, and presidential elections. "Participants discussed their respective work, explored potential threats, and identified further steps to improve planning and coordination. Specifically, attendees talked about how industry and government could improve how we share information and coordinate our response to better detect and deter threats. "For Facebook, we've developed a comprehensive strategy to close previous vulnerabilities, while analyzing and getting ahead of new threats. Our work focuses on continuing to build smarter tools, greater transparency, and stronger partnerships. "Improving election security and countering information operations are complex challenges that no organization can solve alone. Today's meeting builds on our continuing commitment to work with industry and government partners, as well as with civil society and security experts, to better understand emerging threats and prepare for future elections." Source
  14. The FTC will receive its biggest ever payment related to a COPPA case. Google will pay $170 million to settle charges from the Federal Trade Commission and the New York Attorney General that YouTube illegally collected data from kids who watch the video-streaming service. The company will shell out $34 million to New York and $136 million to the FTC, in what the agency says is the largest amount it's ever obtained in a Children's Online Privacy Protection Act case since the law took effect in 1998. The New York AG and the FTC accused Google and YouTube of collecting personal data from viewers of channels aimed at under-13s without informing parents or gaining their consent, including through the Kids app. YouTube allegedly made millions from using that data to display targeted ads to people watching those channels. In the complaint, officials said Google told Mattel YouTube was "today's leader in reaching children age 6-11 against top TV channels" and informed Hasbro it's the top "website regularly visited by kids." "YouTube touted its popularity with children to prospective corporate clients," FTC chairman Joe Simons said in a statement. "Yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids. There's no excuse for YouTube's violations of the law." Under the settlement, YouTube will stop collecting data from videos that are aimed at kids. It'll also have to require channel owners to flag videos that are intended for kids and provide annual COPPA compliance training to employees who deal with channel owners. In addition, YouTube and Google are prohibited from violating COPPA and they'll have to "provide notice about their data collection practices and obtain verifiable parental consent before collecting personal information from children." Reports last week suggested Google would pay up to $200 million to settle the accusations. 
The previous record payment the FTC received over COPPA violations was a $5.7 million settlement with TikTok earlier this year. The agency also reached a $5 billion settlement with Facebook this summer related to the Cambridge Analytica privacy scandal. Source
  15. Google today pushed the latest source code for Android 10, formerly Android Q and the successor to Android 9.0 Pie, to the Android Open Source Project (AOSP). Google also started rolling out the latest version of its mobile operating system today as an over-the-air update to Pixel phones. If you don’t have a Pixel phone, you won’t be getting Android 10 for a while (if at all). During the beta testing phase, Android Q was made available on the Asus ZenFone 5Z, Essential Phone, Huawei Mate 20 Pro, LG G8, Nokia 8.1, OnePlus 7 Pro, OnePlus 7, OnePlus 6T, Oppo Reno, Realme 3 Pro, Sony Xperia XZ3, Tecno Spark 3 Pro, Vivo X27, Vivo Nex S, Vivo Nex A, Xiaomi Mi 9, and Xiaomi Mi Mix 3 5G. Google says it is “working with a number of partners to launch or upgrade devices to Android 10 this year.” A spokesperson confirmed that this includes the list of devices that received Android Q betas.

Android Q was on a tight beta schedule. Last year, there were five developer previews (four betas). This year, Google had six betas in total. Google launched Android Q Beta 1 in March, Android Q Beta 2 in April, and Android Q Beta 3 in May at its I/O 2019 developers conference. Android Q Beta 4 arrived in June, Android Q Beta 5 in July, and Android Q Beta 6 in August.

Android 10 features

Most importantly, Android 10 brings features powered by on-device machine learning and supports new technologies like foldables and 5G. Google also promises faster app startup and “almost 50 new features and changes focused on privacy and security.” Here’s Google’s top 10 list for users:

1. Smart Reply now suggests actions without any copying and pasting required. It also works in messaging apps.
2. The new Dark Theme works on your entire phone or for specific apps. It’s easier on your eyes and on your phone battery.
3. A new gesture navigation introduces single swipes that let you go backwards, pull up the homescreen, and move between tasks.
4. Live Caption (coming later this fall) will automatically caption videos, podcasts, and audio messages across any app.
5. Choose to only share location data with apps while you’re using them. Reminders let you know when an app that you are not actively using is accessing your location.
6. Settings has a new dedicated Privacy section for controls like Web & App Activity and Ad Settings.
7. Google Play can send system updates with security and privacy fixes just like app updates (Project Mainline).
8. Greater control over where and when notifications will alert you. Mark notifications as Silent and they won’t make noise or appear on your lockscreen.
9. Focus mode lets you select the apps you find distracting and silence them until you say otherwise.
10. Family Link is now part of every device running Android 9 or 10, so parents can set digital ground rules for their children.

You can use different keyboards per profile, set app timers for websites, use gender-inclusive emoji, and stream audio to hearing aid devices. For developers, Android 10 brings new APIs, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1, a foldables emulator, biometrics improvements, and TLS 1.3. Source
  16. No word yet on when the fix will arrive, however. For the past few months, a method of spamming Google accounts via Calendar has made the rounds. As of the past couple of weeks, though, things have just gotten much worse. Today, Google has officially acknowledged the spam problem with Calendar and says that a fix will be coming. A very brief post on Google’s support forums confirms the company is aware of this spam issue that has been affecting users. The company says that it is “working diligently” to resolve the problem, but at the moment, there’s no timeline attached to that. For those unaware, Google Calendar spam takes advantage of a function that automatically imports events from Google’s Gmail. While users are accustomed to seeing and avoiding spam in email form, Calendar is a trusted application that makes it easy to catch people off-guard. Many of the spam “events” lately have been related to free iPhones and other products that simply try to get users to click on a link. It’s relatively easy to fix this problem in the interim, but clearly, Google Calendar needs an official fix for this spam issue. It’s a shame we don’t know when that’s coming, but hopefully, it’s sooner rather than later. We’re aware of the spam occurring in Calendar and are working diligently to resolve this issue. We’ll post updates to this thread as they become available. Learn how to report and remove spam. Thank you for your patience. Source
  17. The change is either going to annoy you in a big way, or you won't notice. Google is planning to carry out a major decluttering of the tab options that users are presented with when they right-click on a tab in the Chrome browser. And you're either going to hate these changes, or not notice them at all. Google has killed off four tab options in the latest Canary preview nightly build aimed at developers. Gone are the following options:

New tab
Close other tabs
Reopen closed window
Bookmark all tabs

In my experience the number of people who use these features is pretty low, with most people surprised that these options exist. That said, the last time Google floated the idea of culling these options, user outrage was high, and the search giant gave in to pressure. Not all features and changes in Canary builds make it to the final release, so Google may back down again. If you use these options, then my suggestion is that you start learning and using the keyboard shortcuts for the options you need. Source
  18. The unprecedented attack on Apple iPhones revealed by Google this week was broader than first thought. Multiple sources with knowledge of the situation said that Google’s own Android operating system and Microsoft Windows PCs were also targeted in a campaign that sought to infect the computers and smartphones of the Uighur ethnic group in China. That community has long been targeted by the Chinese government, in particular in the Xinjiang region, where surveillance is pervasive. Google’s and Microsoft’s operating systems were targeted via the same websites that launched the iPhone hacks, according to the sources, who spoke on the condition of anonymity. That Android and Windows were targeted is a sign that the hacks were part of a broad, two-year effort that went beyond Apple phones and infected many more than first suspected. One source suggested that the attacks were updated over time for different operating systems as the tech usage of the Uighur community changed. Android and Windows are still the most widely used operating systems in the world. They both remain hugely attractive targets for hackers, be they government-sponsored or criminal. Neither Microsoft nor Google had provided comment at the time of publication. It’s unclear if Google knew or disclosed that the sites were also targeting other operating systems. One source familiar with the hacks claimed Google had only seen iOS exploits being served from the sites. Apple has yet to offer any statement on the attacks and hadn’t provided comment on the latest developments. Google told Apple which sites had been targeted in February, according to one source close to Google, whose researchers revealed the attacks on August 29. But no one has yet named which specific Uighur-interest sites were used to launch malicious code on iPhones. It's unclear exactly what Android and Windows exploits were launched via the websites that were used to launch attacks on Apple's OS. 
In the case of the iOS hacks, the exploits placed malware on the phone and could spy on a massive amount of data. That included encrypted WhatsApp, iMessage and Telegram texts, as well as live location.

Sustained surveillance in Xinjiang

The attacks appear to form part of a mass surveillance operation taking place on Uighur civilians, who've faced various forms of persecution in Xinjiang. Surveillance cameras are scattered across the region and facial recognition is prevalent. "The Chinese government has been systematically targeting the Uighur population for surveillance and imprisonment for years," said Cooper Quintin, senior staff technologist at the Electronic Frontier Foundation. "These attacks likely have the goal of spying on the Uighur population in China, the Uyghur diaspora outside of China and people who sympathize with and might wish to help the Uighur in their struggle for independence." Quintin told Forbes this appeared to be a "high-risk, high-reward campaign" that was trying to scoop up as much intelligence on possible Uighur sympathizers as possible. One source told TechCrunch, which first reported the Uighur targeting, that it's likely even those who weren't part of the ethnic group were hit. Source
  19. Last week, Google announced a plan to “build a more private web.” The announcement post was, frankly, a mess. The company that tracks user behavior on over ⅔ of the web said that “Privacy is paramount to us, in everything we do.” Google not only doubled down on its commitment to targeted advertising, but also made the laughable claim that blocking third-party cookies -- by far the most common tracking technology on the Web, and Google’s tracking method of choice -- will hurt user privacy. By taking away the tools that make tracking easy, it contended, developers like Apple and Mozilla will force trackers to resort to “opaque techniques” like fingerprinting.

Of course, lost in that argument is the fact that the makers of Safari and Firefox have shown serious commitments to shutting down fingerprinting, and both browsers have made real progress in that direction. Furthermore, a key part of the Privacy Sandbox proposals is Chrome’s own (belated) plan to stop fingerprinting. But hidden behind the false equivalencies and privacy gaslighting are a set of real technical proposals. Some are genuinely good ideas. Others could be unmitigated privacy disasters. This post will look at the specific proposals under Google’s new “Privacy Sandbox” umbrella and talk about what they would mean for the future of the web.

The good: fewer CAPTCHAs, fighting fingerprints

Let’s start with the proposals that might actually help users. First up is the “Trust API.” This proposal is based on Privacy Pass, a privacy-preserving and frustration-reducing alternative to CAPTCHAs. Instead of having to fill out CAPTCHAs all over the web, with the Trust API, users will be able to fill out a CAPTCHA once and then use “trust tokens” to prove that they are human in the future. The tokens are anonymous and not linkable to one another, so they won’t help Google (or anyone else) track users. 
Since Google is the single largest CAPTCHA provider in the world, its adoption of the Trust API could be a big win for users with disabilities, users of Tor, and anyone else who hates clicking on grainy pictures of storefronts.

Google’s proposed “privacy budget” for fingerprinting is also exciting. Browser fingerprinting is the practice of gathering enough information about a specific browser instance to try to uniquely identify a user. Usually, this is accomplished by combining easily accessible information like the user agent string with data from powerful APIs like the HTML canvas. Since fingerprinting extracts identifying data from otherwise-useful APIs, it can be hard to stop without hamstringing legitimate web apps. As a workaround, Google proposes limiting the amount of data that websites can access through potentially sensitive APIs. Each website will have a “budget,” and if it goes over budget, the browser will cut off its access. Most websites won’t have any use for things like the HTML canvas, so they should be unaffected. Sites that need access to powerful APIs, like video chat services and online games, will be able to ask the user for permission to go “over budget.” The devil will be in the details, but the privacy budget is a promising framework for combating browser fingerprinting.

Unfortunately, that’s where the good stuff ends. The rest of Google’s proposals range from mediocre to downright dangerous.

The bad: Conversion measurement

Perhaps the most fleshed-out proposal in the Sandbox is the conversion measurement API. This is trying to tackle a problem as old as online ads: how can you know whether the people clicking on an ad ultimately buy the product it advertised? Currently, third-party cookies do most of the heavy lifting. A third-party advertiser serves an ad on behalf of a marketer and sets a cookie. 
On its own site, the marketer includes a snippet of code which causes the user’s browser to send the cookie set earlier back to the advertiser. The advertiser knows when the user sees an ad, and it knows when the same user later visits the marketer’s site and makes a purchase. In this way, advertisers can attribute ad impressions to page views and purchases that occur days or weeks later. Without third-party cookies, that attribution gets a little more complicated. Even if an advertiser can observe traffic around the web, without a way to link ad impressions to page views, it won’t know how effective its campaigns are. After Apple started cracking down on advertisers’ use of cookies with Intelligent Tracking Prevention (ITP), it also proposed a privacy-preserving ad attribution solution. Now, Google is proposing something similar. Basically, advertisers will be able to mark up their ads with metadata, including a destination URL, a reporting URL, and a field for extra “impression data” -- likely a unique ID. Whenever a user sees an ad, the browser will store its metadata in a global ad table. Then, if the user visits the destination URL in the future, the browser will fire off a request to the reporting URL to report that the ad was “converted.” In theory, this might not be so bad. The API should allow an advertiser to learn that someone saw its ad and then eventually landed on the page it was advertising; this can give raw numbers about the campaign’s effectiveness without individually-identifying information. The problem is the impression data. Apple’s proposal allows marketers to store just 6 bits of information in a “campaign ID,” that is, a number between 1 and 64. This is enough to differentiate between ads for different products, or between campaigns using different media. On the other hand, Google’s ID field can contain 64 bits of information -- a number between 1 and 18 quintillion. 
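The gap between the two proposals is easy to quantify with simple arithmetic; the impression volume below is a made-up figure for illustration.

```python
# Apple's proposal: a 6-bit campaign ID.
apple_id_space = 2 ** 6        # 64 possible values
# Google's proposal: a 64-bit impression-data field.
google_id_space = 2 ** 64      # about 1.8 * 10**19 possible values

# Suppose an advertiser serves 10 million impressions (made-up number).
impressions = 10_000_000

# With 64 IDs, each value is necessarily reused ~156,000 times, so a
# conversion report cannot single out one viewer.
reuse_apple = impressions // apple_id_space

# With 2**64 IDs, every impression can carry its own globally unique
# tag, letting a conversion be tied back to one specific ad view.
reuse_google = impressions / google_id_space  # ~5e-13: effectively never reused
```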
This will allow advertisers to attach a unique ID to each and every ad impression they serve, and, potentially, to connect ad conversions with individual users. If a user interacts with multiple ads from the same advertiser around the web, these IDs can help the advertiser build a profile of the user’s browsing habits.

The ugly: FLoC

Even worse is Google’s proposal for Federated Learning of Cohorts (or “FLoC”). Behind the scenes, FLoC is based on Google’s pretty neat federated learning technology. Basically, federated learning allows users to build their own, local machine learning models by sharing little bits of information at a time. This allows users to reap the benefits of machine learning without sharing all of their data at once. Federated learning systems can be configured to use secure multi-party computation and differential privacy in order to keep raw data verifiably private. The problem with FLoC isn’t the process, it’s the product.

FLoC would use Chrome users’ browsing history to do clustering. At a high level, it will study browsing patterns and generate groups of similar users, then assign each user to a group (called a “flock”). At the end of the process, each browser will receive a “flock name” which identifies it as a certain kind of web user. In Google’s proposal, users would then share their flock name, as an HTTP header, with everyone they interact with on the web.

This is, in a word, bad for privacy. A flock name would essentially be a behavioral credit score: a tattoo on your digital forehead that gives a succinct summary of who you are, what you like, where you go, what you buy, and with whom you associate. The flock names will likely be inscrutable to users, but could reveal incredibly sensitive information to third parties. Trackers will be able to use that information however they want, including to augment their own behind-the-scenes profiles of users. 
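The cohort-assignment step can be illustrated with a toy sketch. This is entirely hypothetical: Google has not published FLoC's actual algorithm, and the category names, centroids, and flock labels below are invented. Each browser reduces its history to a feature vector and reports only the nearest shared centroid.

```python
# Toy cohort assignment: each browser summarizes its local history as
# a vector of visit counts per site category, then reports only the
# nearest shared centroid ("flock name") -- never the raw history.
CATEGORIES = ["news", "shopping", "sports", "finance"]

# Pretend these centroids came out of a federated clustering run.
CENTROIDS = {
    "flock-a": [9.0, 1.0, 0.0, 0.0],   # heavy news readers
    "flock-b": [0.0, 8.0, 0.0, 2.0],   # shoppers
    "flock-c": [1.0, 0.0, 9.0, 0.0],   # sports fans
}

def assign_flock(history_vector):
    # Squared Euclidean distance to each centroid; report the closest.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name], history_vector))

# A browser whose user mostly reads news gets the "flock-a" label,
# and that single label is what every site it visits would see.
label = assign_flock([7.0, 2.0, 1.0, 0.0])
```

The privacy concern is visible even in the toy: the label leaks a summary of the raw history to every site that receives the header.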
Google says that the browser can choose to leave “sensitive” data from browsing history out of the learning process. But, as the company itself acknowledges, different data is sensitive to different people; a one-size-fits-all approach to privacy will leave many users at risk. Additionally, many sites currently choose to respect their users’ privacy by refraining from working with third-party trackers. FLoC would rob these websites of such a choice.

Furthermore, flock names will be more meaningful to those who are already capable of observing activity around the web. Companies with access to large tracking networks will be able to draw their own conclusions about the ways that users from a certain flock tend to behave. Discriminatory advertisers will be able to identify and filter out flocks which represent vulnerable populations. Predatory lenders will learn which flocks are most prone to financial hardship. FLoC is the opposite of privacy-preserving technology. Today, trackers follow you around the web, skulking in the digital shadows in order to guess at what kind of person you might be. In Google’s future, they will sit back, relax, and let your browser do the work for them.

The “ugh”: PIGIN

That brings us to PIGIN. While FLoC promises to match each user with a single, opaque group identifier, PIGIN would have each browser track a set of “interest groups” that it believes its user belongs to. Then, whenever the browser makes a request to an advertiser, it can send along a list of the user’s “interests” to enable better targeting. Google’s proposal devotes a lot of space to discussing the privacy risks of PIGIN. However, the protections it discusses fall woefully short. The authors propose using cryptography to ensure that there are at least 1,000 people in an interest group before disclosing a user’s membership in it, as well as limiting the maximum number of interests disclosed at a time to 5. 
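A back-of-the-envelope calculation shows how little protection those two safeguards provide in combination. The figures are assumptions chosen for illustration: roughly two billion web users, groups at the proposed 1,000-person floor, and approximately independent group memberships.

```python
# Assumed figures, for illustration only.
total_users = 2_000_000_000
group_size = 1_000          # proposed k-anonymity floor per group
groups_disclosed = 5        # proposed cap on interests per request

# If group memberships are roughly independent, the expected number of
# users sharing ALL five disclosed groups shrinks geometrically:
p_in_group = group_size / total_users
expected_overlap = total_users * p_in_group ** groups_disclosed

# expected_overlap lands astronomically below 1 -- on average, the
# combination of five "anonymous" groups points to a single person.
is_effectively_unique = expected_overlap < 1e-20
```

Real memberships are correlated rather than independent, which raises the overlap somewhat, but the anonymity set still collapses by orders of magnitude with each disclosed group.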
This limitation doesn’t hold up to much scrutiny: membership in 5 distinct groups, each of which contains just a few thousand people, will be more than enough to uniquely identify a huge portion of users on the web. Furthermore, malicious actors will be able to game the system in a number of ways, including to learn about users’ membership in sensitive categories. While the proposal gives a passing mention to using differential privacy, it doesn’t begin to describe how, specifically, that might alleviate the myriad privacy risks PIGIN raises.

Google touts PIGIN as a win for transparency and user control. This may be true to a limited extent. It would be nice to know what information advertisers use to target particular ads, and it would be useful to be able to opt out of specific “interest groups” one by one. But like FLoC, PIGIN does nothing to address the bad ways that online tracking currently works. Instead, it would provide trackers with a massive new stream of information they could use to build or augment their own user profiles. The ability to remove specific interests from your browser might be nice, but it won’t do anything to prevent every company that’s already collected it from storing, sharing, or selling that data. Furthermore, these features of PIGIN would likely become another “option” that most users don’t touch. Defaults matter. While Apple and Mozilla work to make their browsers private out of the box, Google continues to invent new privacy-invasive practices for users to opt out of.

It’s never about privacy

If the Privacy Sandbox won’t actually help users, why is Google proposing all these changes? Google can probably see which way the wind is blowing. Safari’s Intelligent Tracking Prevention and Firefox’s Enhanced Tracking Protection have severely curtailed third-party trackers’ access to data. Meanwhile, users and lawmakers continue to demand stronger privacy protections from Big Tech. 
While Chrome still dominates the browser market, Google might suspect that the days of unlimited access to third-party cookies are numbered. As a result, Google has apparently decided to defend its business model on two fronts.

First, it’s continuing to argue that third-party cookies are actually fine, and that companies like Apple and Mozilla who restrict trackers’ access to user data will end up harming user privacy. This argument is absurd. But unfortunately, as long as Chrome remains the most popular browser in the world, Google will be able to single-handedly dictate whether cookies remain a viable option for tracking most users.

At the same time, Google seems to be hedging its bets. The “Privacy Sandbox” proposals for conversion measurement, FLoC, and PIGIN are each aimed at replacing one of the existing ways that third-party cookies are used for targeted ads. Google is brainstorming ways to continue serving targeted ads in a post-third-party-cookie world. If cookies go the way of the pop-up ad, Google’s targeting business will continue as usual.

The Sandbox isn’t about your privacy. It’s about Google’s bottom line. At the end of the day, Google is an advertising company that happens to make a browser.

Source: EFF
  20. Let them eat 10

Let them eat... GOOGLE'S DESSERT-RELATED names for Android builds will continue as codenames for internal use, All About Android revealed during an interview with VP of Android engineering Dave Burke and software engineer Dan Sandler.

Google shocked the world last week with the announcement that future versions of the mobile operating system would be known by a number, not a name - with Android Q becoming Android 10. That, of course, raises the question - what would Android Q have been called?

We all knew it was a tall order to find something beginning with Q that would work - and the results demonstrate exactly why Google chose to break with convention. Popular suggestions in the run-up to release included 'Quality Street' (a selection box of chocolates from the UK) and 'Quince' (like an apple hit with a baseball bat). But Google's name for Android Q isn't either of those - it would have been "Queen Cake" to us, after being monikered "Quince Tart" internally.

Now, we're sure you're kind of looking blankly at the screen, so let's refer this one to Wiktionary: "A soft, muffin-sized cake, popular particularly in the 1700s, containing currants, mace and sometimes flavoured with orange or lemon marmalade or shredded coconut and chocolate toppings." Sounds nommy.

But all three of these suggestions demonstrate the reason for the change in the global marketplace - these names are either obscure or regional - meaning huge chunks of the world won't know what the heck they are. Funny. Never bothered them with 'Froyo'. That was at a time before us Limeys had much access to it.

Speaking of Lime - another fact which came to light in the interview with All About Android was that Android KitKat was known internally as Key Lime Pie before the tie-in deal was struck. The point is, although we'll all miss the dessert names, Q is not the only letter that would cause a 'mare.
Google did well to get this far, but it's time to knock it on the head, at least for us plebs. Source
  21. Google reportedly reached a settlement with the FTC today.

Google will allegedly pay between $150 million and $200 million to end the FTC investigation into whether YouTube violated a children's privacy law, Politico reported this afternoon. The FTC reportedly voted along party lines (3-2) to approve the settlement, which will now be reviewed by the Justice Department. The FTC approved the fine last month, but this is the first time the dollar amount has been reported. At the moment, details on the other terms of the settlement are unavailable.

Just last week, we learned that YouTube is allegedly planning to ban targeted ads on videos aimed at children. It's unclear if YouTube's decision to do so was related to the FTC settlement. The FTC launched the investigation after advocacy groups charged that YouTube violated the Children's Online Privacy Protection Act (COPPA) by collecting data from children under the age of 13. Those complaints allegedly date back as far as 2015.

This isn't the first time the FTC has come down on big tech companies for privacy violations. Earlier this summer, Facebook agreed to a $5 billion settlement. Until now, the FTC's largest fine for COPPA violations was the $5.7 million settlement it reached with TikTok in February.

Some have suggested that YouTube disable ads on all videos aimed at kids, and others have called for YouTube to move all of its children's content to a designated app. As companies like TikTok have found, it can be difficult to enforce age checks. It's still unclear how Google plans to remedy the issue.

Source
  22. Google, which has already paid security researchers over $15 million since launching its bug bounty program in 2010, today increased the scope of its Google Play Security Reward Program (GPSRP). Security researchers will now be rewarded for finding bugs across all apps in Google Play with 100 million or more installs. At the same time, the company launched the Developer Data Protection Reward Program (DDPRP) in collaboration with HackerOne. That program is for data abuses in Android apps, OAuth projects, and Chrome extensions.

Bug bounty programs are a great complement to existing internal security programs. They help motivate individuals and hacker groups to not only find flaws but disclose them properly, instead of using them maliciously or selling them to parties that will. Rewarding security researchers with bounties costs peanuts compared to paying for a serious security snafu. Today's updates come after Google increased rewards for hacking Chrome, Chrome OS, and Google Play last month.

Google Play Security Reward Program

GPSRP has paid out over $265,000 in bounties so far. Adding more popular apps makes them eligible for rewards even if their developers don't have their own vulnerability disclosure or bug bounty program. In these scenarios, the security researcher discloses identified vulnerabilities to Google, which in turn passes them on to the affected app developer. As a result, security researchers can help hundreds of organizations identify and fix vulnerabilities in their apps. If the developers already have their own programs, researchers can collect rewards from them and from Google.

This isn't a one-way street. Google also uses this vulnerability data to create automated checks that scan all Google Play apps for similar vulnerabilities. Affected app developers are notified via the Play Console. The App Security Improvement (ASI) program provides them with information on the vulnerability and how to fix it.
In February, Google revealed that ASI has helped over 300,000 developers fix over 1,000,000 apps on Google Play.

Developer Data Protection Reward Program

DDPRP is a new bug bounty program for identifying and mitigating data abuse issues in Android apps, OAuth projects, and Chrome extensions. The goal is to recognize security researchers who report apps that are violating Google Play, Google API, or Google Chrome Web Store Extensions program policies. If you can provide verifiable and unambiguous evidence of data abuse, you could get paid. In particular, Google is interested in situations “where user data is being used or sold unexpectedly, or repurposed in an illegitimate way without user consent.” Google didn’t provide a maximum reward amount, but said that “depending on impact, a single report could net as large as a $50,000 bounty.”

Android apps and Chrome extensions with data abuse will be removed from Google Play and the Chrome Web Store. If a developer is found to be abusing access to Gmail restricted scopes, their API access will be removed.

Source
  23. It may be the biggest attack against iPhone users yet.

In what may be one of the largest attacks against iPhone users ever, researchers at Google say they uncovered a series of hacked websites that were delivering attacks designed to hack iPhones. The websites delivered their malware indiscriminately, were visited thousands of times a week, and were operational for years, Google said.

"There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week," Ian Beer, from Google's Project Zero, wrote in a blog post published Thursday.

Some of the attacks made use of so-called zero day exploits. This is an exploit that takes advantage of a vulnerability that the impacted company, in this case Apple, is not aware of, hence it has had "zero days" to find a fix. Generally speaking, zero day attacks can be much more effective at successfully hacking phones or computers because the company does not know about the vulnerability and thus has not fixed it.

iPhone exploits are relatively expensive, and the iPhone is difficult to hack. The price for a full exploit chain of a fully up-to-date iPhone has reached at least $3 million. This includes various vulnerabilities for different parts of the iPhone operating system, including the browser, the kernel, and others to escape an application's sandbox, which is designed to keep code running only inside the part of the phone it is supposed to.

Beer writes that Google's Threat Analysis Group (TAG) was able to collect five distinct iPhone exploit chains based on 14 vulnerabilities. These exploit chains covered versions from iOS 10 up to the latest iteration of iOS 12. At least one of the chains was a zero day at the time of discovery, and Apple fixed the issues in February after Google warned them, Beer writes.
Once the attack has successfully exploited the iPhone, it can deploy malware onto the phone. In this case "the implant is primarily focused on stealing files and uploading live location data. The implant requests commands from a command and control server every 60 seconds," Beer writes.

The implant also has access to the user's keychain, which contains passwords, as well as the databases of various end-to-end encrypted messaging apps, such as Telegram, WhatsApp, and iMessage, Beer's post continues. End-to-end encryption can protect messages from being read if they're intercepted, but less so if a hacker has compromised the end device itself.

The implant does not have persistence, though; if a user reboots their iPhone, it will wipe the malware, Beer explains. But one infection can still, of course, deliver a treasure trove of sensitive information. "Given the breadth of information stolen, the attackers may nevertheless be able to maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device," Beer writes. The information is also transferred to the server unencrypted, the post adds.

Previously documented attacks have been more targeted in nature, typically by a text message sent to the target, along with a link to a malicious site, sometimes just for that target. This attack appears to be, or at least has the potential to be, broader in scope. "This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years," Beer added.

Apple did not immediately respond to a request for comment.

Update: This piece has been updated to include more information from Google's blog post.

Source
  24. Add another one to the Google Cemetery. Google has disclosed that it will shut down Google Hire, the job application tracking system it launched just two years ago.

Google built Hire in an effort to simplify the hiring process, with a workflow that integrated things like searching for applicants, scheduling interviews, and providing feedback about potential hires into Google's G Suite (Search/Gmail/Calendar/Docs, etc.). It was built mostly for small to medium-sized businesses, with a price that ranged from $200 to $400 a month depending on how many G Suite licenses you needed.

Hire came into existence after Google acquired Bebop — a company started by VMware founder Diane Greene — for a reported $380 million in 2015. Greene went on to act as the CEO of Google's Cloud division, but left the role in early 2019.

In an email to customers, Google says:

While Hire has been successful, we're focusing our resources on other products in the Google Cloud portfolio. We are deeply grateful to our customers, as well as the champions and advocates who have joined and supported us along the way.

On the upside: it's not getting the axe immediately. In fact, you can keep using it for over a full year; Google says it won't actually be shut down until September 1st of 2020. Just don't expect any new features to be added. Google also notes that it intends to stop taking payment for the product in the meantime, saying in a support FAQ that customers will see no additional charges for Google Hire after their next billing cycle.

Source
  25. Google defends tracking cookies—some experts aren’t buying it

Google: Banning tracking cookies "jeopardizes the future of the vibrant Web."

Google's Chrome team is feeling pressure from competitors over ad tracking. Apple has long offered industry-leading protection against tracking cookies, while Mozilla recently announced that Firefox will begin blocking tracking cookies by default. Microsoft has been experimenting with tracking protection features in Edge, too.

But Google has a problem: it makes most of its money selling ads. Adopting the same aggressive cookie-blocking techniques as its rivals could prevent Google's customers from targeting ads—potentially hurting Google's bottom line. So in a blog post last week, Google outlined an alternative privacy vision—one that restricts some forms of user tracking without blocking the use of tracking cookies any time soon.

"Blocking cookies without another way to deliver relevant ads significantly reduces publishers’ primary means of funding, which jeopardizes the future of the vibrant Web," Google's Justin Schuh writes. (Those publishers, of course, include Ars publisher Conde Nast. We use cookies to serve targeted ads because they generate more revenue to support our journalism.)

Google also warns that completely blocking tracking cookies will cause ad networks to resort to browser fingerprinting as an alternative means of tracking users. Under this technique, a site harvests many small pieces of data about a user's browser—browser version, fonts installed, extensions active, screen size, and so forth—to generate a "fingerprint" that uniquely identifies a particular device. So Google's proposal is to declare war on browser fingerprinting while only gradually restricting the use of cookies for ad targeting.

A privacy sandbox?

To prevent fingerprinting, Google says it's working on a new approach called a "privacy budget."
Under this approach, the browser would impose a hard cap on the amount of information any site could request from the browser that might reveal a user's identity. If a site exceeded the cap, the browser would either throw an error or return deliberately inaccurate or generic information.

But this is only a proposal, not a shipping feature. And it has some obvious challenges. Some API calls might return so much information that they could identify the user all on their own. If a site made one of these calls, the browser would need to warn the user and get explicit approval—which could be annoying for users. And there's a risk that a too-strict privacy budget could break some existing sites even if they're not engaging in user fingerprinting.

The privacy budget is one component of a larger framework Google calls a "privacy sandbox." The goal is to enable advertisers to serve more relevant ads without allowing them to track individual users:

We're exploring how to deliver ads to large groups of similar people without letting individually identifying data ever leave your browser — building on the Differential Privacy techniques we've been using in Chrome for nearly 5 years to collect anonymous telemetry information. New technologies like Federated Learning show that it's possible for your browser to avoid revealing that you are a member of a group that likes Beyoncé and sweater vests until it can be sure that group contains thousands of other people.

Some pro-privacy experts remain skeptical

Google's post was blasted by a pair of Princeton computer scientists who have long advocated for stricter browser privacy protections. They point out that Apple and Mozilla are also working to restrict browser fingerprinting. They argue that it's a non-sequitur to say that the risk of fingerprinting is a reason not to adopt robust restrictions on cookie-based tracking.
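Google has published no concrete specification for the privacy budget, but the cap-and-degrade mechanism it describes could be sketched roughly as follows. Everything here is an assumption for illustration: the surface names, the per-surface entropy costs, and the 12-bit cap are invented, not taken from any Chrome proposal:

```python
# Hypothetical sketch of a per-site "privacy budget": each potentially
# identifying browser surface is assigned an entropy cost in bits, and
# once a site's cumulative cost would exceed the cap, the browser stops
# returning real values and serves generic ones instead.
# All names and numbers are illustrative assumptions.

GENERIC = {"fonts": ["Arial"], "screen": "1920x1080", "user_agent": "generic"}

# Assumed identifying-bits cost per surface (illustrative values only).
ENTROPY_COST_BITS = {"fonts": 7.0, "screen": 4.8, "user_agent": 10.0}

class PrivacyBudget:
    def __init__(self, budget_bits=12.0):   # hypothetical per-site cap
        self.remaining = budget_bits

    def request(self, surface, real_value):
        """Return the real value while budget lasts, else a generic one."""
        cost = ENTROPY_COST_BITS[surface]
        if cost <= self.remaining:
            self.remaining -= cost
            return real_value
        return GENERIC[surface]   # budget exhausted: degrade the answer

site = PrivacyBudget()
print(site.request("fonts", ["Arial", "Comic Sans", "Zapfino"]))  # real list
print(site.request("screen", "1366x768"))                         # real value
print(site.request("user_agent", "Chrome/77.0 on Linux x86_64"))  # generic
```

The third call is refused the real value because the first two already spent 11.8 of the 12 assumed bits, which mirrors the challenge noted above: a single information-rich API call can blow through the budget, forcing the browser to choose between breaking the site and lying to it.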
The researchers disputed Google's claim that nuking tracking cookies would undermine the economic foundation of the online advertising industry. They point out that after the EU adopted the General Data Protection Regulation, the New York Times discontinued its use of tracking cookies in Europe. The Grey Lady shifted to using contextual and geographic ad targeting—and its ad revenue hasn't suffered as a result.

They also argued that Google is now endorsing ideas that the company dismissed as impractical earlier this decade. "Privacy preserving ad targeting has been an active research area for over a decade," the pair wrote. They argue that Google was dismissive of alternatives to cookie-based ad tracking during the Do Not Track debate. "We are glad that Google is now taking this direction more seriously, but a few belated think pieces aren't much progress."

Browser privacy has emerged as an important differentiator for Google's rivals in the browser market. Apple in particular has been running ads in recent months touting the privacy protections offered by the iPhone. These attacks put Google in a difficult position, because Google can't match its rivals' privacy protections without potentially hurting its own lucrative ad business.

Source: Google defends tracking cookies—some experts aren’t buying it (Ars Technica)