Search the Community

Showing results for tags 'google'.



Found 918 results

  1. Google’s coronavirus website finally launches alongside enhanced search results After lots of complicated drama, a simple website has arrived: Google’s COVID-19 information site, at google.com/covid19. One week ago, President Donald Trump held a press conference wherein he claimed Google would be building a screening website for the coronavirus that would direct people to testing sites. As we learned in the following days, that wasn’t true. Google’s sister company Verily did launch such a site, but only for the Bay Area, and it reportedly offered tests to only a very small number of people. Google, however, did say it would launch some sort of website, and after a small delay, it’s here. Alongside the website, and potentially more importantly, Google will start providing more enhanced information cards for people who search for terms related to the coronavirus. There will be information tabs for symptoms, prevention, global statistics, and locally relevant information. The website is at google.com/covid19. It does have useful resources, including a card that mimics the enhanced search results described above. Google’s post announcing the site says that you will be able to find “state-based information, safety and prevention tips, search trends related to COVID-19, and further resources for individuals, educators and businesses.” Google emphasizes that it’s pulling information from “authoritative” sources like the WHO and the CDC. It’s only available in English right now, but a Google spokesperson tells The Verge that Spanish language support is soon to follow. The site was also designed with accessibility in mind, including the larger fonts that Google usually uses. The website has videos in ASL, a global map showing confirmed cases by country, and plenty of information about Google’s other relief efforts — plus some feel-good YouTube videos. Reading through that description, however, you’ll notice that it doesn’t include what Trump originally claimed it would. The nearest thing to finding a test is a drop-down menu that provides links to local websites — for example, choosing California provides a link to the California Department of Public Health. Right now, the CDC has a “self-checker” chatbot that Microsoft helped build, but the WSJ quoted an executive from a healthcare provider who put it in realistic context: “It’s just something consumers need now to help with anxiety.” In other words, lots of big tech companies are making efforts to provide coronavirus-related support, but none of them are able to solve some of the biggest problems in the pandemic: access to testing and the impending crisis in our healthcare infrastructure. At some point in the future, Google may actually provide a questionnaire and information about local drive-thru testing locations. But a spokesperson says that the company won’t do so until there’s authoritative and trustworthy information on those sites. That could be a long time coming, unfortunately. Source: Google’s coronavirus website finally launches alongside enhanced search results (The Verge)
  2. Google: "Due to adjusted work schedules at this time, we are pausing upcoming Chrome and Chrome OS releases." Google said today it is pausing upcoming Chrome and Chrome OS releases due to the ongoing coronavirus (COVID-19) outbreak. The company cited "adjusted work schedules" as the primary reason for the delay, as most of its engineers are now working from home. The company published an official statement today after ZDNet reached out for comment last night, when Google failed to release Chrome v81. YouTube videos, tweets, and blog posts announcing the new Chrome release were posted online yesterday -- most likely scheduled days or weeks in advance. However, the actual Chrome v81 release never made it to users' devices, and the same videos, tweets, and blog posts were removed shortly after Google's PR team realized the mistake. While investigating why the Chrome v81 release was pulled last night, several Google employees told this reporter the v81 release had been postponed due to the coronavirus outbreak and concerns about engineer availability in case of errors or other issues during the rollout. Following the inquiry, Google made a formal announcement today regarding the confusion surrounding the abandoned Chrome 81 release and its future releases. For the moment, Google plans to release all the Chrome 81 security updates as a minor Chrome 80 release, and to pause any other major stable rollouts while it waits for things to return to normal. The decision is understandable, as Chrome is one of the most widely used software applications in the world, and even the slightest error can cause problems for thousands of users and organizations. Take this incident from November 2019 as an example of how even the smallest Chrome change can cause unimaginable havoc. Google's official statement on the matter is below, in full: "Due to adjusted work schedules at this time, we are pausing upcoming Chrome and Chrome OS releases. Our primary objectives are to ensure they continue to be stable, secure, and work reliably for anyone who depends on them. We'll continue to prioritize any updates related to security, which will be included in Chrome 80. Please, follow this blog for updates." Google Chrome 81 was initially scheduled to be released yesterday, on March 17. The release was supposed to add improved support for WebXR (Chrome's augmented reality feature), deprecate the TLS 1.0 and TLS 1.1 encryption protocols, and add initial support for the Web NFC standard. Source
  3. Google tells employees to work from home to prevent coronavirus spread Google wants all North American employees to work remotely through April 10. The threat of the new coronavirus is making working from home a more and more popular option for tech companies, and yesterday Google expanded its work-from-home recommendation to all North American employees. In a memo obtained by CNN, Google's vice president of global security, Chris Rackow, said, "Out of an abundance of caution, and for the protection of Alphabet and the broader community, we now recommend you work from home if your role allows." For now, Google's work-from-home recommendation extends through April 10, with the company saying it is "carefully monitoring the situation and will update the timeline as necessary." Alphabet, Google's parent company, employs around 120,000 people, and as a US-based company, the majority of those employees are based in North America. The new coronavirus has led to the cancellation of most of this year's large trade show gatherings. Mobile World Congress, which was scheduled for February, was canceled at the last minute. Google killed Google I/O 2020 just last week, Facebook shut down F8, and E3 was canceled yesterday. Big gatherings present a higher risk for spreading the virus, and along the same lines of thinking, going to work at your big tech campus is also a vector for infection. In a blog post yesterday, Google said it is "establishing a COVID-19 fund that will enable all our temporary staff and vendors, globally, to take paid sick leave if they have potential symptoms of COVID-19, or can’t come into work because they’re quarantined. Working with our partners, this fund will mean that members of our extended workforce will be compensated for their normal working hours if they can’t come into work for these reasons." Source: Google tells employees to work from home to prevent coronavirus spread (Ars Technica)
  4. Google removes banner dissuading Edge users from running Chrome extensions Microsoft announced back in December of 2018 that it was building a Chromium-based Edge browser, which then became generally available in January 2020. An advantage of using Chromium is the ability to run Chrome extensions. However, Google had been showing Edge users a somewhat dissuasive banner recommending that they switch to Chrome for extensions to run “securely”. It looks like, following backlash from users and tech journalists, Google has decided to remove the banner (spotted first by Techdows). It is not clear when this change was made. The Chrome Web Store on Edge now shows a banner from the Redmond giant itself that reads “You can now add extensions from the Chrome Web Store to Microsoft Edge – Click on Add to Chrome”. This is a welcome change from Google, since the prompt asking users to run Chrome to use extensions securely was misleading; any security issues with extensions are likely to affect either browser. Interestingly, even Microsoft has begun using more subtle verbiage on Edge when users head to the Chrome Web Store for the first time. The message asks users to “Allow extensions from other stores” to be able to run Chrome extensions. This contrasts with some earlier messages which implied that running “unverified” extensions from other stores might affect performance. With the two companies working together to contribute to Chromium and bring features from each other’s offerings, refraining from petty tactics to dissuade users from using competing products seems like the right thing to do. Source: Google removes banner dissuading Edge users from running Chrome extensions (Neowin)
  5. Federal Court ordered Google to reveal the identity of someone who wrote a negative review. On Thursday, the Federal Court ordered Google to reveal the identity of someone who left a negative review about a teeth whitening practice, the ABC reported. Melbourne dentist Matthew Kabbabe, who runs the teeth whitening service Asprodontics, called for Google to reveal the identity of a person who left a negative review of his business so he could take legal action. Kabbabe told the ABC that the negative review from a user with the name “CBsm 23” – the only negative review at the time amid five-star ratings – was put up on Google three months ago and affected both his life and business. Kabbabe’s lawyer Mark Stanarevic said in the report he believes Google “has a duty of care” to businesses for allowing these reviews. “A bad review can shut down a business these days because most people live and breathe online,” Stanarevic said. Google was ordered to hand over information that identified “CBsm 23”, including phone numbers, names, location metadata and IP addresses. Rob Nicholls, Associate Professor at the UNSW Business School, told Business Insider Australia, “The dentist had no way of being able to serve court papers on that person directly because they were shielded by Google. So the court said to Google, you have to get rid of that shield so that the normal process can continue.” At the same time, the dentist claimed the reviewer hadn’t actually been to the business. “If that reviewer had been to their practice, they wouldn’t need to have called on Google because they’d actually have their names and addresses,” Nicholls added. Nicholls believes in practice it’s “not such a big threat” for companies like Google, because getting a court to agree that the action of the reviewer has caused such harm as to give rise to a case for defamation is “not likely to happen often”. What it does mean, Nicholls said, is that Google and other companies might have to think much harder about their business practices in relation to reviews. “Potentially Google and others might have to think a bit harder about what reviews they allow to be published if they can see that on their face they look as if they’re defamatory,” he said. But that wouldn’t stop a bad review, he added. While Nicholls thinks Google will have to identify the reviewer in this case, he said if he was advising Google he would opt to appeal the decision so as not to give out the information. That way Google wouldn’t need to change its business model if it was successful. “Otherwise Google and all publishers of reviews will have to think about how do they manage potentially defamatory reviews,” Nicholls said. He explained that it could add an extra business process step for these companies “under which an AI system looks to see if a review is essentially defamatory and won’t publish that immediately until it’s been reviewed or simply doesn’t publish it.” Will this impact privacy? When asked whether this situation has any implications on privacy, Nicholls didn’t think so, especially with companies like Google and Facebook already knowing who people are. “The reality is, Google knows the name of the reviewer,” he said.
“In effect, from an individual’s perspective, you’ve given up… some of the privacy by agreeing to Google and Facebook’s standard of submission.” Google told Business Insider in an email that it takes court orders seriously but does not comment on ongoing legal matters. Source
  6. Amazon, Microsoft, and IBM are under pressure to follow Google and drop gender labels like 'man' and 'woman' from their AI
    • Google's API no longer uses gendered labels for photos.
    • Microsoft, IBM, and Amazon are under pressure to stop using gender labels such as "man" or "woman" for their facial recognition and AI services.
    • Google announced its AI tool would stop adding gender classification tags in mid-February, instead tagging images of people with neutral terms such as "person."
    • Joy Buolamwini, a researcher who found AI tools misclassified people's gender, told Business Insider: "There is a choice... I would encourage all companies to reexamine the identity labels they are using as demographic markers."
    Microsoft, Amazon, and IBM are under pressure to stop automatically applying gendered labels such as "man" or "woman" to images of people, after Google announced in February it would stop using such tags. All four companies offer powerful artificial intelligence tools that can classify objects and people in an image. The tools can variously describe famous landmarks, facial expressions, logos and gender, and have many applications including content moderation, scientific research, and identity verification. Google said it would drop gender labels from its Cloud Vision API image classification service last week, saying that it wasn't possible to infer someone's gender by appearance and that such labels could exacerbate bias. Now the AI researchers who helped bring about the change say Amazon's Rekognition, IBM's Watson, and Microsoft's Azure facial recognition should follow suit. Joy Buolamwini, a computer scientist at MIT and an expert in AI bias, told Business Insider: "Google's move sends a message that design choices can be changed. With technology it is easy to think some things cannot be changed or are inevitable. This isn't necessarily true." Microsoft's AI continues to classify people in images by binary gender. Source
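For reference, the change described above concerns the labels returned by the Cloud Vision API. Below is a minimal Python sketch of a label-detection call, assuming the google-cloud-vision client library and default credentials are configured; the filename is only a placeholder. Under the new behaviour, a person in the photo surfaces under a neutral label such as "Person" rather than a gendered tag.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Read a local image; "photo.jpg" is a placeholder for this sketch.
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Request label annotations; people now come back under neutral labels like "Person".
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```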
  7. A network of malicious Chrome extensions was injecting malicious ads in millions of Chrome installs. Google has removed more than 500 malicious Chrome extensions from its official Web Store following a two-month-long investigation conducted by security researcher Jamila Kaya and Cisco's Duo Security team. The removed extensions operated by injecting malicious ads (malvertising) inside users' browsing sessions. The malicious code injected by the extensions activated under certain conditions and redirected users to specific sites. In some cases, the destination would be an affiliate link on legitimate sites like Macys, Dell, or BestBuy; but in other instances, the destination link would be something malicious, such as a malware download site or a phishing page. According to a report published today and shared with ZDNet, the extensions were part of a larger malware operation that's been active for at least two years. The research team also believes the group who orchestrated this operation might have been active since the early 2010s. Millions of users believed to be impacted Responsible for unearthing this operation is Kaya. The researcher told ZDNet in an interview that she discovered the malicious extensions during routine threat hunting when she noticed visits to malicious sites that had a common URL pattern. Leveraging CRXcavator, a service for analyzing Chrome extensions, Kaya discovered an initial cluster of extensions that ran on top of a nearly identical codebase but used various generic names, with little information about their true purpose. "Individually, I identified more than a dozen extensions that shared a pattern," Kaya told us. "Upon contacting Duo, we were able to quickly fingerprint them using CRXcavator's database and discover the entire network." According to Duo, this first series of extensions had a total install count of more than 1.7 million Chrome users. "We subsequently reached out to Google with our findings, who were receptive and collaborative in eliminating the extensions," Kaya told ZDNet. After its own investigation, Google found even more extensions that fit the same pattern and banned more than 500 extensions in total. It is unclear how many users had installed the 500+ malicious extensions, but the number is more than likely in the millions. Extensions disabled in users' Chrome installs Networks of malicious Chrome extensions have been unearthed in the past. Typically, these extensions engage in injecting legitimate ads inside a user's browsing session, with the extension operators earning revenue from showing ads. In all cases, the extensions try to be as non-intrusive as possible, so as not to alert users of a possible infection. What stood out about this scheme was the use of "redirects" that often hijacked users away from their intended web destinations in a noisy and abrasive manner that was hard to ignore. However, in the current state of the internet, where many websites use similar advertising schemes with aggressive ads and redirects, many users didn't even bat an eye. "While the redirects were incredibly noisy from the network side, no interviewed users reported too obtrusive of redirects," Kaya told ZDNet. A list of extension IDs that were part of this scheme is included in the Duo report.
When Google banned the extensions from the official Web Store, it also deactivated them inside every user's browser, while also marking the extension as "malicious" so users would know to remove it and not reactivate it. Source
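As an aside, the fingerprinting idea described in the article (spotting nearly identical codebases published under different extension names) can be sketched roughly as follows. This is a generic, hypothetical illustration, not Duo's or CRXcavator's actual tooling, and it assumes the extensions have already been unpacked into sub-directories of unpacked_extensions/.

```python
import hashlib
from itertools import combinations
from pathlib import Path

def file_hashes(ext_dir):
    """Return the set of SHA-256 hashes of every file in an unpacked extension."""
    return {
        hashlib.sha256(p.read_bytes()).hexdigest()
        for p in ext_dir.rglob("*") if p.is_file()
    }

def shared_ratio(a, b):
    """Jaccard similarity between two extensions' file-hash sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# One sub-directory per unpacked .crx (hypothetical layout for this sketch).
extensions = {
    d.name: file_hashes(d)
    for d in Path("unpacked_extensions").iterdir() if d.is_dir()
}

# Flag pairs that are mostly the same code published under different names.
for (name_a, hashes_a), (name_b, hashes_b) in combinations(extensions.items(), 2):
    ratio = shared_ratio(hashes_a, hashes_b)
    if ratio > 0.8:
        print(f"{name_a} and {name_b} share {ratio:.0%} of their files")
```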
  8. It’s easy if you try Fifteen years ago this month, one of the most important web domains in history was registered: youtube.com. Today’s teenagers have never known an internet that couldn’t host as much video as they want for free, server costs be damned. YouTube has helped elect politicians, create entire industries, and taught millions of people how to use eyeliner. It’s not a stretch to say it shaped the internet as we know it. But what if YouTube had failed? Would we have missed out on decades of cultural phenomena and innovative ideas? Would we have avoided a wave of dystopian propaganda and misinformation? Or would the internet have simply spiraled into new — yet strangely familiar — shapes, with their own joys and disasters? Here’s one idea of what it might have looked like, tracing the line from why YouTube might have failed to what the world would have looked like without it. It’s far from the only option — but if you’re struggling to imagine a world without YouTube, it may not be as hard as you think. This is a creative work of fiction. Any references to real-life companies, persons, or historical events have been fictionalized for the purposes of furthering this narrative story. Other names, characters, places, companies, and events are imagined, and any resemblance to actual companies, events, or persons, living or dead, is entirely coincidental. 2005–2006: The False Start A video platform fights copyright law (and copyright law wins) It’s 2005, and three guys named Steve Chen, Jawed Karim, and Chad Hurley have just launched a dating website called YouTube. While nobody accepts YouTube’s invitation to “Tune In, Hook Up,” people do love sharing pop culture clips and little videos about their lives. By 2006, YouTube’s viewership has exploded, but reporters raise ominous questions about its financial strategy and legal risks. NPR, for example, declares that “YouTube does for video what Napster did for audio” — and that, like Napster, its days might be numbered. YouTube discusses an acquisition offer with Google, Microsoft, and Oracle, but all three deals fall through, and growing server costs threaten to eat through the company’s funding. YouTube has its first viral hit in early 2006 with a bootleg upload of SNL’s “Lazy Sunday” (also known as the “Narnia rap”). Faced with an obvious copyright violation, NBC must decide whether to sign an ad deal with YouTube or try to destroy it. The network chooses the path of war, filing aggressive legal requests and rushing the launch of Hulu, which is soon available through popular websites like Microsoft’s MSN portal and News Corp’s social network Myspace. With Hulu established as a legitimate content source, networks view YouTube as a piracy vector for valuable movie and TV clips at a time when the music industry and internet service providers are aggressively pursuing copyright infringers. Companies file lawsuits against YouTube instead of signing deals, and a flood of legal challenges from content-holders threatens to damage the platform’s safe harbor status under the Digital Millennium Copyright Act. Without YouTube, Google focuses on its existing Google Video service. It shifts focus to expanding a recently acquired stake in AOL, reviving plans for a joint venture with Comcast. Focusing on search and advertising services for other web portals, it’s largely seen as a web software and infrastructure company. Facing high bandwidth costs and no revenue stream, YouTube declares bankruptcy. 
Apple quietly hires most of its talent, assigning them to an iPhone video chat system codenamed 2006–2007: The Power Vacuum Old media giants meet the influencer economy As YouTube descends into bankruptcy, media companies start buying up lower-profile video sites. Instead of letting anyone immediately post a video, these companies implement a review process and focus on nurturing stables of internet stars often poached from YouTube — including a teenage singer named Justin Bieber. The resulting services often look more like the Sony-acquired platform Grouper than the chaos of YouTube. Some leverage user-generated content into new business models, particularly NBCUniversal, which acquires a life-streaming platform called Justin.tv in 2007. Results are mixed. Sponsorship deals with “lifecasters” offer 24/7 exposure for brands but create an ongoing trickle of PR gaffes, including a Law & Order ad campaign that derails when viewers provoke a police raid on the broadcaster’s apartment. (The incident is dramatized three months later in a Law & Order episode.) Similarly, a licensing process for cosplay live streams earns criticism from fans who object to a prudish dress code and sweeping contract agreement. The incident fuels a broader discussion of the relationship between fandom and corporate media, alienating many potential streamers. NBCUniversal nudges the platform toward semi-curated reality and talent show formats. 2008–2011: The Divergence Peer-to-peer video turns the internet upside-down Hosting a giant streaming repository is expensive and legally risky. But there’s a free alternative: peer-to-peer sharing. Without YouTube, decentralized streaming services are developed and popularized earlier. What these systems lack in user-friendliness, they recover in anarchic fun (and a fair amount of pirated content, especially when The Pirate Bay builds a YouTube-style landing page for discovering original videos). Their distributed design makes videos easy to create and difficult to fully erase, and dedicated local networks also spring up on college campuses and high schools. As Apple’s recently released iPhone grows in popularity, the company launches FaceTime: a video calling service that supports both one-on-one chats and small-scale broadcasting. It promotes the feature with a series of heartwarming ads, including an estranged family that reconnects over a shared viewing of a high-school musical. Somewhat unpredictably, the appetite for group broadcasting drives performers and audiences to hold events in massively multiplayer games and virtual worlds, particularly Second Life, which is acquired by Microsoft in 2010. Major telecoms respond by attacking peer-to-peer systems at the network level. Some internet service providers block peer-to-peer streaming in a violation of fledgling net neutrality rules, setting up a conflict between ISPs and the Federal Communications Commission. These services find an unlikely ally in Apple, whose own FaceTime app runs into similar problems. And rampant copyright infringement alarms Hollywood and record labels, which begin lobbying Congress for stricter intellectual property laws. 2011–2012: The Crackdown Congress takes down the video underground By 2011, legitimate online video services have seen moderate success. Their submission review process, powered by a combination of automated tools and human moderators, drastically slows the posting of videos.
But it heads off some serious problems, quickly stemming the growth of child abuse material and disturbing videos aimed at kids. Small-scale group broadcasting has also taken off. Public figures regularly use Apple’s group broadcasting options to host intimate discussions — including a variety of streaming stars and noted iPhone fan President Barack Obama who kicks off a virtual tour of American classrooms using FaceTime. Microsoft integrates Skype support into Second Life, letting webcam users “dial in” to virtual book readings and other live events. These systems create an expectation of intimacy and personalization as well as a certain level of privacy from outside eyes. By contrast, decentralized streaming is a free-for-all. Its openness creates a wellspring of creativity, but also persistent problems with harassment and quasi-ironic bigotry. One peer-to-peer streaming subculture is devoted almost entirely to “griefing” mainstream video sites and virtual worlds — clogging submission queues with nonsensical meme videos, launching raids on Second Life, and running elaborate hoaxes to trick celebrities into personal FaceTime and Skype broadcasts. Pirated content continues to circulate, including rips of legitimate video sites’ biggest shows. The combination of lobbyist pressure and increasingly aggressive trolling eventually spurs Congress to crack down. Lawmakers begin debating a sweeping bill called the Stop Online Piracy Act (SOPA), which requires ISPs to block any foreign sites that host illegal copies of photos, videos, or music. This includes any peer-to-peer services with users outside the US. Internet advocacy groups protest SOPA, holding an online “blackout” in protest. But they lack the support of web giants like Google — its partner, Comcast, staunchly supports the bill — and peer-to-peer platforms’ reputation for unsavory content makes them easy targets for lawmakers. The law passes in 2012, and ISPs quickly block P2P streaming systems without the threat of FCC censure. The resulting crackdown scuttles some innovative projects, including a popular Lego-like game called Minecraft, which had integrated a peer-to-peer streaming system for players. And it galvanizes young voters into political awareness. Some of their enthusiasm is captured by a growing right-wing extremist movement, which has operated under the radar, thanks to decentralized video. 2013–2015: The Backlash The internet interprets censorship as damage and routes around it Peer-to-peer video is increasingly inaccessible, alongside foreign streaming services like DailyMotion and Tudou, and people flock to services like FaceTime, Hulu, and Justin.tv. This sudden growth adds both technical and social pressure. Users submit swathes of popular videos like the peer-to-peer hit “Charlie Bit My Finger,” offering welcome ad revenue but requiring arduous hunts for the original creators. Griefers launch all-out swatting campaigns against live performers. AT&T attempts to justify blocking Apple’s FaceTime under SOPA, making the service unavailable to many iPhone users on its network. And as mainstream platforms face more scrutiny, disturbing reports suggest that conglomerates like Sony and NBCUniversal turned a blind eye toward streamers accused of sexual misconduct, or even offered help by suppressing rumors on their platforms. It’s especially troubling because kids’ content is thriving on the services. 
Children’s channels are filtered to remove disturbing content, but they’re also filled with product placement, free from the requirements placed on broadcast TV. And while their young stars have the support of a studio system, it also places strict rules on their conduct — which, combined with the always-on ethos of streaming, can prove psychologically damaging. Peer-to-peer video devotees take increasingly extreme measures to stay online. They respond to the ISP bans by developing local mesh networks that can stream video across limited ranges, creating pocket subcultures split along geographical lines. Some popular videos make the leap between meshnets. Re-edited versions of a 9/11 conspiracy documentary called Loose Change becomes a rare national hit across the meshnets, circulating throughout California and across the northern Appalachian region. Aspiring streamers flock to dense urban centers like New York and Los Angeles whose networks are still closely watched by mainstream sites’ talent scouts. (Similar scouts watch international sites, poaching stars like DailyMotion streamer Felix “PewDiePie” Kjellberg to host Comedy Central’s Pew.0.) Others gather in smaller cities like Kansas City, Missouri, and Akron, Ohio, creating regional media hubs colloquially known as “streamtowns.” Streamers from isolated areas with a strong survivalist tradition are often lured into a burgeoning network of far-right media compounds, intermittently monitored by the FBI. Non-video social media becomes more atomized, regionalized, and personal. Facebook CEO Mark Zuckerberg puts a premium on encryption, declaring in 2013 that “the future is private.” (Encryption and limited virality make Facebook less attractive for both pirates and anti-piracy enforcers.) In 2014, public micro-blogging platform Twitter becomes a wire news service for verified businesses and journalists, following a widely criticized public shaming frenzy on the site. Microsoft acquires a buzzy VR startup called Oculus and integrates its technology into Second Life, offering a virtual world anchored by persistent identities and a real-money economy, although its crackdown on sexual content — particularly quirky subcultures like furries — draws some criticism. To fight griefers, mainline sites downrank and demonetize most political discussion, limiting divisive topics like vaccine denial and climate change to a handful of carefully vetted channels. Movements like Occupy Wall Street, organized through a local New York meshnet, have earned little mainstream attention. Private networks can avoid censorship but breed unverified rumors and conspiracy theories, which incubate with little outside awareness or intervention. Streaming sites begin adopting sophisticated machine learning systems and mining sensitive user data gathered by ISPs, which is made possible by consolidation deals like the 2011 merger between Comcast and NBCUniversal. Drawing on Google’s AI research, Comcast-NBCUniversal’s juggernaut Justin.tv carefully parses the most minor shifts in video performance to set advertising rates and surface content, leaving streamers at the mercy of an unknowable algorithm. 2016–2020: The Calm The internet’s immune system is weaker than we think The internet of 2016 has its critics. 
Media theorists question the “mind-numbing wasteland of sanitized, algorithm-driven monocultures” in which a few media gatekeepers produce limited quantities of web television for the broadest possible audience, padded with some superficially personalized elements like custom title cards that look different for each user. A housing bubble within Second Life has made the thriving virtual world inaccessible to many lower-income Americans, leading to accusations of “virtual gentrification” and debilitating digital mortgages for some unlucky residents. Even so, it’s seen as widely superior to the chaos of the meshnet, which becomes a persistent target for law enforcement after a series of violent inter-network clashes and domestic extremist attacks. With no centralized point of attack, hacking and misinformation campaigns by meshnet griefers and Russian cyber-operatives fail to land, and Hillary Clinton defeats opponent Ted Cruz by a narrow margin in the 2016 presidential election. President Clinton leads a meshnet compound crackdown with bipartisan congressional support — although ardent progressives see it as a cynical gift to the telecom industry and a substitute for meaningful gun control, while populist conservatives decry the advent of a “Waco 2.0.” The newly merged Comcast-Google-AOL-NBCUniversal offers glowing coverage and algorithmically tailored promotion of the campaign. (Disclosure: Comcast-Google-AOL-NBCUniversal is a minority investor in Vox Media.) While aimed at violent extremists, the crackdown embitters many local streamtowns, and large networks grow paranoid over fears of police infiltration. Conspiracy theorist Alex Jones rallies political support from the local Austin meshnet, running for state Congress on a platform of Texan independence. Political concerns leave other networks nearly untouched — including the Miami meshnet, a hotbed for organized crime in the swing state of Florida. Some small networks are repurposed as honeypot operations by small-time blackmailers who trawl their nodes for nude photos and other embarrassing material. Clinton’s FCC starts a massive push for municipal internet development, hoping to unify a geographically polarized country. But powerful telecom-internet-media conglomerates immediately mire the plan in litigation. As the 2020 election approaches, a superficial national calm belies a series of brewing secessionist campaigns and potent localized conspiracy theories. A DC-area network plays host to a supposed Department of Energy operative codenamed “Q” who offers dire warnings about President Clinton and a network of baby-eating satanists — warnings that Fox News promotes on the popular web version of its news channel. A grassroots “Occupy Airwaves” movement is spreading open-source plans for a long-wave transmitter that can bridge the gaps between networks, creating a full-scale decentralized alternative internet. Conversely, the wealthy Mercer family is building networks that offer the illusion of a local meshnet, laced with propaganda for their preferred candidate: Donald Trump. Amid all of this, Apple adds an extra feature to its popular FaceTime application, capitalizing on the app’s widespread popularity. It’s a geolocation-based dating tool where users can “pin” short videos to participating bars and restaurants, hoping to attract another patron on a date. It’s called FaceTime Dating. Source
  9. What is going on As you may have heard already, because of brexit, Google is moving UK citizens data from the Northern Ireland data controller to the US one (Google LLC). Leaving the EU, UK citizens are not protected anymore by GDPR, and while this may be unfair, Google is legally allowed to do it. The problem Even if I'm an Italian citizen and I live in Italy, a few days ago I received this email from them: What's wrong with it? The point is that I'm an Italian citizen, living in Italy. I have nothing to do with UK (even if I lived there for a few years in the past, my account was created from Italy). Why do they mention "UK leaving EU" to me, if I don't live in UK? I tried to contact them multiple times on their @Google account on Twitter, but I got no reply at all. I tried to search online and it looks like I'm not alone, they are doing this to many other people: https://support.google.com/accounts/thread/29317992?hl=en&authuser=1 Looking for help What should I do? Is this legally allowed? If there was an easy way to complain with them, I would have done it already, but I've tried to search on their website (even googling it... no pun intended) but I couldn't find a single contact form to report this issue and of course they are ignoring both Twitter and that forum I linked previously. Should I report them to the Privacy Authority? If yes, how? Full text of the email Here is the full text of the email I received: We’re improving our Terms of Service and making them easier for you to understand. The changes will take effect on 31 March 2020, and they won’t impact the way that you use Google services. And, because the United Kingdom (UK) is leaving the European Union (EU), Google LLC will now be the service provider and the data controller responsible for your information and for complying with applicable privacy laws for UK consumer users. For more details, we’ve provided a summary of the key changes and Frequently asked questions. And the next time that you visit Google, you’ll have the chance to review and accept the new Terms. At a glance, here’s what this update means for you: • Improved readability: While our Terms remain a legal document, we’ve done our best to make them easier to understand, including by adding links to useful information and providing definitions. • Better communication: We’ve clearly explained when we’ll make changes to our services (like adding or removing a feature) and when we’ll restrict or end a user’s access. And we’ll do more to notify you when a change negatively impacts your experience on our services. • Adding Google Chrome, Google Chrome OS and Google Drive to the Terms: Our improved Terms now cover Google Chrome, Google Chrome OS and Google Drive, which also have service-specific terms and policies to help you understand what’s unique to those services. • Your service provider and data controller is now Google LLC: Because the UK is leaving the EU, we’ve updated our Terms so that a United States-based company, Google LLC, is now your service provider instead of Google Ireland Limited. Google LLC will also become the data controller responsible for your information and complying with applicable privacy laws. We’re making similar changes to the Terms of Service for YouTube, YouTube Paid Services and Google Play. These changes to our Terms and privacy policy don’t affect your privacy settings or the way that we treat your information (see the privacy policy for details). 
As a reminder, you can always visit your Google Account to review your privacy settings and manage how your data is used. If you’re the guardian of a child under the age required to manage their own Google Account and you use Family Link to manage their use of Google services, please note that when you accept our new Terms, you do so on their behalf as well, and you may want to discuss these changes with them. And of course, if you don’t agree to our new Terms and what we can expect from each other as you use our services, you can find more information about your options in our Frequently asked questions. Thank you for using Google’s services. Your Google team Source
  10. Most expected this was coming, just not this soon. While the new Chromium-based Edge was still in development, Google defended its decision to label it an unsupported browser on sites such as Duo for web, Meet and YouTube. Now that the Edge stable version has been released, the search engine giant has started to show its true colors to Microsoft, which is actively contributing to Chromium. The company is now aggressively prompting Edge users on its websites, such as Google Search, Google News, Google Docs and Google Translate, to “switch to Chrome”. It all started with the Chrome Web Store, where Google began recommending Chrome in order to “use extensions securely”. While a normal user may not recognize this as an ad, non-Chrome and classic Edge users will definitely perceive the prompts now appearing on Google properties as pop-up ads. Folks over at MSPU found this on the search giant’s home page, and we noticed Google is doing the same on its News, Docs and Translate websites as well. We even got an unsupported browser warning for Google Drive, albeit in Edge Canary and not in the release version. New Edge users don’t need to jump to Chrome, as Microsoft has equipped the Edge browser, which comes with automatic updates, with alternatives for Safe Browsing, Google Translate and Chrome’s ad blocker. Chromium-based Edge also has Google’s ad blocker enabled and, like Chrome, blocks ads on websites that show abusive ads. While Chrome protects its users when they visit dangerous and malware websites via the Safe Browsing feature, new Edge has Microsoft Defender SmartScreen and Potentially Unwanted Application download protection integrated to guard its users. Talking about updates, Edge Chromium stable now gets updated regularly, like Chrome. Where Chrome auto-translates pages with its Translate service, new Edge has Microsoft Translator integrated to translate pages. Google will soon load all of its websites with slogans to switch to Chrome when it detects you’re using the new Edge browser. Get accustomed to them, ignore them, switch to Chrome (will you do that?) or another browser, or start using search engines such as Bing and DuckDuckGo and browsers such as Firefox and Vivaldi to get a reprieve from these ads, which also protects your privacy to some extent. Source
  11. There have been a lot of reports from Chrome users on the official help forum and social media sites that PDFs are no longer loaded by the Chrome PDF Viewer plugin in the Chrome 80 stable version on Windows and Mac; the PDFs appear gray or blank or don’t render completely. The irony is that the latest Chrome update, 80.0.3987.122, which fixed serious vulnerabilities, is also affected by this issue. The issue doesn’t exist in the Chrome Beta and Canary versions. A thread on the Chrome subreddit confirms that the “Real-time Phishing protection” feature recently introduced for Chrome behind the setting “Make searches and browsing better” (which sends URLs of pages the user visits to Google) is causing the issue when opted in to, and that the problem doesn’t occur when the setting is unchecked. After investigating, the Chromium team found the root cause and is disabling the full URL check for PDF files: “When opening a PDF link, the final check is triggered from the renderer with type kPluginResource[2]. When the feature is enabled, the safe_browsing_url_checker object of the final check is not successfully destructed. The ownership of this object is transferred in [1]. From the comment in [3], it will be deleted when a pipe connection error happens. From the log I added, the pipe connection error happens immediately, but the error doesn’t happen if the feature is enabled.” We’ve been able to reproduce the issue as per the steps posted by a Chromium employee in the bug report: 1. Run Chrome 80.0.3987.122 with the command-line switch --enable-features=SafeBrowsingRealTimeUrlLookupEnabled. 2. Sign in to Chrome and visit a PDF URL in the address bar to notice Chrome displaying a grey or black screen without loading the PDF contents. Until the fix is served to you via the server, if you’re facing the issue, do the following: 1. Click on the Chrome menu and select Settings. 2. Under You and Google > Sync and Google Services, uncheck “Make searches and browsing better”. When the issue is fixed, re-enable it to stay protected against phishing sites in real time. Are you affected? Source
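If you prefer to script the reproduction step above, a minimal sketch is shown below. The Chrome path and the PDF URL are placeholders to adjust for your system; the feature flag itself is the one quoted from the bug report.

```python
import subprocess
from pathlib import Path

# Hypothetical install location; adjust for your platform
# (e.g. /usr/bin/google-chrome on Linux, or the macOS app bundle binary).
CHROME = Path(r"C:\Program Files\Google\Chrome\Application\chrome.exe")

# Launch Chrome with the real-time Safe Browsing lookup feature force-enabled,
# then open a PDF to check whether it renders or shows a grey/blank page.
subprocess.run([
    str(CHROME),
    "--enable-features=SafeBrowsingRealTimeUrlLookupEnabled",
    "https://www.example.com/sample.pdf",  # any PDF link you normally view in-browser
])
```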
  12. We couldn't have anyone upsetting the Android monopoly, could we? As Huawei takes the initiative to create its own homegrown alternative to the Play Store, Google has reportedly pleaded with the White House to offer it an exemption to again work with the Chinese tech giant. Huawei's inclusion on the Trump administration's Entity List has had dramatic consequences for the company's handset business, preventing it from using Google Mobile Services (GMS) on its latest phones and tablets. According to German wire service Deutsche Press Agentur, Android and Google Play veep Sameer Samat has confirmed that Google has applied for a licence to resume working with Huawei. It's not clear when a decision will be made, or indeed if Google will get its wish. Other firms, most notably Microsoft, have been given a pass. This has allowed Huawei to ship its latest crop of laptops, including the freshly updated Matebook X Pro, with Windows 10. Huawei has said that if Google got an exemption, it would promptly update its newest phones to use Google Mobile Services. Earlier this month, Huawei released its latest flagship, the Mate 30 Pro, in the UK. Due to the embargo, this comes with the open-source version of Android, with punters encouraged to download apps from the Huawei AppGallery, or a separate third-party app store like Amazon's. That said, Huawei's strategy has focused on hoping for the best, but preparing for the worst. These preparations have seen the firm invest over $1bn on its app ecosystem, with more than 3,000 engineers working on the AppGallery, according to a statement from the company released earlier this week. It has also made deals with Western app developers and content providers, most notably Sunday Times publisher News UK, to make its services appear less barren. We've asked Google and Huawei for comment. Huawei has also introduced the ability to download progressive web apps, dubbed "Quick Apps" by the firm, through the AppGallery, which should bump up the app availability numbers – even if they lack the sophistication of a dedicated native app. It's likely this that has motivated Google to take the initiative. Although losing Huawei as a customer is a significant financial body blow to Mountain View, given its enduring popularity in Europe and Asia, it would pale compared to the damage caused by a new product that starts to loosen its stranglehold on the Android sphere. Google Mobile Services can cost as much as $40 per device, and it's likely that many phone vendors, particularly on the cheaper end of the spectrum, would welcome a less-expensive alternative. Complicating matters for Google, the biggest Chinese phone manufacturers (Oppo, Xiaomi and Huawei) have teamed up to simplify the process of deploying apps to their in-house stores. With Google claiming a cool 30 per cent on all Play Store sales, this represents a huge threat to its bottom line. In short, Google has a lot of motivation to rekindle its relationship with Huawei, which was severed for reasons beyond its control. Whether that happens has yet to be confirmed by the current occupant of 1600 Pennsylvania Avenue. Source
  13. The search giant already has a presence in 26 states. Google says the expansion will create thousands of jobs. Google continues to expand far beyond its headquarters in the San Francisco Bay Area. Sundar Pichai, CEO of Google and Alphabet, said Wednesday that the company will invest more than $10 billion in offices and data centers across the US in 2020. "These investments will create thousands of jobs -- including roles within Google, construction jobs in data centers and renewable energy facilities, and opportunities in local businesses in surrounding towns and communities," Pichai said in a blog post. The Mountain View, California-based search giant, which already has a presence in 26 states, said its new investments will focus on 11 states: Colorado, Georgia, Massachusetts, Nebraska, New York, Oklahoma, Ohio, Pennsylvania, Texas, Washington and California. This includes opening Google's new Hudson Square campus in New York City, which the company says gives it the ability to double its local workforce by 2028. Google also said it's opening a new Google Operations Center in Mississippi to improve customer support for its users and partners. Source
  14. Google makes £77bn a year from ads and tens of millions are paid by scammers or rogue schemes Victims who have lost out to scammers promoted high up in Google search results may have a legal claim against the $1 trillion internet giant, lawyers have said. Google is retaining tens of millions in profits it has made from con artists or rogue investment schemes paying for ads so they appear first when consumers are shopping around for “safe” and “protected” savings and bonds accounts. So far the internet firm has ignored calls to return the money or dedicate it to tightening its security to prevent further harm. This could amount to “unjust enrichment” and would allow victims to mount a claim to get some of their money back, according to Bambos Tsiattalou of Stokoe Partnership Solicitors, a law firm. However, taking on the world’s most powerful search engine would not be easy, he added. “Much would depend on Google’s state of knowledge. If Google knew the ads were for scams, and yet continued to publish them, then those defrauded may be in a position to take legal action. Otherwise, they would be unlikely to be able to hold Google liable. “One difficulty is that internet companies often claim that they are not publishers, legally speaking. They argue that they merely transmit information, and so are no more liable than a phone company when a fraudulent call travels down their wires,” he said. It comes as watchdogs seek to crack down on fraud in the internet age. An estimated £50.1m was stolen from almost 4,000 savers who were persuaded to pump cash into fake investments in 2018 alone, according to banking body UK Finance. The financial regulator the Financial Conduct Authority (FCA) has been in talks with Google over the problem for months, but recently admitted it had no power to police the firm, as it falls outside of its jurisdiction. Last week the FCA said it was disappointed such “financial harms” were not included in the Government's initial response to new legislation designed to protect vulnerable groups surfing the web. Mr Tsiattalou said the authorities had a long way to go in the fight for proper online regulation. “The Government claims to want to make Britain the safest place in the world to be online. Yet scams can be advertised by Google with relative impunity, once the advertiser can claim it did not actually know the ads were fraudulent,” he said. Google declined to comment, but has said it is working with the regulator on a long-term solution to the issue. Source
  15. Google has brought its popular Lighthouse extension, used by over 400,000 users, to Mozilla Firefox so that web developers can test the performance of their web pages from Firefox. Lighthouse is an open-source tool for testing the performance of web pages through Google's PageSpeed Insights API and was released as an extension for Google Chrome in 2016. Now that the Mozilla Firefox Lighthouse extension has been released, Firefox users can perform page speed tests in their preferred browser. For those not familiar with Lighthouse, it is a browser extension that allows you to generate a report about a web page's performance using the Google PageSpeed API. This API includes real-time data from Google's Chrome User Experience Report and lab data from Lighthouse. The report will display information on how fast the page loads and what issues are affecting its performance, and will offer suggestions on how to increase the page's performance, accessibility, and SEO. For example, a Lighthouse report for a Google search results page provides a score ranging from 0-100 in each of the performance, accessibility, best practices, and SEO categories. The report's real value comes in the form of suggestions and optimization tips to increase each category's score and thus the speed of the web page. For web developers this is a very useful tool, and while it is very difficult to achieve a high score, especially if the page displays ads, it does provide numerous useful suggestions on how to optimize a web site to increase performance for its visitors. If you manage a web site and have not used Lighthouse before, you should give it a try as I am sure you will find suggestions that you can use to increase your site's performance. Source
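For those who want the raw numbers without installing the extension, here is a small Python sketch against the public PageSpeed Insights v5 endpoint that Lighthouse reports are built on. The endpoint and field names are the publicly documented ones, but treat the details as assumptions: heavier use requires an API key, and the response layout may change over time.

```python
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def lighthouse_scores(url):
    """Fetch Lighthouse category scores (0-100) for a page via PageSpeed Insights v5."""
    params = [("url", url), ("strategy", "desktop")] + [
        ("category", c) for c in ("performance", "accessibility", "best-practices", "seo")
    ]
    data = requests.get(API, params=params, timeout=60).json()
    categories = data["lighthouseResult"]["categories"]
    # Scores come back on a 0-1 scale; convert to the familiar 0-100 display values.
    return {name: round(info["score"] * 100) for name, info in categories.items()}

print(lighthouse_scores("https://www.google.com/search?q=example"))
```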
  16. Google wants to clear things up for Huawei device users: Google’s apps and services cannot be preloaded on new Huawei devices and are not available due to U.S. government restrictions. If users try to download Google apps and services through a side door, or essentially download them from somewhere other than the Play Store, bad things can happen. The company published this information in a support article for its Android Help Community, titled “Answering your questions on Huawei devices and Google,” on Friday. It said that it had continued to receive many questions on whether Google services could work on new Huawei devices, and therefore wanted to offer guidance. The U.S. government banned Google, and all American companies, from doing business with Huawei in May of last year due to national security concerns. In the article, Google stated that it has continued to work with Huawei to provide security updates and updates to Google’s apps and services on Huawei device models available to the public on or before May 16, 2019. That’s when the company was placed on the Entity List, the U.S. government’s blacklist. The U.S. government has issued temporary general licenses that allow Google to collaborate with Huawei on these models. Google said that it would continue to provide updates to the Huawei devices mentioned above “as long as it is permitted.” The company cannot provide updates to new Huawei devices made available after May 16, 2019. These new models are not certified Play Protect devices, or devices that are vetted by Google to ensure they are secure, and they do not have the Play Protect software preloaded. Google’s Play Protect software is built-in malware protection for Android. “To protect user data privacy, security and safeguard the overall experience, the Google Play Store, Google Play Protect and Google’s core apps (including Gmail, YouTube, Maps and others) are only available on Play Protect certified devices,” wrote Tristan Ostrowski, Android and Play legal director. “Play Protect certified devices go through a rigorous security review and compatibility testing process, performed by Google, to ensure user data and app information are kept safe. They also come from the factory with our Google Play Protect software, which provides protection against the device being compromised.” However, there is another way to get Google apps and services. This is called sideloading, or downloading an app from some place other than the Play Store. In the article, Google advises users not to do this, for their own security. “In addition, sideloaded Google apps will not work reliably because we do not allow these services to run on uncertified devices where security may be compromised. Sideloading Google’s apps also carries a high risk of installing an app that has been altered or tampered with in ways that can compromise user security.” Unfortunately for Huawei device users, the situation between the U.S. and Huawei doesn’t seem to be getting any better. The U.S. government has recently claimed that it has proof that Huawei has “back doors” built-in that allow it to spy on mobile phone networks employing Huawei equipment. It also charged Huawei with three new crimes: conspiracy to steal trade secrets, conspiracy to commit wire fraud and racketeering conspiracy. At least for now, it looks like new Huawei device users will have to get used to living life without Google. Source
  17. It wants to be clearer about changes that could affect you. Terms of service still tend to read like legal alphabet soup, but Google thinks it can do better. It's notifying users of a TOS change on March 31st that, among other things, should remove some of the mystery. The internet giant said its new terms are still written in legalese, but that the company has "done [its] best" to make them easier to grasp, including definitions and links. Google is promising better overall communication, too, clarifying just when it will change services or limit access. It aims to send more notifications if changes affect service. The new terms also cover Chrome, Chrome OS and Google Drive. Google also isn't taking any chances and stresses that it's neither changing the privacy policy nor asking for restrictions on your legal rights. Google doesn't expect the new terms to have a meaningful impact on how you use its services. At first glance, this is really about ensuring that more people read the TOS and understand why Google took action against some material. This won't satisfy people unhappy with Google's choices on privacy and other key areas -- it might, however, clarify the company's position during any disputes. More info at : Google Privacy & Terms Source
  18. Update: It turns out that Facebook has been well aware for months of this private WhatsApp chat flaw. Thanks to Twitter user @hackrzvijay, we know that Facebook was notified back in November 2019 about this security flaw. However, Facebook didn't do anything about it. The Twitter user in question reported the problem to Facebook with the intention of receiving a cash bounty. In this tweet, the hacker posts a message from Facebook declining to give a bounty because the ability for anyone to find invite codes online for private WhatsApp chat groups is "an intentional product decision." Facebook then says that it cannot control what Google and other search engines index, so its hands are tied. As far as we can tell, both Facebook and Google are still not talking publicly about this problem, but this Facebook message makes it seem as though Facebook doesn't think there's anything wrong with your private WhatsApp chat groups being easily accessible by anyone.

Original article: According to a new report from Vice, private WhatsApp group invites might not actually be so private. Through some pretty basic Google searching, it's relatively easy to gain access to private chat groups. Normally, private WhatsApp group chats are only accessible via an invite code that gets handed out by the moderators of the chat. These invite codes, though, are simply URLs with specific strings of text. It appears that Google is indexing at least some of these invites, which enables pretty much anyone with Google access to find them.

Now, before you get out the pitchforks and start storming Google's gate, from the outset this appears to be a WhatsApp problem (or, more specifically, a Facebook problem, as it owns WhatsApp). Google uses crawlers to index URLs across the internet, and it is very easy for websites and apps to place a line of code on pages that tells these Google crawlers not to index the information there. The likely reason behind this problem is WhatsApp failing to do this. Vice reached out to both Google and Facebook about this matter but didn't receive a response.

If you want to comb through Google Search to find out if your private WhatsApp group is indexed, just start with a "chat.whatsapp.com" string and then enter in some information specific to your chat. Vice did this and was able to find several chat groups related to sharing porn as well as a chat that describes itself as being for NGOs accredited by the United Nations. These chat groups listed out members' names as well as contact information, in some cases including phone numbers.

This story will no doubt make the rounds today, and WhatsApp and Facebook will need to respond soon. There are about to be a lot of angry users.

Source
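For context on the "line of code" mentioned above: the standard way to keep a page out of search results is a robots noindex directive, sent either as an in-page meta tag or as an X-Robots-Tag response header. Below is a minimal, purely illustrative sketch of a hypothetical invite page (using Flask; this is not how WhatsApp actually serves invite links) that applies both.

```python
# Hypothetical illustration only: a Flask page that tells crawlers not to index it,
# via both a robots meta tag and an X-Robots-Tag response header.
# Assumptions: Flask is installed; the /invite/<code> route is invented for this example.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/invite/<code>")
def invite(code):
    html = (
        "<html><head>"
        '<meta name="robots" content="noindex, nofollow">'  # in-page directive
        "</head><body>Group invite page</body></html>"
    )
    resp = make_response(html)
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"  # header-level directive
    return resp

if __name__ == "__main__":
    app.run()
```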
  19. Recently, I wrote about the Brave team's report on the data collection and bidding practices of government-associated websites in the United Kingdom. The conclusions in that report were shocking, as it was found that many of the nation's council websites allowed data to be collected without permission. Almost 7 million accounts were linked to data broker LiveRamp, which until recently was associated with Cambridge Analytica. These include accounts linked to websites related to disability and addiction. Google serves 196 of the UK's council websites, and the data made available through the Real-time Bidding (RTB) system includes location, interests and the media that is being consumed. I recommend you read the full report to get a sense of the degree to which our data is being used, and the extent to which the likes of Google and Facebook have control of the market.

Following up on that case, Brave has now come out with criticism and recommendations on how the failure to enforce GDPR (General Data Protection Regulation) rules has allowed Google to seize the advertising market, to the detriment of end users. So now Brave is putting more energy into exposing the exploitation of data by tech monopolies.

Brave Magnifies Google Criticism, Says it is Built on Unfair Advantages

On February 18, Brave published another telling post on Google's practices and the UK Competition & Markets Authority's inadequate policing. The formal report revealed a lot about the flaws in the enforcement of GDPR regulation and the consequences of those flaws. The Brave team, led by Johnny Ryan, the company's Chief Policy & Industry Relations Officer, who had previously described to authorities how Google circumvented GDPR regulation, wrote a letter to the CMA that covered a wide range of things. It was specifically designed to address two issues: "a functional separation of digital platforms" and "internal and external enforcement" of data protection on RTB markets. If these solutions are enforced together, data protection would be greatly enhanced.

The post says that Google's monopoly is built on an internal data free-for-all, essentially because Google's various data verticals and processes gather data from several consumer-facing platforms and third-party apps. Similar things are said about Facebook. Their competitors are at a disadvantage, simply because they do not have as many sources and types of data as the two companies do. Specifically, the way the law is enforced means that there is a lack of protection when it comes to this internal free-for-all.

The team looked at the GDPR regulation to see where current enforcement has failed. The letter points to several articles in the GDPR, including Article 5(1)b and Recital 32, which, in a nutshell, state that data must be used for explicitly stated purposes only and that consent should be given for each of the processes in data collection and use.

Brave describing both internal and external enforcement of data protection.

The second half of the report deals with the functioning of RTB markets. In this extended section, Brave first argues that the CMA's report is wrong in its view "that robust enforcement of EU data protection law in the real-time bidding online advertising market would advantage Google." It offers three thorough arguments against the CMA's reservations about enforcing protection in RTB markets, which I suggest you read for yourself.
I would like to point out one particular section in this part, which falls under the heading "privacy harm." Reiterating what was said in the previous report, Brave notes that the RTB systems "broadcast what Internet users read, watch, and listen to online to thousands of companies, without protection of the data once broadcast." This can have severe ramifications. They offer a few examples to illustrate the nature of the problem: pricing products differently for different customers, or micro-targeting voters with misinformation. We all know what the internet has done for elections. Furthermore, this occurs on a scale of billions of times per day, without anyone knowing how or where the data will end up being used.

So What Does Brave Recommend for Data Protection?

Brave's recommendations.

Brave ends the report with some recommendations on how to proceed with regulation. I'll summarize the key takeaways. First, it recommends that limitations be established so that platforms do not bundle their services, thereby enforcing functional separation. Second, it suggests that the CMA should work with European data protection authorities to enforce this purpose limitation. The European Data Protection Supervisor (EDPS) is hosting a meeting in Spring 2020, and this is an ideal opportunity to collaborate and ensure better standards. Third, it should do the same with the European Commission's DG Competition and California's Department of Justice. Fourth, it urges the CMA and the Information Commissioner's Office (ICO) to enforce regulation on the external free-for-all, which would help end data exploitation. Lastly, it asks the CMA and ICO to work with the Irish and Belgian data protection authorities to ensure robust protection against internal and external data free-for-alls in digital advertising.

Conclusion

In short, Brave has now examined the flaws in the advertising system in the UK and provided recommendations that can prevent monopolies from cementing their position, allowing for a freer and fairer market, both for other platforms and for consumers. Brave has actually been looking into RTB markets since 2018, and it has posted a timeline of RTB complaints, which offers a comprehensive look into how it has "gathered evidence on the biggest data breach in history." Brave is working with 21 entities to end data exploitation using GDPR. Fortunately, we now have alternatives when selecting our online platforms, as well as a growing public awareness that we must resist the exploitation of our data.

Resources

https://brave.com/ukcouncilsreport/
https://brave.com/rtb-updates/
https://brave.com/wp-content/uploads/2020/02/12-February-2020-Brave-response-to-CMA.pdf
https://assets.publishing.service.gov.uk/media/5dfa0580ed915d0933009761/Interim_report.pdf

Source
  20. Google tells Samsung to stop making changes in Android

Rather than securing the devices, the changes make them vulnerable to hacks

Google has slammed some of the leading mobile manufacturers for altering Linux kernel code within its Android platform. According to Google's Project Zero security team, several phone makers have tinkered with the software in order to make their devices more secure - however, in the process, they have actually ended up making the phones vulnerable to serious security bugs. This includes Samsung, whose tinkering with the Android Linux kernel has resulted in exposing the company's devices to a range of threats.

Creating vulnerabilities

Google has suggested that manufacturers should use Android's inbuilt security features rather than making unnecessary changes to the core kernel. Citing the example of Samsung's Galaxy A50, Google's Jann Horn revealed that while making these changes, Samsung added custom drivers, thus creating direct access to the kernel. While this was meant to enhance security on the device, it created a memory corruption bug.

Samsung described the bug as a moderate issue consisting of use-after-free and double-free vulnerabilities on devices running Android 9 Pie and Android 10, affecting the company's PROCA (Process Authenticator) security subsystem. The company patched the bug in its recent February update.

Horn's post also suggests that device-specific kernel changes are a frequent source of vulnerabilities; he termed them "unnecessary" and said they negate Google's work in making the OS secure. He highlighted another example from Samsung, stating that one of the changes in a device was aimed at restricting an attacker who had gained "arbitrary kernel read/write." Calling these changes "futile", he noted that the engineering resources would have been better spent ensuring that an attacker never reaches that point in the first place. He concluded with an appeal that "ideally, all vendors should move towards using, and frequently applying updates from, supported upstream kernels."

Source: Google tells Samsung to stop making changes in Android (TechRadar)
  21. Should they be allowed to grab our stuff just cos it's 'popular' and it works?

Not to be outdone by Google in ominous warnings over the future of software, Oracle has declared to American Supreme Court justices that no company would make an "enormous investment" like it did in Java SE if rivals get a free pass to copy code simply because it is "popular" and "functional". The firm filed a brief yesterday (PDF) to fend off Google's appeal in the highest court in the United States. The search giant is trying to overturn a Federal Circuit ruling over Google's use of Java code in the Android mobile operating system that would leave it on the hook for copyright damages estimated at $9bn+.

Oracle held that the class library APIs it has been tussling with Google's Android over since August 2010 are a "literary work", countering Mountain View's assertion last month that the "declarations were highly functional, rather than expressive (PDF)". Big Red wrote in the document that there had been "creative choices – both [in] writing the declaring code and organizing the programs" that were "critical to Java SE's success", adding that Sun Microsystems and Oracle had collectively invested "hundreds of millions of dollars" attracting developers and developing the platform.

It also shot down Google's merger doctrine argument, which holds that what the code does and the way it was written (the idea and its expression) have merged into one and the same thing, which Big Red acidly characterised as "an invitation" to the court to "rewrite the Copyright Act". As for Google's argument that once you dismiss Java SE's "conceptual" choices, all that remains are "unoriginal" names, Oracle snapped: "That is like saying once you choose a plot, the story writes itself."

In a 70-page broadside, Big Red called Mountain View's policy arguments "legally irrelevant" to fair use, adding there was "no settled practice of pirating valuable software and incorporating it into competing products". Countering Google's claim that it used only a small portion of the Java code base, Oracle retorted that Google's copying was "substantial" because of its "importance", and that the justices should disregard that Google copied "only a fraction of a large work".

No company will make the enormous investment required to launch a groundbreaking work like Java SE if this Court declares that a competitor may copy it precisely because it has become so popular, or because it is functional — like all computer code.

Even Andy Rubin said we were rivals

Big Red characterised Google's problem – which it noted had been conceded by Android founder Andy Rubin in earlier testimony – as being that Sun's "APIs are copyrighted". It remarked: "Google could have taken the open-source license for free. But Google considered the give-back obligation 'unacceptable'." The database vendor also held in its brief that Google's use of the code in question was "commercial" – which would weigh against the fair use ruling – and claimed that "Google's concededly 'competing' product harmed Java SE in actual and potential markets", pointing to Oracle CEO Safra Catz's testimony back in 2016 (PDF) about a discount given to Amazon for its Paperwhite e-reader:

Amazon switched from the Java platform to Android, then leveraged its ability to use Android for free to secure a 97.5 per cent price concession from Oracle.

(A San Francisco jury ruled in favour of Mountain View's fair use argument soon after the Oracle boss's testimony.)
Big Red also added in yesterday's brief that Google could have licensed its code, but chose not to, opining: "Developers offer open-source licenses because it is in their business interest. Market forces likewise foster interoperability. Consumers demand products that work together, so software vendors 'wall off' their products at their peril." It also said that, seemingly in opposition to its own argument, Google had "admitted that it purposely made Android incompatible with Java". The case is Google LLC (Petitioner) v Oracle America, Inc and interested readers can follow the action here. Source
  22. Google Project Zero scolds Samsung and other vendors for adding features that undermine Android security.

Samsung's attempt to prevent attacks on Galaxy phones by modifying kernel code ended up exposing it to more security bugs, according to Google Project Zero (GPZ). Not only are smartphone makers like Samsung creating more vulnerabilities by adding downstream custom drivers for direct hardware access to Android's Linux kernel, but vendors would also be better off using security features that already exist in the Linux kernel, according to GPZ researcher Jann Horn.

It was this type of mistake that Horn found in the Android kernel on the Samsung Galaxy A50. But as he notes, what Samsung did is pretty common among all smartphone vendors: adding code to the Linux kernel downstream that upstream kernel developers haven't reviewed. Even when these downstream customizations are meant to add security to a device, they also introduce security bugs. Samsung's intended kernel security mitigations introduced a memory corruption bug that Google reported to Samsung in November. It was patched in Samsung's just-released February update for Galaxy phones.

The issue affects Samsung's extra security subsystem called PROCA, or Process Authenticator. Samsung describes the bug, SVE-2019-16132, as a moderate issue consisting of use-after-free and double-free vulnerabilities in PROCA that allow "possible arbitrary code execution" on some Galaxy devices running Android 9.0 and 10.0. Incidentally, the February update also includes a patch for a critical flaw in "TEEGRIS devices", referring to the Trusted Execution Environment (TEE) on newer Galaxy phones that contain Samsung's proprietary TEE operating system. The Galaxy S10 is among TEEGRIS devices.

But Horn's new blog post is focused on efforts in Android to reduce the security impact of vendors adding unique code to the kernel. "Android has been reducing the security impact of such code by locking down which processes have access to device drivers, which are often vendor-specific," explains Horn. An example is that newer Android phones access hardware through dedicated helper processes, collectively known as the Hardware Abstraction Layer (HAL) in Android. But Horn says vendors modifying how core parts of the Linux kernel work undermines efforts to "lock down the attack surface". Instead, he suggests handset makers use direct hardware access features already supported in Linux, rather than customizing Linux kernel code.

Horn says some of the custom features that Samsung added are "unnecessary" and wouldn't affect the device if they were removed. He speculated that PROCA is meant to restrict an attacker who has already gained read and write access on the kernel. But he reckons Samsung could be more efficient by directing engineering resources to preventing an attacker from getting this access in the first place.

"I believe that device-specific kernel modifications would be better off either being upstreamed or moved into userspace drivers, where they can be implemented in safer programming languages and/or sandboxed, and at the same time won't complicate updates to newer kernel releases," explained Horn.

Source
  23. Google will block files from being downloaded via HTTP when the website domain shows HTTPS

In April 2019, ZDNet reported on a proposal Google had made to other browser makers in an attempt to get everyone on board. The plan, at the time, was that browsers block file downloads that take place via HTTP when the user initiated the file download from a site loaded via HTTPS. Today, Google announced it was formally moving ahead with last year's proposal, and would be making changes to the Chrome browser going forward.

What exactly is Google blocking? According to a release schedule Google published today, starting with Chrome 83, which will be released in June, Chrome will begin blocking "risky downloads." Google will not be banning all HTTP downloads, but only some. The browser maker said last year it did not intend to block HTTP downloads started from HTTP sites, since Chrome is already warning users about the site's poor security via the "Not Secure" indicator in the URL bar.

The plan is to block insecure downloads on sites that appear to be secure (loaded via HTTPS) but where the downloads aren't (loaded via HTTP). Google said that the presence of HTTPS in the site's URL was tricking users into thinking the download was also served via HTTPS, when in some cases it was not. It's these cases that Google is trying to stop.

The new change in Chrome's behavior won't be enforced all of a sudden. Google today published a six-step process during which it will slowly ban HTTP downloads on HTTPS sites:

Chrome 81 (March 2020) - Chrome will print a console message warning about all mixed content downloads.
Chrome 82 (April 2020) - Chrome will warn on mixed content downloads of executables (e.g. .exe).
Chrome 83 (June 2020) - Chrome will block mixed content executables. Chrome will warn on mixed content archives (.zip) and disk images (.iso).
Chrome 84 (August 2020) - Chrome will block mixed content executables, archives and disk images. Chrome will warn on all other mixed content downloads except image, audio, video and text formats.
Chrome 85 (September 2020) - Chrome will warn on mixed content downloads of images, audio, video, and text. Chrome will block all other mixed content downloads.
Chrome 86 (October 2020) - Chrome will block all mixed content downloads.

But Google said it also understands that in some controlled conditions, like intranets, HTTP downloads may carry a lower risk. For these situations, Google said there's a Chrome policy (InsecureContentAllowedForUrls) that can allow HTTP downloads in controlled environments.

Webmasters who want to test whether their sites comply with this new policy can do so right now in Google Chrome Canary, Chrome's testing version. To do so, they'll need to enable the following Chrome flag:

chrome://flags/#treat-unsafe-downloads-as-active-content

Last year, Mozilla also expressed interest in implementing a similar block; however, the Firefox maker has not published any further plans on the matter.

Source
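For site owners who want a quick audit before the June deadline, the sketch below is a rough illustration, assuming the requests and beautifulsoup4 packages are installed and that download links are ordinary <a href> anchors, of how to flag http:// download links served from an https:// page - the pattern Chrome 83 and later starts blocking.

```python
# Minimal sketch: flag "mixed content downloads" - http:// download links on an https:// page.
# Assumptions: 'requests' and 'beautifulsoup4' are installed, and downloads are plain <a> links.
import sys
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

RISKY_EXTENSIONS = (".exe", ".zip", ".iso")  # the formats Chrome warns on/blocks first

def find_mixed_downloads(page_url):
    if urlparse(page_url).scheme != "https":
        return []  # only https pages are affected by this policy
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for anchor in soup.find_all("a", href=True):
        target = urljoin(page_url, anchor["href"])  # resolve relative links
        if urlparse(target).scheme == "http" and target.lower().endswith(RISKY_EXTENSIONS):
            flagged.append(target)
    return flagged

if __name__ == "__main__":
    for link in find_mixed_downloads(sys.argv[1]):
        print("insecure download link:", link)
```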
  24. Google Addressed A Critical Vulnerability In Its Android OS That Affects The Bluetooth Subsystem And Could Be Exploited Without User Interaction

Google has addressed a critical flaw in Android OS that affects the Bluetooth subsystem and could be exploited without user interaction. The vulnerability, tracked as CVE-2020-0022, is a remote code execution flaw that could allow attackers to execute code on the device with the elevated privileges of the Bluetooth daemon when the wireless module is active. The critical vulnerability impacts Android Oreo (8.0 and 8.1) and Pie (9); it is not exploitable on Android 10 for technical reasons and there only triggers a DoS condition in the Bluetooth daemon.

"The most severe vulnerability in this section could enable a remote attacker using a specially crafted transmission to execute arbitrary code within the context of a privileged process." reads the security bulletin published by Android. The flaw was reported to Google by Jan Ruge from the Technische Universität Darmstadt, Secure Mobile Networking Lab.

The risk with this kind of vulnerability is that it could be used to implement 'wormable' behavior in mobile malware, which could rapidly spread from one infected device to another device in its proximity that is reachable via Bluetooth. The issue can be exploited only if the attacker knows the Bluetooth MAC address of the target, but this is quite easy to retrieve.

"On Android 8.0 to 9.0, a remote attacker within proximity can silently execute arbitrary code with the privileges of the Bluetooth daemon as long as Bluetooth is enabled." the researcher wrote in a blog post on the site of IT security consultancy ERNW. "No user interaction is required and only the Bluetooth MAC address of the target devices has to be known. For some devices, the Bluetooth MAC address can be deduced from the WiFi MAC address. This vulnerability can lead to theft of personal data and could potentially be used to spread malware (Short-Distance Worm)."

To mitigate the flaw, Ruge recommends disabling Bluetooth and enabling it only "if strictly necessary." If you do need Bluetooth, it is recommended to keep the device non-discoverable except when pairing with other devices. Android users should apply the latest security patches as soon as possible.

Source
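As a quick, rough check of whether a handset already carries the fix, the sketch below reads the device's reported security patch level over adb. It assumes adb is installed with USB debugging enabled on the phone, and that the relevant fix shipped with the February 2020 (2020-02-01) Android security patch level; a device reporting an older level should be treated as potentially vulnerable.

```python
# Minimal sketch: read an attached Android device's security patch level via adb and
# compare it against the February 2020 level assumed to contain the CVE-2020-0022 fix.
# Assumptions: adb is on PATH, one device is connected with USB debugging enabled.
import subprocess

ASSUMED_FIX_LEVEL = "2020-02-01"  # assumption: patch level carrying the Bluetooth fix

def security_patch_level():
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "2020-02-01"

if __name__ == "__main__":
    level = security_patch_level()
    print("Reported security patch level:", level)
    # ISO-formatted dates compare correctly as plain strings.
    if level < ASSUMED_FIX_LEVEL:
        print("Patch level predates the assumed fix; keep Bluetooth off or non-discoverable.")
    else:
        print("Patch level is at or beyond the assumed fix level.")
```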
  25. In an interview with CBS This Morning, Clearview AI's founder says it's his right to collect photos for the facial recognition app. Clearview AI CEO Hoan Ton-That tells CBS correspondent Errol Barnett that the First Amendment allows his company to scrape the internet for people's photos.

Google, YouTube and Facebook have sent cease-and-desist letters to Clearview AI, the facial recognition company that has been scraping billions of photos off the internet and using them to help more than 600 police departments identify people within seconds. That follows a similar action by Twitter, which sent Clearview AI a cease-and-desist letter for its data scraping in January. The letter from Google-owned YouTube was first seen by CBS News. (Note: CBS News and CNET share the same parent company, ViacomCBS.)

The CEO of Clearview AI, a controversial and secretive facial recognition startup, is defending his company's massive database of searchable faces, saying in an interview on CBS This Morning Wednesday that it's his First Amendment right to collect public photos. He has also compared the practice to what Google does with its search engine.

Facial recognition technology, which proponents argue helps with security and makes your devices more convenient, has drawn scrutiny from lawmakers and advocacy groups. Microsoft, IBM and Amazon, which sells its Rekognition system to law enforcement agencies in the US, have said facial recognition should be regulated by the government, and a few cities, including San Francisco, have banned its use, but there aren't yet any federal laws addressing the issue.

Here is YouTube's full statement: "YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter. And comparisons to Google Search are inaccurate. Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt-out entirely. Clearview secretly collected image data of individuals without their consent, and in violation of rules explicitly forbidding them from doing so."

Facebook has also said that it's reviewing Clearview AI's practices and that it would take action if it learns the company is violating its terms of service. "We have serious concerns with Clearview's practices, which is why we've requested information as part of our ongoing review. How they respond will determine the next steps we take," a Facebook spokesperson told CBS News on Tuesday. Facebook later said it demanded the company stop scraping photos because the activity violates its policies.

Clearview AI attracted wide attention in January after The New York Times reported how the company's app can identify people by comparing their photo to a database of more than 3 billion pictures that Clearview says it's scraped off social media and other sites. The app is used by hundreds of law enforcement agencies in the US to identify those suspected of criminal activities. BuzzFeed News reported that in pitches to law enforcement agencies, Clearview AI had told police to "run wild" with its facial recognition, despite saying that it had restrictions to protect privacy. Critics have called the app a threat to individuals' civil liberties, but Clearview CEO and founder Hoan Ton-That sees things differently.
In an interview with correspondent Errol Barnett on CBS This Morning airing Wednesday, Ton-That compared his company's widespread collection of people's photos to Google's search engine. "Google can pull in information from all different websites," Ton-That said. "So if it's public, you know, and it's out there, it could be inside Google search engine, it can be inside ours as well." Google disagreed with the comparison, calling it misleading and noting several differences between its search engine and Clearview AI. The tech giant argued that Clearview is not a public search engine and gathers data without people's consent while websites have always been able to request not to be found on Google. Clearview AI's founder intends to challenge the cease-and-desist letters from Google and Twitter, arguing that he has a constitutional right to harvest people's public photos. "Our legal counsel has reached out to [Twitter] and are handling it accordingly," Ton-That said. "But there is also a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way." Clearview AI would not be the first tech company to use this defense to justify its data scraping practices, as technology attorney Tiffany C.Li pointed out on Twitter. In 2017, HiQ, a data analytics company, sued LinkedIn for the right to continue scraping public data from the Microsoft-owned social network, claiming that the First Amendment protects that access. The size of the Clearview database dwarfs others in use by law enforcement. The FBI's own database, which taps passport and driver's license photos, is one of the largest, with over 641 million images of US citizens. Clearview also keeps all the images collected, even when the original upload has been deleted. Law enforcement agencies say they've used the app to solve crimes ranging from shoplifting to child sexual exploitation to murder. But privacy advocates warn that the app could return false matches to police and that it could also be used by stalkers and others. They've also warned that facial recognition technologies in general could be used to conduct mass surveillance. A lawsuit filed in Illinois after the Times' report called Clearview AI's software an "insidious encroachment on an individual's liberty" and accused the company of violating the privacy rights of residents in that state. The lawsuit followed Democratic Sen. Edward Markey saying Clearview's app may pose a "chilling" privacy risk. Source