Showing results for tags 'Google'.



Found 879 results

  1. Google is giving developers tools to push users into updating their apps. Google is giving Android developers a new application programming interface (API) that lets them force users to update to the latest version of their app. Google revealed at its Android Dev Summit this week that developers will be able to use the new In-app Updates API to create an immediate or flexible in-app update process, which either forces or nudges a user to update an app. Developers can choose whichever approach is more suitable, depending on how urgent the update is. For example, developers could use the immediate process for critical security updates: after the user opens the app, a full-screen prompt appears and blocks use of the app until the update has been installed. The flexible update allows the user to keep using the app while the update is downloaded. The API should help developers address serious issues if they've rolled out an app with a major bug, but it also gives them a way to bump users up to the latest version of an app when they've built a batch of new features and want all users to have access to them. Google also provided some updates on Kotlin, which according to GitHub is the fastest-growing language in terms of contributors. This month over 118,000 new projects using Kotlin were started in Android Studio, a 10-fold increase on last year, according to Google. Google yesterday also revealed how Android will support different types of 'foldables'. It will support two broad classes: devices like Samsung's Infinity Flex, which has one screen when folded, and devices like the new FlexPai, which has two screens when folded. Android will offer developers "screen continuity" so that when users start a video on the small screen, it seamlessly transfers to the larger screen as the device is unfolded. Source
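The immediate and flexible flows described above are exposed through the Play Core library's AppUpdateManager. As a rough sketch only (it assumes the com.google.android.play:core dependency is available and uses an arbitrary request code; consult the official documentation for the exact current API), an immediate update check might look roughly like this:

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

// Sketch only: assumes the Play Core library (com.google.android.play:core)
// is on the classpath. UPDATE_REQUEST_CODE is an arbitrary value chosen here.
private const val UPDATE_REQUEST_CODE = 1001

fun checkForImmediateUpdate(activity: Activity) {
    val updateManager = AppUpdateManagerFactory.create(activity)
    updateManager.appUpdateInfo.addOnSuccessListener { info ->
        val available = info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE
        if (available && info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)) {
            // Launches the blocking, full-screen update flow described above;
            // AppUpdateType.FLEXIBLE would allow continued use while downloading.
            updateManager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE
            )
        }
    }
}
```

Passing AppUpdateType.FLEXIBLE instead corresponds to the non-blocking flow, where the app lets the download proceed in the background and then prompts the user to restart once it completes.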
  2. There are many problems with web advertising in general, including annoying features like autoplay video ads and pop-ups, as well as problems like "click fraud" that matter to advertisers. This essay, however, focuses on the privacy issues of some of the kinds of ads that Google produces and the history behind them, and on why Larry and Sergey did not seem to consider them when buying DoubleClick, for example. Also discussed is Mozilla and how it is involved (as in the Google/Mozilla search deal), including Brendan Eich, the creator of JavaScript, who eventually left Mozilla to co-found Brave. The difficulty of solving these issues is discussed as well. Of course, advertising is not limited to the web, and advertising in general has many benefits and risks (such as deceptive advertising), most of which will not be discussed here. The history of Google and its advertising comes first. Google was founded in 1998 by Larry Page and Sergey Brin while they were at Stanford, and took VC funding from Kleiner Perkins (KP) and other partners. Google started with the search engine (built on the PageRank algorithm) as its first product, and later added products like Gmail. Eric Schmidt was brought in as CEO in 2001; he recently stepped down but remains on the board. Google went public in 2004, notably using a dual-class stock structure. The first kind of advertising Google did was AdWords, dating back to 2000. AdWords was based on search keywords, and the text ads were displayed at the top of the search results (labelled as ads) and were relatively simple. Typically the highest bidder was shown, and the advertiser paid Google when the user clicked on the ad. AdWords involved relatively little tracking, at least initially, and will not be mentioned much here. At this time Google was also taking a stand against pop-up ads. AdSense, introduced in 2003, consisted of ads shown on webpages themselves, delivered via JavaScript. At least initially it was based on keywords found on the webpages (which Google fetched from its cache, for example), which advertisers could bid on. As with AdWords, Google and the websites get paid when users click on the ads. It also involved little tracking, at least initially. Google's acquisition of DoubleClick, announced in 2007, closed in 2008. DoubleClick itself was founded in 1995. It popularized more sophisticated ad tracking via cookies and the like (often called "retargeting"), whose problems are described here. DoubleClick at one point called its product "Dynamic Advertising Reporting and Targeting", for example. Initially DoubleClick mostly served banner ads, and many users developed so-called banner blindness from them. Cookies themselves were invented at Netscape in 1994, and the IETF group that developed RFC 2109 and RFC 2965 already knew that tracking with "third-party cookies" was a problem (it is mentioned in those RFCs). Those attempts at IETF cookie standards ultimately failed, partly because they were incompatible with the browsers of the time, and led to RFC 6265, which is closer to how cookies are implemented in browsers today. They also led to W3C P3P, famously implemented in IE6, which likewise failed (partly because it was too complex) and was removed in Windows 10, but it was an attempt to get the tracking under control. Google bought Urchin in 2005, turning it into Google Analytics. Urchin was founded in 1998. Initially its product analyzed web server log files, with JavaScript tags being added in Urchin 4 (called the "Urchin Traffic Monitor").
The hosted version, based entirely on JavaScript and created later, was initially called "Urchin on Demand" and was introduced in 2004. The original software that was sold received little attention once Google bought it and turned it into Google Analytics, and it was discontinued in 2012. One problem with the ads is tracking. The current economy is a debt-based economy built on consumption. The more money advertisers can extract from consumers, the more they are willing to spend on ads. This results in tracking getting creepier and creepier, and encourages consolidation of data, for example. Most of the ad tracking is called "retargeting"; it is often based on cookies and JavaScript, and DoubleClick was one of the first to do it. All ads encourage consumption by definition, but tracking ads are particularly bad for these reasons. For example, DoubleClick introduced cross-device retargeting in 2015. It was limited, at least initially, to tracking logged-in users via their user accounts (which any website can do), but it illustrated the trend. Google changed its privacy policy to allow Google accounts to be used for such logged-in user tracking in 2016. Recently Google signed an agreement with MasterCard to obtain credit card sales data. Of course, credit cards directly tie an increase in debt to consumer spending, which in turn can flow to Google as ad dollars. According to http://adage.com/article/digital/google-turns-behavioral-targeting-beef-display-ads/135152/, "In December 2008 Google added DoubleClick cookies to AdSense ads", tying DoubleClick's cookie-based tracking (which dates from long before Google bought it) to AdSense. I assume that AdSense tracking probably did not exist before Google bought DoubleClick. Google Analytics added AdWords and AdSense support in 2009. In 2012, Google changed its privacy policy to allow data to be consolidated, which was also very controversial. In 2014, Google Analytics integrated with DoubleClick, allowing things like remarketing lists to be shared, according to https://analytics.googleblog.com/2014/05/google-analytics-summit-2014-whats-next.html. Remarketing lists are essentially lists of website visitors that can be uniquely identified by things like cookies, and they are one of the ways of targeting ads to users. It can probably be assumed that sharing remarketing lists effectively ties the tracking together. Sharing of Google Analytics remarketing lists with AdWords was introduced in 2015, along with linking of Google Analytics and AdWords "manager" accounts, according to https://adwords.googleblog.com/2015/11/share-google-analytics-data-and.html. "Google Analytics 360" came in 2016, according to https://analytics.googleblog.com/2016/03/introducing-google-analytics-360-suite.html. Remarketing lists for search ads were introduced in 2012 and were tied to Google Analytics in 2015 (though not all data from Google Analytics can be used). They allowed different search ads to be targeted to different visitors based on cookie-based tracking on websites (with sites using special tags for this purpose). For example, you can show different search ads to visitors who visit the site every day. Of course, users often have little control over, and little benefit from, the storage of user data and ad retargeting by trackers, especially when many parties are involved. This was mentioned during the Google/DoubleClick acquisition, for example. Some systems provide more control than others; AdChoices is one example.
AdChoices was an attempt at self-regulation for ad publishers, and used an icon to indicate that data was being collected. You can click the icon to display the privacy policy for the ads or opt out of ad targeting. It was not the same as blocking ads completely, though, and did not solve all of the problems of ads either. There was also an attempt at a Do-Not-Track HTTP header, which was probably too simple (and thus also very vague in its meaning), and there was obviously no guarantee that a site would comply, since it was just an HTTP header (IE11 enabling it by default was also controversial, and Windows 10 no longer does so by default). Some of the problems with these opt-out methods are similar to the problems of the national "do not email" registry proposed in the US CAN-SPAM Act of 2003 for spam messages, and such "opt out" lists for spam are widely considered unacceptable in general. Even "opt-out" or "unsubscribe" links in spam are widely considered untrustworthy for obvious reasons, though legitimate mailing lists also have them. That idea came from the similar "do not call" registry for telephone marketing (to stop marketing phone calls, which were considered even more annoying than spam), but email and internet advertising turned out to be very different from telephone calls, making these laws difficult to enforce. It is far easier to send an email than to call someone, for example, and email is also more difficult to trace to its origin, especially given that the Internet is global. The FTC has a report at https://www.ftc.gov/reports/can-spam-act-2003-national-do-not-email-registy-federal-trade-commission-report-congress describing these problems (it was a report to Congress required by CAN-SPAM), including the possibility that such a list could be abused by spammers. "Closed-loop opt-in" using confirmation emails for mailing lists, on the other hand, is widely accepted, but it is not mentioned in CAN-SPAM. One example of the problem is the tracking of "opt-out" status using cookies in systems like AdChoices, cookies which can obviously be used for other purposes themselves. There are some reasons why these problems were not apparent (for example to Larry and Sergey) when Google bought DoubleClick, or when remarketing lists were shared, or for that matter when Urchin became Google Analytics and its data was merged with ad data. The difficulty of researching things like the tying together of remarketing lists during the writing of this essay shows some of the problems. It seems that no one cared about the privacy implications when remarketing lists in AdSense and DoubleClick were shared, for example. In many cases, advertisers managed "remarketing" lists of "anonymous" visitors who were being tracked by cookies from a central console without thinking of the privacy problems, treating visitors almost as numbers. This ties in with the idea of treating people as "consumers" to be extracted from, which is also fundamentally flawed. Another example of this is AOL, which famously made it difficult to cancel at one point, partly because measuring "customer loyalty" as numbers to be extracted from consumers was part of their culture. To make it worse, they once charged consumers by the time spent on AOL, so the longer users stayed the more revenue they made. The Google-DoubleClick acquisition was also controversial, with EPIC, CDD and US PIRG, for example, filing complaints with the FTC in April 2007, a "first supplement" to the complaint in June 2007, and a "second supplement" in September 2007.
There was also a Senate hearing on Sept 27, 2007 with testimony from a variety of sources regarding the issue. One of the concerns back then was aggregation of tracking data and lack of control by users, though other issues unrelated to ads, like storage of IP addresses by search engines, were also mentioned. Ultimately it took the FTC until the end of 2007 to approve the deal, after a "second request". Before the Google-DoubleClick acquisition, DoubleClick had planned to combine its tracking data with that of Abacus; after FTC scrutiny over the privacy problems, the plan was dropped. Abacus Direct was a market-research company focused on consumer buying behavior. As a result, Abacus had a lot of personal information about consumers, and there were concerns that this data could be merged with DoubleClick data and used to deanonymize people. In 2012, Jonathan Mayer discovered that Google used some tricks in JavaScript to enable tracking in Safari. Google was able to bypass Safari's cookie-blocking policy by using an invisible form to fool Safari into allowing cookies. The FTC fined Google $22.5 million over this behaviour, and more recently there have been lawsuits about it in the UK. There has also been a class-action lawsuit about this in the US. Google argued at the time that the tracking was unintentional and that it was related to Google+ "Plus" buttons on DoubleClick ads (for logged-in users, I believe). It is probably worth mentioning here that a lot of these kinds of buttons (like Facebook's Like buttons, to name another example) do their own tracking too (they generally work by using IFRAMEs pointing back to the website involved), and this has been well known for years. For example, according to https://www.technologyreview.com/s/541351/facebooks-like-buttons-will-soon-track-your-web-browsing-to-target-ads/, Facebook started using the tracking from Like buttons to target ads in 2015. I think the Facebook-WhatsApp acquisition story is also famous by now, including how they eventually allowed data sharing between the two (presumably after years of losses). It is worth mentioning that even the WhatsApp founders now recommend deleting Facebook (especially after the Cambridge Analytica debacle). Now, let's discuss Mozilla. Brendan Eich created JavaScript at Netscape in 1995 and was the CTO of Mozilla Corporation from 2005 to 2014. After he stepped down from Mozilla in 2014 (just after he became CEO, amid bad publicity stemming from his political donations on things like gay marriage), he was one of the founders of Brave, with its Basic Attention Token and so on. Andreas Gal joined Mozilla in 2008 and was the CTO from 2014 until 2015, when he left Mozilla. Mozilla signed the Google search deal in 2004, before Google even went public (let alone bought DoubleClick). Mozilla switched to a Yahoo search deal in late 2014 (by then the Yahoo search engine was based on Microsoft's Bing, I think), which was part of Marissa Mayer's attempt to fix Yahoo before it was sold to Verizon. Recently Mozilla switched back to Google as the default search engine. Brendan Eich mentioned in https://twitter.com/BrendanEich/status/932747825833680897 that "It's not a simple Newtonian-physics (or fake economics based on same) problem." This was about the history of the Google search deal with Mozilla and the fact that it was signed before Google went public (when it was still being funded by VCs).
It is worth mentioning here that Google was founded in 1998, as the now-famous dot-com bubble was approaching its peak and VC funding was plentiful (allowing many startups to grow fast, which was considered more important than profits). Many other dot-com startups of the time had problems and ended up failing when the bubble collapsed around 2001. It is also worth noting that the DoubleClick acquisition dates back to 2007, just before the housing bubble famously collapsed and led to another recession; that bubble probably started just after the dot-com bubble. Brendan Eich mentioned in https://twitter.com/BrendanEich/status/932473969625595904 that "A friend said in 2003 that Sergey declared G would not acquire display ads & arb. Search vs. Display as that would be "evil"." That was before Google even went public (in 2004). Unfortunately no other source was given. It was mentioned on Twitter that Firefox OS enabled tracking protection by default, unlike desktop Firefox. Andreas Gal said in https://twitter.com/andreasgal/status/932757853504339968 that "Yup. I was able to sneak that past management". I then asked "I wonder if you ever talked to Larry/Sergey." and Brendan answered that Andreas didn't, of course. I wonder what would have happened if they had. https://pagefair.com/blog/2017/gdpr_risk_to_the_duopoly/ has some information on the effect of the EU GDPR on Google ads. Notice that AdWords complies if all "personalization" features are removed, for example. This includes things like "remarketing". I suspect that AdWords as first created in 2000 did not have these features. Other features like "remarketing lists for search ads" are also listed as not compliant, and those were of course probably added later too. There was also the infamous cookie law that required notification for placing cookies, which was not that effective but was a major step in that direction, given that most ad tracking (including DoubleClick's) was based on cookies. Google's implementation of GDPR caused some concerns with publishers (http://adage.com/article/digital/tensions-flare-google-publishers-gdpr-looms/313592/), and some publishers blocked EU IP addresses in response to GDPR. Data breaches are also a problem. The AOL search data breach from 2006 is pretty famous. The data was "anonymized", but the search terms were often enough to deanonymize users. Ad tracking data is likely similar, including browsing history and the like. Anonymizing data is a useful technique to avoid accidental abuse, but some kinds of data are hard to anonymize in a way that prevents all abuse. For example, various techniques for anonymizing IP addresses and MAC addresses have been developed, including hashing and truncation (a short sketch appears at the end of this item). Of course, the more data that is consolidated and collected, the higher the risk and impact of a breach. It is worth noting that Google/DoubleClick isn't the only one involved in the ad bubble (though DoubleClick was one of the first to do ad tracking, I think). I think Taboola is often considered even worse than Google, for example. The same fundamental problems with tracking, however, tend to apply to all of the ad networks. Some of the worse ones may use browser fingerprinting via things like JavaScript, which is even worse than the cookie-based tracking that is most commonly used. Browser fingerprinting is generally difficult to prevent on the browser side, but it is well known enough that the WHATWG HTML spec mentions it and marks the parts of the spec where there is a fingerprinting risk.
For example, the list of browser plugins (navigator.plugins in JavaScript) could be used at one point (in Firefox the list used not to be sorted, so it was close to unique for each user, which made fingerprinting even easier), but fortunately plug-ins are dying off anyway because of other problems. The EFF created Panopticlick, which illustrated some of the fingerprinting that was possible, and other examples that became famous included Evercookie by Samy Kamkar. To make things worse, many plugins like Flash had their own cookies as well (though browsers have been getting better at clearing them). It is also worth noting that the current tracking ads are not the only kind of web advertising. There are so-called "first-party" and "third-party" ads and cookies. Examples of first-party ads include Twitter and Reddit ads; examples of third-party ads include DoubleClick and Taboola ads. First-party ads don't have the issues described here. Recently, Google's ad blocking and "better ads" effort (the so-called Coalition for Better Ads) targets annoying ads, but doesn't fix the fundamental issues described here. Apple's tracking protection targets retargeting by limiting the lifetime of cookies, for example (making them less effective for tracking), but does not change the display of ads or make ads less annoying (autoplay video ads are pretty notorious as well, especially with Flash). Now, fixing the problems might be difficult. Obviously it would affect not only shareholders but pretty much everyone else if Google completely got rid of tracking ads. This includes sites depending on Google ads for revenue as well as Google itself. A loose analogy: both Microsoft and Novell used Client Access Licenses (CALs). CALs (called node licenses by Novell, I think) are per-user or per-computer licenses common in server software like NetWare and Windows Server. When Novell moved to Linux (by buying SUSE), it was selling open source software that didn't have CALs (as with Red Hat, customers only paid for support), meaning that Novell could not expect the same level of revenue as in the NetWare days. The story of Sun's open source projects under Jonathan Schwartz (the former "ponytail" CEO), and how Sun eventually had to sell to Oracle, is probably pretty famous as well (examples of open source projects from that period include OpenSolaris, OpenOffice, and OpenJDK). The ad bubble will probably not last forever, though. Bubbles like this one are part of the problem of the current debt-based economy (the main problem being that it allows an almost infinite amount of "debt" in US dollars since the gold standard was abandoned in 1971, most commonly government debt), especially because it encourages extracting as much money as possible from so-called "consumers" (another example is Adobe Creative Cloud subscriptions and how Adobe's stock price rose after they were introduced). Google in 2015 hired Ruth Porat as CFO to bring financial discipline to the company. This included cutting unprofitable projects, especially "Google X" research projects and failed projects like Google Glass. According to https://www.bloomberg.com/news/features/2016-12-08/google-makes-so-much-money-it-never-had-to-worry-about-financial-discipline, one of the things they did was "to force the Other Bets to begin paying for the shared Google services they used". It is probably reasonable to suspect that the increase in ad revenue due to DoubleClick and the rest is part of why Google was able to start so many of these projects in the first place.
One recent example is the change in Google Maps pricing, mentioned in https://www.inderapotheke.de/blog/farewell-google-maps. For Mozilla, a good example illustrating the problems of funding browser development is the Opera browser. Opera was founded in Norway in 1995; its first browser was released in 1996, and the company went public in 2004. The browser used its own engine and had a lot of unique features, like relatively good CSS support early on (unlike Netscape 4 at the time, which famously had relatively poor support and was a problem for web developers for years). At first it was officially a paid browser with a trial version (as Netscape was before 1998), but later Opera showed ads (the choices included banner ads or text-based Google ads) to non-paying customers. They eventually signed a search deal with Google which removed the ads and instead simply made Google the default search engine (like Mozilla's deal). Of course, there wasn't much profit margin in a web browser, so they had to cut costs to keep the stock price and quarterly earnings going up (which made planning for the future difficult, for example). Opera was strong in the mobile world before WebKit became dominant there (before things like the iPhone and Android, when things like WML were common) and may still be strong in some embedded applications, with products like Opera Mini, which was basically remote rendering of web pages (useful when devices had less processing power). Opera never had much market share (though it had plenty of fans back in the day), and in the end it had to switch its desktop browser to Chromium (with the Blink engine) instead of its own engine and codebase (though they did release final updates for the old one that included, for example, TLS enhancements). Opera's browser business was eventually sold to a Chinese consortium, and the remaining company was later renamed Otello. One of Opera's founders later started the Vivaldi browser, which is also based on Chromium/Blink but has many differences. In contrast, the Mozilla Foundation was created as a non-profit organization around 2003, as the old Netscape was dying off with AOL's help (AOL bought Netscape in 1998, by the way). It owns the for-profit Mozilla Corporation for tax reasons (non-profits are exempt from taxes that for-profits are subject to in the US). I think the corporation holds the search deals with Yahoo and Google, for example. You can still donate to the Mozilla Foundation today. Mozilla Firefox 1.0 was released in 2004, after the Foundation was created (and after the branded Netscape 6/7 releases), and quickly took market share from the dominant IE6, which was stagnating the web (by remaining virtually unchanged for a long time without any real development) and was also well known for security problems like the Download.Ject attacks. Microsoft was forced to respond, first with the IE6 update in Windows XP SP2, which in addition to security enhancements also added a few features like pop-up blocking, and then with IE7, which finally brought real enhancements to the core engine that helped web developers (especially in areas like CSS). The old Netscape search deal with Google dates back to 1999 (Netscape.com was Netscape's home page at the time, of course), and the success of that deal probably inspired the later Google search deal that Mozilla made. One alternative to the current tracking ads is the Basic Attention Token. Basic Attention Token is based on the Ethereum cryptocurrency and blockchain (Ethereum is like Bitcoin but uses a different algorithm that is GPU-minable, and it is one of the most popular GPU-minable coins).
It was created by the team behind the Brave browser, which supports it directly. It is intended to "directly measure" attention. "Attention" is measured on the client side (based on local browser history) and tokens are awarded for it (based on "basic attention metrics"), which avoids the privacy issues. This is often described as a "zero-knowledge proof" approach. There are other benefits as well, like reducing the so-called "click fraud" that hurts advertisers and is a common problem with current ads, and removing the need for tracking intermediaries like DoubleClick and Taboola (so advertisers also get more value for their money, since they don't have to pay the middlemen). Many other kinds of tokens and "smart contracts" have been created on Ethereum, and so-called initial coin offerings (ICOs) have been the most common use of Ethereum (helping the price to rise). Of course, there is little to no regulation of them at the moment, which results in many scam ICOs too (they tend to raise money very quickly, partly because it is so easy to send coins to them). There are also systems for paying authors directly, like Patreon, though PayPal or cryptocurrencies can also be used for this purpose (with more friction for donors). Patreon allows money to be "pledged" to specific authors. There are also many kinds of "paywalls" implemented on websites, many of which have their own problems, like relying on cookies to track how many times people have visited a site (to limit the number of free visits before the user has to pay, of course) or making it awkward to post links on Slashdot, Reddit, and Hacker News, whose users often dislike paywalls for obvious reasons (though some paywalls are better than others). Of course, the problems described in this essay, as well as other problems with ads (including annoyance and the performance cost of ads), led to more use of ad blockers, which have their own history. Banner ad blindness has been known for years now, and Google's ads tended to be simple text-based ads, at least initially. One of the first types of blocking was the popup blocker, and Google was taking a stand against popups in the early days (they were well known to be annoying). Popup blockers became common in browsers by the mid-2000s (even IE6 in XP SP2 had one). At one point circa 2002, AOL/Netscape was disabling the popup blocker in Netscape-branded Mozilla releases (at the time there were the Mozilla source code/binaries and the official Netscape-branded builds based on that source). After user backlash they backed off from doing so. This was long before Google bought DoubleClick, for example. Later, more sophisticated ad and cookie blockers like AdBlock Plus and uBlock Origin came out as add-ons to browsers like Firefox, and one is built into Brave of course (along with BAT as a replacement for the lost ad revenue). Many other browsers, including Firefox and IE, also have similar tracking protection, but it is disabled by default and may require that ad-blocking lists (such as EasyList) be loaded manually. Of course, some sites have been attempting to detect ad blockers and ask users to turn them off (even Ars Technica did it at one point, though it only lasted one day), which is ineffective and not a good idea for obvious reasons (including the fact that it reflects badly on the sites that are doing it). Lawsuits against ad blockers were also tried in some countries and were mostly unsuccessful (like the lawsuit against AdBlock Plus in Germany by publishers there). Source
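As a footnote to the IP-address anonymization techniques mentioned earlier in this item (hashing and truncation), here is a minimal, illustrative Kotlin sketch; the salt and addresses are made up, and as the essay notes, neither technique prevents all abuse, since the space of possible IPv4 addresses is small enough to brute-force:

```kotlin
import java.security.MessageDigest

// Illustrative only: two common ways of reducing the identifiability of an
// IPv4 address -- truncating the last octet, or hashing it with a salt.
// Because the IPv4 space is small, a salted hash can still be reversed by
// brute force if the salt leaks, so this is a mitigation, not a guarantee.
fun truncateIpv4(ip: String): String {
    val parts = ip.split(".")
    require(parts.size == 4) { "expected a dotted-quad IPv4 address" }
    return "${parts[0]}.${parts[1]}.${parts[2]}.0"
}

fun hashIp(ip: String, salt: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
    return digest.digest((salt + ip).toByteArray())
        .joinToString("") { "%02x".format(it.toInt() and 0xff) }
}

fun main() {
    println(truncateIpv4("203.0.113.42"))                // 203.0.113.0
    println(hashIp("203.0.113.42", "per-dataset-salt"))  // 64 hex characters
}
```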
  3. In a New York Times interview, Google CEO Sundar Pichai said that when the company follows Europe's "right to be forgotten" laws, "we are censoring search results because we're complying with the law." Google faced internal and public backlash earlier this year when the Intercept reported the company was working on a censored version of its search engine in China. Europe's "right to be forgotten" laws generally focus on the right to request that a company delete personal data in some circumstances, while the Chinese government is known to censor factual historical information. Google CEO Sundar Pichai offered a new justification for the company's exploration of a censored version of its search engine for people in China: it already censors information elsewhere. In a New York Times interview published Thursday, Pichai compared Europe's "right to be forgotten" laws to censorship when asked about launching a search product in China. "One of the things that's not well understood, I think, is that we operate in many countries where there is censorship. When we follow 'right to be forgotten' laws, we are censoring search results because we're complying with the law," Pichai told the Times. Google has been grappling with how it could reach China's 800 million Internet users since it withdrew its service in 2010 amid censorship and security concerns. Earlier this year, Google faced backlash both internally and from the public when the Intercept reported its apparent plans to build a censored version of its search engine in China. Europe's "right to be forgotten" laws are distinct in important ways from censorship of information by the Chinese government. While "right to be forgotten" laws mainly center on the right of individuals to request that personal data be deleted from the internet or from search results, the Chinese government has been found to suppress factual information that would not be subject to the "right to be forgotten" laws. Through tight control over its media and internet access, China has created the "Great Firewall" that prevents people living there from accessing certain websites or searching for some historical events, like the protests at Tiananmen Square in 1989. The "right to be forgotten" came from a 2014 case decided against Google by the European Court of Justice. The case centered on a Spanish man who wanted Google to remove an old newspaper article about a real estate auction the government had ordered to recover his social security debts. The court decided that Google had to remove the article from its index even though the newspaper could keep it on its site. Now, the "right to be forgotten" is codified in the European Union's General Data Protection Regulation (GDPR), which went into effect earlier this year. Under this part of the regulation, EU citizens have the right to request that internet businesses delete certain personal data under some circumstances. Pichai told the Times he's not convinced a move into China is a top priority. "I'm committed to serving users in China," he said. "Whatever form it takes, I actually don't know the answer. It's not even clear to me that search in China is the product we need to do today." A Google spokesperson did not respond to a request for comment. Source
  4. Today, Google released a report on its latest progress on the anti-piracy front. Among other things, it stresses that pirate site demotion in search results has helped the company stay well within the thresholds agreed with the UK Intellectual Property Office (IPO), which are part of the voluntary deal with UK rightsholders. The entertainment industries have repeatedly accused Google of not doing enough to limit piracy while demanding tougher action. For its part, Google regularly publishes updates on the extensive measures it takes to limit piracy on its platforms. The company has today released the latest iteration of its "How Google Fights Piracy" report. It highlights how the company generates billions in revenue for the entertainment industries while at the same time taking measures to counter copyright infringement. The company explains that its anti-piracy efforts are guided by five principles, starting with more and better legal alternatives. "Piracy often arises when consumer demand goes unmet by legitimate supply. The best way to battle piracy is with better, more convenient, legitimate alternatives to piracy, which can do far more than attempts at enforcement can," Google writes. The other principles include a "follow-the-money" approach, effective and scalable anti-piracy solutions, protection against abuse such as fabricated copyright infringement allegations, and transparency. A large portion of the report describes Google's policies and results regarding web search. The company stresses that it doesn't want to link to any pirated content, but that it relies on copyright holders to pinpoint these URLs. "Google does not want to include links to infringing material in our search results, and we make significant efforts to prevent infringing webpages from appearing," the company writes. "The heart of those efforts is cooperation with creators and rightsholders to identify and remove results that link to infringing content and to present legitimate alternatives." Aside from removing more than three billion URLs in recent years, the search engine also helps to promote legal alternatives. This includes "knowledge cards" (which, incidentally, have featured pirate links too), as well as offering copyright holders SEO advice. Earlier this year we reported that the number of takedown notices was starting to decrease for the first time in years, and Google confirms that observation in its report. "The number of URLs listed in takedown requests decreased by 9%, reversing a long-term trend where the number of URLs requested for removal increased year-over-year," the company writes. Last year, Google was asked to remove 882 million URLs in total, of which 95% were removed. In addition, more than 65,000 sites that were flagged persistently have been demoted in search results, lowering their visibility. This demotion measure is "extremely effective" according to the search giant. "Immediately upon launching improvements to our demotion signal in 2014, one major torrent site acknowledged traffic from search engines had dropped by 50% within the first week," Google notes, citing a TorrentFreak report. Perhaps more importantly, Google's demotion measures also passed the tests that were carried out under the Voluntary Code of Practice that Google entered into alongside Microsoft and major UK rightsholder organizations. This agreement was hailed by the rightsholders as a landmark deal and, reportedly, Google is doing well.
Thus far, four rounds of tests have been carried out to check whether search engines sufficiently limit the availability of infringing content. These are based on guidelines set by the UK's Intellectual Property Office (IPO). Google passed them all. "Thanks to the demotion signal and our other efforts to surface legitimate results in response to media-related queries, Google Search has passed the test every time with flying colors — scoring considerably under the thresholds agreed with the IPO," Google reveals. This suggests that the search engine doesn't have much to fear from the UK Government, which previously warned that "legislative" measures could follow if search engines didn't step up their game. While Google says that it's doing its best, the company is convinced that search is not a major driver of traffic to pirate sites and stresses that it doesn't control what is on the web. The company reiterates its earlier position that removing entire domains from search results is unacceptable, as that would restrict access to legitimate content as well. Similarly, "filtering" the entire web for pirated content is not an option either. "It is a myth that Google could create a tool to filter the web for allegedly infringing material and remove images, video, and text from our search results proactively. Such a system is both infeasible and unnecessary," Google writes. Aside from search, Google has also removed content from its other services including YouTube, Google Drive, and Google Images. Some of these services were extensively abused by streaming sites last year, but Google says it has taken steps to counter this. Finally, no anti-piracy report these days would be complete without a Kodi mention. The streaming software, which is perfectly legal in its own right, is regularly used in combination with third-party piracy add-ons. Google, which banned the term Kodi from its auto-complete feature, says it removed several set-top boxes with "suspicious" add-ons from Google Shopping. In addition, the Play Store is closely monitored to flag apps with pre-installed pirate Kodi add-ons before they appear online. In closing, Google notes that it remains committed to fighting piracy on all fronts, albeit not at all costs. "Through continued innovation and partnership, we're committed to rolling back bad actors while empowering the creative communities who make everything we love about the internet today." A copy of the most recent "How Google Fights Piracy" report is available here (pdf). Source
  5. WASHINGTON (Reuters) - Google's top lobbyist in Washington is stepping aside as the U.S. technology company faces criticism on Capitol Hill on issues including privacy protections and its investment plans in China, the Alphabet Inc unit said on Friday. Former U.S. Representative Susan Molinari, who has run Google's Washington office and its Americas Policy team for nearly seven years, will move to a new job as senior advisor in January, the company said in a statement. Google is seeking a new head of Americas policy, it added. "I am comfortable in making the transition," said Molinari, 60, who had served as vice chair of the House Republican Conference before resigning from Congress in 1997 to become a Saturday morning news anchor on CBS. She added in a statement that she had been "looking for the right time to step back." Alphabet faced criticism from Republicans and Democrats for refusing to send parent company Chief Executive Larry Page or Google CEO Sundar Pichai to a Senate hearing in September, where senators left an empty chair next to Twitter Inc's CEO Jack Dorsey and Facebook Inc's chief operating officer. Pichai in September canceled a trip to Asia to meet with lawmakers and agreed to testify before Congress later this year. Google also has faced numerous accusations this year from President Donald Trump and other Republican leaders that its search results promote content critical of conservatives and demote right-leaning news outlets, a charge that Google denies. Lawmakers have questioned whether it would accept China's censorship demands as it considers reentering the search engine market there. Last month, Vice President Mike Pence called on Google to abandon the Chinese project. Pichai said at a forum on Thursday that the project was "more of an experiment" and reiterated that there is "nothing imminent" on whether it will launch a search engine in China. In June, Google hired Karan Bhatia as global head of policy from General Electric Co. Bhatia served as deputy U.S. Trade Representative for former President George W. Bush. The company also named Pablo Chavez, a Microsoft Corp lobbyist and former senior aide to Republican John McCain, as another senior lobbyist in June. Alphabet said last month it would shut down the consumer version of its failed social network Google+ and tighten its data-sharing policies after announcing that the private profile data of at least 500,000 users might have been exposed to hundreds of external developers. "Google must be more forthcoming with the public and lawmakers if the company is to maintain or regain the trust of the users of its services," three senior Republicans told Google in an Oct. 11 letter. They said they were "especially disappointed" that Google did not disclose the issue at a privacy hearing two weeks earlier. In 2012, Google agreed to pay a then-record $22.5 million civil penalty to settle Federal Trade Commission charges that it misrepresented to users of Apple Inc's Safari internet browser that it would not place tracking "cookies" or serve them targeted ads. Source
  6. Google users who have disabled JavaScript in their browsers will soon be unable to sign in to their Google accounts unless they enable JavaScript for the login process. Google announced yesterday that it will make JavaScript mandatory on sign-in pages and that it will display a "couldn't sign you in" message to users who have it disabled. Internet users disable JavaScript for a number of reasons, and most are well aware of the issues associated with doing so. A browser extension like NoScript blocks JavaScript execution by default to improve user privacy and security on the Internet. With JavaScript blocked, scripts don't run, which reduces or even eliminates tracking, advertising, and script-based attacks. Websites may load faster and users may save bandwidth if JavaScript is disabled or blocked in the browser. Some sites, however, will break if JavaScript is disabled, as they use scripts for some or even all of the functionality they provide. Google explains that it wants to run a risk assessment during sign-in to Google accounts and that it requires JavaScript for that: When your username and password are entered on Google's sign-in page, we'll run a risk assessment and only allow the sign-in if nothing looks suspicious. We're always working to improve this analysis, and we'll now require that JavaScript is enabled on the Google sign-in page, without which we can't run this assessment. The company goes on to explain that only 0.01% of Internet users run browsers in which JavaScript is disabled. While Google does not mention it explicitly, most bots on the Internet run with JavaScript disabled to improve performance and avoid detection mechanisms. Google recently announced the launch of reCAPTCHA version 3, which promises to do away with annoying captchas by running risk assessments and giving sites control over what happens when scores fall below a set threshold. Google changed the sign-in process in 2013 from the traditional username and password form to a multi-page form. The company enabled a link between sign-ins in its Chrome web browser and Google services on the Internet in 2018. Closing Words Some may suggest that Google's motivation for making JavaScript a requirement for account sign-ins is not based entirely on the desire to better protect Google accounts from login-related attacks. Google is an advertising company first and foremost, and the bulk of advertising on the Internet relies on JavaScript. Source: Google sign-ins will require JavaScript soon (Ghacks - Martin Brinkmann)
  7. You shouldn't profit from punishment The lawyer leading the complaints against Alphabet in the EU Android case doesn't sound impressed by the giant ad-slinger's proposed remedy. Not one bit. While appealing the verdict, Google has also proposed separating its Android bundle into two parts, charging for the part that includes the Play app store. And this is the bit that vexes Thomas Vinje, the Clifford Chance lawyer who is legal counsel and spokesperson for FairSearch, a group representing Google's critics. Since access to a broad range of apps is vital to making a competitive phone - see the fate of Windows Phone and BB10 - and no other app store has Google's depth, phone-makers view involvement with the Google Play Store as mandatory. "By tying the remainder of these apps to the Play store, Google is still leveraging the Play store's dominance into the market for those other apps," Vinje wrote in a blog post. "This directly goes against the aim of the decision - to ensure that consumers are exposed to a wider variety of apps which compete with Google's. In addition, the remedy is unlikely to put an end to Google's access to data generated by using apps." The latter refers to the finding by researchers that over 88.4 per cent of Android apps studied transmit data back to the Google mothership (pdf). Not so surprising, as Google is built around an advertising machine, and this machine helps app developers monetise their apps. Vinje also noted that this remedy proposal - like the "auction" remedy Google proposed in response to a complaint about vertical search - creates a new revenue stream for Google. A leaked schedule of prices indicates that Google wants as much as $40 per device for flagships. This would be offset if phone-makers then agreed to bundle the "free" portion of the bundle - the one that includes Google Search. "Further enforcement action may be necessary to end Google's leveraging of its mobile dominance to increase its data hegemony," Vinje first suggested back in July. He's likely to raise the volume of that request now. Source
  8. reCAPTCHA v3 assigns incoming site visitors a risk score and lets webmasters take custom actions based on this score. Google today launched an update to its reCAPTCHA technology, which the company has been offering since 2007 to fight off bots on the world wide web. reCAPTCHA v3, as the new version has been branded, is a complete overhaul of the reCAPTCHA technology that we know and... most of the time hate. The good news is that the new system does not require any user interaction anymore. Gone are the days of reCAPTCHA v1, when everyone was trying to decipher garbled text, and gone are the days of v2, when everyone was getting annoyed at clicking through endless image grids of "store fronts," "roads," and "cars" for up to 2-3 minutes. Instead, reCAPTCHA v3 will use new proprietary Google technology to learn a website's normal traffic and user behavior. Google says that by observing how regular users interact with the website and its sections, it will be able to detect abnormalities and flag bots or undesirable actions. Incoming visitors will be assigned "risk scores" based on their source or the action they want to take on a site. Scores will go from 0.1 (bad) to 1 (good). Site admins can decide how their website reacts based on the risk score. The way they can do this is by adding a new "action" tag to the pages or page sections they want to protect. These "action" tags, in combination with risk scores, will enable Google to take automated actions when possible, such as requiring the user/bot to go through a phone verification to validate their identity before being allowed on a page or before an action (such as posting a comment) is approved. But Google says these "action" tags can also be used with a site's own internal data, such as profile or transaction histories, as an alternative user validation system. The entire system is very complex compared to reCAPTCHA v2, but website owners have had a long time to work out the kinks thanks to a beta period that started last May. The biggest benefit of reCAPTCHA v3 is that website owners can now control and decide how their website reacts to bots and bad traffic, rather than letting Google make those decisions for them, as was the norm with v1 and v2. reCAPTCHA v3 will be generally available later this week, and users can find out more on its official website and in Google's official announcement. Source
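For orientation, the score described above is obtained server-side by posting the token that the browser-side grecaptcha.execute() call produced to Google's siteverify endpoint. The following Kotlin sketch (using Java 11's built-in HttpClient) is illustrative only: the secret key, the 0.5 threshold, and the crude string parsing are placeholders rather than a production implementation.

```kotlin
import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch only: posts the reCAPTCHA token received from the browser to
// Google's verification endpoint and extracts the risk score it returns.
// The 0.5 threshold is an arbitrary example; sites choose their own cutoff.
fun looksLikeAHuman(secretKey: String, clientToken: String): Boolean {
    val form = "secret=" + URLEncoder.encode(secretKey, "UTF-8") +
        "&response=" + URLEncoder.encode(clientToken, "UTF-8")
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://www.google.com/recaptcha/api/siteverify"))
        .header("Content-Type", "application/x-www-form-urlencoded")
        .POST(HttpRequest.BodyPublishers.ofString(form))
        .build()
    val body = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()

    // A real implementation would parse the JSON properly; this regex merely
    // illustrates that the reply carries a "score" field between 0 and 1.
    val score = Regex("\"score\"\\s*:\\s*([0-9.]+)")
        .find(body)?.groupValues?.get(1)?.toDoubleOrNull() ?: 0.0
    return body.contains("\"success\": true") && score >= 0.5
}
```

A site would typically call something like this from its form handler and, below the chosen threshold, fall back to a stronger check (such as the phone verification mentioned above) rather than rejecting the request outright.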
  9. WASHINGTON (Reuters) - Two U.S. senators said Alphabet Inc's (GOOGL.O) disclosure of user data vulnerabilities at Google+ raised "serious questions" over whether it violated a 2011 consent decree with the Federal Trade Commission, potentially exposing Google to penalties. Alphabet said this month it would shut down the consumer version of its failed social network Google+ and tighten its data-sharing policies after announcing the private profile data of at least 500,000 users may have been exposed to hundreds of external developers. The issue, the latest in a run of privacy problems to hit big U.S. tech companies, was discovered and patched in March. The Wall Street Journal reported that Google opted not to disclose the security issue due to fears of regulatory scrutiny, citing unnamed sources and a memo prepared by Google's legal and policy staff for senior executives. Senators Amy Klobuchar and Catherine Cortez Masto wrote to Google Chief Executive Sundar Pichai on Wednesday asking why the company had failed to disclose the issue for six months. The incident raises "serious questions" about whether the company violated a 2011 consent decree with the FTC, they wrote, adding that Google failed to "protect consumers' data and kept consumers in the dark about serious security risks." The company agreed to 20 years of audits to ensure consumer privacy as part of the consent decree with the FTC over the botched rollout of the social network Buzz, which is now defunct. In 2012, Google paid $22.5 million to settle charges it bypassed the privacy settings of customers using Apple Inc's Safari browser and violated the 2011 decree. Google declined to comment. On Oct. 11, three Republican senators also asked the Google unit to explain why it delayed disclosing vulnerabilities with its Google+ social network. "Google must be more forthcoming with the public and lawmakers if the company is to maintain or regain the trust of the users of its services," the Republican letter said. The Republican letter asked whether the Google+ vulnerability had been revealed previously to any federal agencies, including the FTC, and if there were "similar incidents which have not been publicly disclosed?" Pichai agreed last month to testify on privacy and other issues before a House of Representatives panel in November after meeting with lawmakers. Three other Democratic senators also wrote to the FTC this month asking it to investigate Google+ and calling for a "renewed investigation into (Google's) privacy practices across its range of products and activities." Source
  10. Google CEO Sundar Pichai was upbeat Monday when he told WIRED about internal tests of a censored search engine designed to win approval from Chinese officials. It will take more than a government nod for Google to succeed, however. That's not only because of the political tensions raised by President Trump's tariffs on Chinese goods, which analysts say make Google's expansion unlikely. China's competitive—and cooling—search market doesn't seem to offer much space for a US entrant. "Because Google has been absent for years, it has a lot of distance to make up," says Raymond Feng, director of research at Pacific Epoch in Shanghai, which tracks China's internet markets. Google declined to comment on its strategy around search in China. Google offered a censored version of its search engine for China's tightly regulated internet between 2006 and 2010. The company shut it down after complaining that its corporate network was subject to a sophisticated attack from inside China, targeting the Gmail accounts of human rights activists. During Google's absence, China's internet population swelled by nearly 70 percent, to 772 million. One challenge for Google is that it knows relatively little about those consumers. The US company is famed for collecting broad data about online activity, and mining it smartly to sell ads and improve its services. In China, Google would start with a mostly blank slate compared with local rivals such as the dominant Baidu, which serves about 60 percent of all searches, says Feng. In recent years, Baidu's closest challengers, Haosou and Sogou, have succeeded in wresting a portion of China's search market for themselves. But the strategies they've used to do that would not be easy for Google to emulate, says Kai-Fu Lee, who was president of Google in China between 2005 and 2009. The two challengers have grown their user bases by leveraging companion internet products and partnerships with other companies. Haosou has benefited from integration with parent company Qihoo 360's web browser, much like Google supports its search offering by integrating it into its own Chrome browser. Chrome dominates outside China, but the browser's download page is blocked by Chinese authorities, like other Google services such as YouTube and Gmail. Sogou has grown in part thanks to an alliance with Tencent, which owns a major stake in the company and has integrated the search engine into China's dominant mobile messaging service, WeChat. WeChat has become a powerful multi-functional platform used for shopping, banking, and much more. "I spend 90 percent of my time in WeChat, and my wife spends 98 percent," Lee says. "If you want to do a search, Sogou is right there." Sogou also indexes some WeChat content, giving it a powerful lens into what Chinese internet users are doing. Research firm eMarketer estimates that $25 billion will be spent on search ads in China in 2018, about half as much as in the US. However, an English-language Chinese government report on internet trends in 2017 noted that "the search engine industry was faced with great pressure of competition" and that revenue and profit growth have been falling. On Monday, Pichai said Google could win Chinese users by providing more reliable information than existing Chinese search engines. He suggested that a cancer patient researching treatments in China today might be poorly served. "Today people either get fake cancer treatments or they actually get useful information," he said.
The remark was a pointed reference to a 2016 scandal in which Baidu was investigated by China’s government, because a student with cancer died after paying for an unproven treatment discovered in a search ad. It’s a smart strategy, but not original. When Sogou listed on the New York Stock Exchange in 2017, its filings touted a dedicated healthcare search feature, plugged into government-backed health information. Google might benefit from a perception in China that overseas companies can be more trustworthy, one reason many families import baby formula instead of buying local brands. Pichai didn’t mention another case study from China’s search market his team has probably thought about. While Google has been responding to internal and external protests over its China project, Microsoft’s Bing search engine has been quietly serving—and censoring—search results inside China. Asked about the company's willingness to work with censorship, a Microsoft spokesperson said: "We work to comply with laws around the world while adhering to our strong commitments to human rights." Microsoft launched Bing in China in 2009, tweaking its brand to something closer to “biying” (必应) to avoid sounding like a Mandarin word for sickness. Despite the change, and Microsoft’s large and long-established research lab in Beijing, the project has not been healthy. Biying has never won more than a tiny fraction of Chinese searches; in 2015 Microsoft struck a deal that makes Baidu the default search engine for the Edge browser built into Windows. Charlie Smith, the pseudonymous co-founder of GreatFire.org, a nonprofit that monitors Chinese internet censorship, says it shows that China’s internet users don’t necessarily see American search engines as an upgrade. “The Chinese already have that alternative and it's terrible,” he says. Source
  11. Add one more voice to calls for regulation of Google: Barry Diller, chairman and senior executive of Expedia Group. Diller, interviewed by CNBC’s Andrew Ross Sorkin during an event at the Economic Club of New York, was asked how he felt about the large technology companies. Diller, who holds the titles of chairman and senior executive for both Expedia and IAC, immediately responded that Facebook and Google “own, basically, advertising business worldwide.” “Inevitably, all monopolies behave the same, and you’ve got to have regulation of what they do, once they get to that stage,” Diller said. “I would stop them from going into businesses to compete with their own advertisers.” “We spend $3.5 billion a year on Google advertising,” Diller explained, referring to Expedia. While, he said, at one point some of the Google traffic came for free, “they now say, ‘No, you have to pay for everything, and we’re going to compete with you directly in that travel business to offer travel services that essentially will disintermediate you.'” Diller, alluding to the recent European Commission actions against Google, said while regulation may not lead to a fully level playing field, regulation of some sort would eliminate “real bad practices.” But Diller did not elaborate on what that regulation should look like. Google has gradually been expanding into the travel market since 2011, when it first launched Google Flights, later adding booking capabilities. Google Maps and Search have added hotel details and prices, bypassing more traditional online travel agencies. For its part, Bellevue, Wash.-based Expedia Group has a large number of travel businesses, from its flagship Expedia site to Hotels.com, Orbitz, Travelocity, Hotwire, and HomeAway. A survey earlier this year by market research firm HarrisX found that 53 percent of Americans thought that large technology companies should be regulated by the federal government in the way big banks are, but a plurality of 38 percent didn’t think the government was capable of doing so. Google was not the top choice for the highest level of regulation: just 34 percent thought Google should be “heavily regulated,” but that result was 49 percent for Facebook. You can hear Sorkin grill Diller on tech giants and more in the Economic Club of New York interview video, with the regulation discussion starting at 21:20. Source
  12. Sundar Pichai, in his first public acknowledgement of Project Dragonfly, says the effort was an internal project to find out what was possible for Google in China.
Google has been experimenting with a censored search engine that would work in China, but it's not sure if it will ever launch the service, CEO Sundar Pichai said Monday. Pichai, speaking during the Wired25 conference at the SFJazz Center in San Francisco, said Google started the internal project -- dubbed Project Dragonfly -- to see what was possible in China, a country with such strict censorship laws that many US companies, including Google, don't operate their services there. The company has been roiled by reports about Project Dragonfly, the company's apparent plan to build a censored search engine for China, eight years after initially retreating from the country. At the time of the departure, Google co-founder Sergey Brin, who grew up in the Soviet Union, cited the "totalitarianism" of Chinese policies. The new search project has also drawn criticism from Google's workforce. A handful of employees have reportedly quit over the initiative. And about 1,000 employees signed an open letter asking the company to be transparent about the project and to create an ethical review process for it that includes rank-and-file employees, not just high-level executives. Google has said little about the project. However, last month, Keith Enright, Google's chief privacy officer, confirmed during a hearing with the Senate Commerce Committee that there is indeed a Project Dragonfly, but he wouldn't elaborate. Monday's remarks are the first public acknowledgment by Pichai that Google has been working on such a project. Pichai noted Google is constantly "balancing our set of values of providing users access of information, freedom of expression, user privacy, but we also follow the rule of law in every country." China has been a particular challenge, he said. "That's the reason we did the internal project," he said. "We wanted to learn what it would look like if Google were in China." After building the project internally, Google found that it would "be able to serve well over 99 percent of queries," Pichai said. "There are many, many areas we'd be able to provide info that's better than what's available." But he said that Google wants "to balance it with what the conditions would be. It's very early. We don't know whether we would or could do this in China, but we felt it was important for us to explore ... given how important the market is and how many users there are." Pichai's interview at Wired25 comes as Google, which turned 20 years old last month, faces some of the biggest challenges in the company's history. Google and Pichai have been under intense scrutiny recently, especially from Washington, DC. Last month, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey testified before the Senate over election security, disinformation and the perceived biases of the companies' algorithms. Larry Page, CEO of Google's parent company Alphabet, and Pichai, CEO of Google itself, were invited, but both declined, spurring widespread anger from lawmakers. Pichai is expected to give his own congressional testimony next month after the midterm elections. Google has also been hit with allegations of political bias. In August, President Donald Trump accused Google of political bias and having a liberal bent. 
He tweeted that Google's search results are "RIGGED," saying the company is "suppressing voices of Conservatives." He also tweeted a video claiming Google promoted former President Barack Obama's State of the Union addresses every January but not his. Trump added the hashtag #StopTheBias. Google rejected the president's claim, saying its homepage did promote Trump's address in January. The company also explained it didn't promote either Trump's or Obama's address from their first years in office because those speeches aren't technically considered State of the Union addresses. A screenshot from the Internet Archive, which keeps a record of what appears on web domains, backs up Google's explanation.
Working with the Defense Department
Another controversy is Google's decision earlier this month to pull out of bidding for a $10 billion Pentagon contract after employee protests. Google said that the project may conflict with its principles for ethical use of AI. Pichai on Monday said Google plans to keep working with the Defense Department but not when it comes to the use of artificial intelligence for autonomous weaponry. He noted it's not just Google employees who have been concerned about such use. "If you talk about senior researchers working in the field, they're worried that when you're so early with powerful tech, how do you thoughtfully work your way through it?" Pichai said. "Once we started working on AI, we realized it was different from other things we've worked on," Pichai added. "We committed ourselves to a set of AI principles -- kind of articulated our goals on how we would do it. ... and things we would not pursue. ... It's something that will evolve over time, but we need to take it very, very seriously." He said that while Google is a much bigger company than it was 20 years ago, it still has the same values. But because it has so many users, "with that comes a sense of responsibility now," he said. "We're much more deliberate about what we do and how we think about it. When we think about impact, we don't think about users alone." It also considers societies, nonprofits, for-profit businesses and other entities, he said. Source
  13. Yesterday, the lawmakers’ Democratic counterparts requested an FTC investigation into the vulnerability.
Republican leaders from the Senate Commerce Committee are demanding answers from Google CEO Sundar Pichai about a recently unveiled Google+ vulnerability, requesting the company’s internal communications regarding the issue in a letter today. This past March, Google discovered a flaw in its Google+ API that had the potential to expose the private information of hundreds of thousands of users. According to an internal memo first obtained by The Wall Street Journal, officials at Google opted not to disclose the vulnerability to its users or the public for fear of bad press and potential regulatory action. Now, lawmakers are asking to see those communications firsthand. “As the Senate Commerce Committee works toward legislation that establishes a nationwide privacy framework to protect consumer data, improving transparency will be an essential pillar of the effort to restore Americans’ faith in the services they use,” the lawmakers wrote. “It is for this reason that the reported contents of Google’s internal memo are so troubling.” On Wednesday, some of the senators’ Democratic counterparts on the committee reached out to the Federal Trade Commission to demand that the agency investigate the Google+ security flaw, saying in a letter that if agency officials discover “problematic conduct, we encourage you to act decisively to end this pattern of behavior through substantial financial penalties and strong legal remedies.” The Google+ privacy flaw comes amid a heated debate over consumer data privacy kicked off by Facebook’s ongoing Cambridge Analytica scandal. Over the past few weeks, lawmakers have repeatedly heard from tech executives, policy heads, and advocates on how to craft an overarching federal privacy bill. Pichai has stayed away from those discussions, even leaving the company’s seat vacant at a recent Senate Intelligence Committee hearing in which executives from Facebook and Twitter faced off with lawmakers. At the same time, some senators are expressing a new openness to anti-monopoly action against modern tech companies like Google. In an interview published today in The Atlantic, Sen. Mark Warner (D-VA) expressed concern that both Google and Facebook may be too large for effective competitors to emerge. “Is there ever an ability to really break up their market dominance?” Warner said. “Even if you’ve got a better app, you can never match them on data.” By sending these letters and requesting investigations, Congress is beginning to turn what it has heard in hearings into action on behalf of consumers. “Particularly in the wake of the Cambridge Analytica controversy, consumers’ trust in the companies that operate those services to keep their private data secure has been shaken,” today’s letter reads. “At the same time that Facebook was learning the important lesson that tech firms must be forthright with the public about privacy issues, Google apparently elected to withhold information about a relevant vulnerability for fear of public scrutiny.” Google has until October 30th to respond to the senators’ inquiries, just weeks before Pichai is scheduled to testify in front of the House Judiciary Committee following the November midterm elections. An exact date for that hearing has yet to be announced. Source
  14. Our Founding Fathers drafted the Bill of Rights to safeguard our freedoms in the physical world. Today, as Americans are living more of their lives online, the digital age demands that we have new rights to protect our freedoms in the cyber world. To secure these rights, we will have to overcome gridlock and a knowledge gap in Congress. Following the Equifax breach nearly a year ago and the Facebook hearings on Cambridge Analytica six months back, Congress still hasn’t acted. Besides a few hearings that exposed our Senators’ lack of knowledge of the Internet, Congress adjourned two weeks early to extend the midterm campaigns, instead of staying to work on passing an Internet-reform bill. The lack of urgency in Congress has persisted even in the wake of recent revelations that a Facebook security breach exposed 50 million users’ personal information to attackers and Google let third-party app developers access information on users who did not give them permission. The truth is that most elected officials and their legislative staff on Capitol Hill simply lack the necessary expertise to write rules for the Internet. Since I represent Silicon Valley, Democratic Leader Nancy Pelosi tapped me in April to draft a set of principles for an Internet Bill of Rights. Instead of only focusing on privacy and the right to protect one’s own identity and data, I included principles ensuring net neutrality and universal access to the Internet. In total, with the help of consumer groups and World Wide Web founder Tim Berners-Lee, we came up with ten principles that can help define rights in the digital age. I imagine thoughtful Republicans such as U.S. Representatives Mike Coffman and Will Hurd, along with Matt Lira from the White House’s Office of American Innovation, could collaborate on legislation based on these principles. They are as follows: First, you should be able to know and access what personal data of yours companies collect. Instead of reading a long and convoluted legal document, it should be clear and in plain language what information of yours is being collected. Second, you should be able to opt-in and consent when that personal data is being collected and shared. It should be clear exactly what you are consenting to, but such prompts shouldn’t be relentless to the point of fatigue. Third, you should be able to correct or delete incorrect personal data, assuming such action does not violate the First Amendment. This right is not the same as the European Union’s “Right to be Forgotten,” given that we have the First Amendment protecting the press’ free speech in the U.S. In the 2014 case Garcia v. Google, the Ninth U.S. Circuit Court of Appeals wrote that “such a ‘right to be forgotten,’ although recently affirmed by the Court of Justice for the European Union, is not recognized in the United States.” Fourth, if you allow a company to collect your personal data, that data should be properly secured. If for some reason there is a breach, that company must notify you in a timely manner, not only when it’s financially convenient. Last year, despite knowing about the security breach on July 29, Equifax waited until Sept. 7 to notify its customers. Similarly, Facebook shouldn’t have been able to wait years to publicly announce its Cambridge Analytica breach. Fifth, you should be able to have data-portability and move your personal data from network to network. 
It’s your data and you should have the right to move it if you want — including moving your personal network from Facebook or Snapchat to any other social media platform. Sixth, you should have access to a free and open Internet despite efforts by the Trump Administration and FCC Chairman Ajit Pai to dismantle net-neutrality protections. Internet service providers should not be permitted to block, throttle and unfairly favor certain content, applications, services or devices. Seventh, you should be able to access the Internet without the collection of data that is unnecessary for providing the requested service. An Internet service provider reasonably needs to know your name and address. But it’s hard to imagine why a provider would need to collect your Internet browsing habits other than to sell your data. Eighth, you should be able to access multiple viable, affordable Internet platforms, services and providers with clear and transparent pricing. According to the FCC, 30% of Americans have only one choice for broadband service. Thirteen percent don’t have access to a provider at all. All Americans must have access to the Internet in today’s digital world, and the market needs competition to drive affordable prices. Ninth, just like you can no longer be discriminated against at the lunch counter, you should have the right to not be exploited or unfairly discriminated against based on your personal data. For instance, advertisements for high-paying jobs should not be disproportionately shown to men, and if you search for black names and fraternities, you shouldn’t be more likely to see advertisements for arrest records. Tenth, in the case where an entity collects your personal data, it must adopt cybersecurity best practices. There should be an understanding and trust that your privacy and data will be protected. Entities need to be held legally responsible for not implementing reasonable business practices. My hope is that these ten rights will begin the much-needed and long-overdue conversation in Congress to guide a legislative solution that restores our privacy and protection online. The American people can no longer wait while their data is being collected, shared and stolen on the web. The Internet can be a tool for more freedom and prosperity, but only if proper rules and guidelines exist. Our constituents tasked us to make those rules. It is now up to Congress to answer that call and bring our laws into the 21st Century. Source
  15. Internet giant Google sees itself as “The Good Censor” upholding “safety” on the internet against totally-unfettered speech, according to a leaked internal briefing of the same name obtained by Breitbart. On Tuesday, Breitbart revealed that the 85-page document, dated March 2018 and billed as a presentation on how the company can “reassure the world that it protects users from harmful content while still supporting free speech,” identifies Google, Facebook, and Twitter as “control(ling) the majority of online conversations.” Its intended audience is unknown, but substantial production values are apparent in its graphics and visual aids, as well as its self-declared “several layers of research.” The briefing says Google consulted 35 “cultural observers” and seven “cultural leaders” from seven countries on five continents, and interviewed MIT Tech Review editor-in-chief Jason Pontin, Atlantic staff writer Franklin Foer, and George Washington University cybersecurity expert Kalev Leetaru. The briefing, which can be read in its entirety here, opens by discussing how “free speech” has “become a social, economic, and political weapon,” leading internet users to ask “if the openness of the internet should be celebrated after all.” Recent global events such as online “fake news” in the United States’ 2016 presidential election and the “rise of the alt-right” have “undermined” the original “utopian narrative” of the internet as a place for unfettered competition of ideas. It identifies the ease, accessibility, and anonymity of online communication, as well as the ease of joining like-minded communities, as wearing down social norms, reinforcing groupthink, and all but eliminating consequences for hostile and dishonest behavior. “The ‘little guys and girls’ can now be heard - emerging talent, revolutionaries, whistleblowers and campaigners. But ‘everyone else’ can shout loudly too - including terrorists, racists, misogynists and oppressors,” the document says. “And because ‘everything looks like the New York Times’ on the net, it’s harder to separate fact from fiction, legitimacy from illegitimacy, novelty from history, and positivity from destructivity.” Notably, it identifies several ways tech firms have been “behaving badly,” such as “incubating fake news,” letting automated review systems innocently censor legitimate content, insufficiently explaining how their algorithms work, selective enforcement, and agreeing to help foreign governments censor their people. But while the document superficially concedes several common grievances, conservatives doubt Google’s “inadvertent error” explanations in light of previous leaks of top Google insiders’ political biases, and argue that social media’s concern with selectively-defined “fake news” is a pretext for silencing truthful dissenters from conventional wisdom. Additionally, Google cites Facebook’s complicity in Turkish and Pakistani censorship, but omits its own cooperation with Chinese state censors. Other hints of the company’s partisan leanings include listing President Donald Trump’s 2016 claim that Google searches were biased toward his competitor Hillary Clinton as a “conspiracy theory,” and placing a Trump campaign image above a passage about Russian election interference. 
“Tech firms are performing a balancing act between two incompatible positions,” the presentation claims: an “unmediated ‘marketplace of ideas,’” and “well-ordered spaces for safety and civility.” It admits that tech companies have “gradually shifted” toward “censorship and moderation,” a more “European tradition” that “favors dignity over liberty and civility over freedom.” Notably, it admits that Google, YouTube, Facebook, and Twitter have shifted away from their roles as “aggregators” in favor of acting more like “editors” and “publishers.” It does not, however, address the fact that the Congressionally-granted immunity tech firms currently enjoy from liability for the content they allow – something hailed as a key to their growth early in the document – is predicated on behaving like the former instead of the latter. Ultimately, the document doesn’t decide whether companies should continue toward the European model or reverse course, but simply calls for greater transparency, better communications, and clearer rules, as well as for policing “tone instead of content” without “tak(ing) sides.” A Google spokesperson told Breitbart the document is mere “internal research” rather than any official position, but such assurances come as little comfort to those who accuse Google, Facebook, and Twitter of discriminating against conservatives on their platforms and services. "This story confirms our worst fear,” responded Media Research Center president Brent Bozell, who has been working to organize conservatives against online censorship. “Contrary to Google's public statements and what they have said to us in private discussions, Google is in the censorship business and apparently the lying business as well. We're going to be meeting with our coalition partners immediately and we will announce next moves very soon." Bozell’s coalition argues that tensions between speech and abuse can be resolved by simply mirroring the First Amendment to the U.S. Constitution, as currently interpreted by the U.S. Supreme Court. “That standard, the result of centuries of American jurisprudence, would enable the rightful blocking of content that threatens violence or spews obscenity, without trampling on free speech liberties that have long made the United States a beacon for freedom,” the coalition says.
The Good Censor - GOOGLE LEAK https://www.scribd.com/document/390521673/The-Good-Censor-GOOGLE-LEAK Source
  16. I Am Negan

    Google and vpn

    I had my VPN set on Canada and then when I went to Google the address was www.google.com.br instead of www.google.com. Why does it do that?
  17. Google's $100 billion digital ad business is coming under new leadership. After more than five years as Google's head of advertising and commerce, Sridhar Ramaswamy is leaving the internet giant to join a VC firm. Prabhakar Raghavan, who currently leads Google's business applications unit, will take over the role, Bloomberg reports. Ad sales bring in, by far, the most revenue for Google and its parent company Alphabet. In the second quarter of this year, Google's total advertising revenue was $28.1 billion, up from $22.7 billion a year ago. Alphabet's total quarterly revenue came to $32.7 billion. Raghavan will now be responsible for all product development and engineering related to ads. Meanwhile, ad sales and business partnerships fall under the purview of Google's Chief Business Officer Philipp Schindler. Raghavan told Bloomberg that his priority is maintaining continuity within the ad businesses. "The ecosystem remains strong. The business remains strong. The team is fantastic," he said. "My focus is on how to take that fantastic machine and keep it going rather than being a bull in a china shop." Source
  18. Part of Google’s growing effort to build search-based ad tools outside its core search engine
Google is expanding its use of lucrative search-based advertising tools on YouTube, to help advertisers target potential customers as they search for everything from products to movie trailers on the video site. The news, announced this morning at Advertising Week and reported by CNBC, marks a shift in how Google treats YouTube. Increasingly, the company is relying on YouTube as an extension of its core search engine instead of a separate entity. To help drive home the point, Google representatives told the crowd at Advertising Week that YouTube is the second most popular search engine in America, behind Google Search. The logic makes sense, and Google says it has data to prove that many people who search for products, movies, and other items on Google Search then head over to YouTube to watch reviews, unboxing videos, and other content related to the product. From there, Google says it can effectively target those customers. For instance, searching for movie reviews on Google Search and then heading over to YouTube to watch a trailer may trigger an ad for showtimes at your local AMC theater. Google is calling the tool “ad extensions for video.” For Google, expanding its ad business is a key component to staving off competition from Facebook and, increasingly, Amazon, which has been building a powerful, product-based ad business based on Amazon product searches. Today, Google makes nearly $100 billion a year. A majority of that revenue comes from ads, and a majority of that ad revenue is search-based advertising powered by Google’s AdWords, AdSense, and DoubleClick technologies. However, Google’s dominance in web advertising is tied to the strength of the web, and more companies, like Amazon and Facebook, are cutting into that by locking customers and the behaviors that would drive targeted ads into their own ecosystems. Every time an internet user spends time on Facebook or searches for products directly on Amazon instead of Google, it’s a potential loss for the search giant’s ad business. So it makes sense that Google would turn to YouTube, which is the country’s most popular video site and could help power a new breed of search advertising that takes into account behaviors across a bigger swath of Google’s network. Only last year did Google start letting advertisers target ads on YouTube based on that individual user’s search habits on the site itself, as opposed to just running ads against the type of content a user was watching. Now, it looks like Google is integrating its search ad tech more directly into the YouTube ecosystem. Of course, that won’t come without some concerns about privacy, as users who like to keep their Google and YouTube habits separate from one another might find it ever more difficult to use the company’s products without feeling like ads are following them everywhere. Just last month, Google found itself mired in a controversy over Chrome version 69, which would automatically log users into the browser if they logged into an accompanying Gmail or YouTube account through Chrome. The change would allow Google to more easily track behaviors across all of those sites, tie them to a single account, and more effectively target ads. The shift caused an outcry from critics and privacy advocates, and Google has since announced a way to disable the feature in Chrome 70, due out later this month. 
Still, it was a telling demonstration of Google’s eagerness to turn its web of products, nearly all of which command more than 1 billion users each, into a more cohesive network to rival the amount of data and personalization that Facebook has been able to offer advertisers for years now. It looks like that won’t slow down any time soon, and that we should expect more ad-based Google products to increasingly work together in the future. Source
  19. Sign up to potentially get access to the test starting Oct. 5. In a surprise announcement Monday, Google revealed a partnership with Ubisoft to bring the upcoming Assassin's Creed Odyssey to your Chrome browser. On the same day the game comes out on PlayStation 4, Xbox One and PC, a limited number of users will be able to put Google's new streaming technology, Project Stream, to the test in what could be a big step forward for efforts to bring AAA games to streaming platforms. "The idea of streaming such graphically-rich content that requires near-instant interaction between the game controller and the graphics on the screen poses a number of challenges," Google said in its blog post announcing Project Stream. "When streaming TV or movies, consumers are comfortable with a few seconds of buffering at the start, but streaming high-quality games requires latency measured in milliseconds, with no graphic degradation." Google isn't alone in bringing Assassin's Creed Odyssey to streaming platforms, with Ubisoft also experimenting with bringing the game to the Nintendo Switch in Japan via cloud servers. The announcement follows longstanding rumors about Google making a more serious foray into the gaming industry, dubbed "Yeti" by early reports. You can watch a 1080p, 60fps video of Assassin's Creed Odyssey captured from Project Stream below, and sign up for the limited beta here. And a word of caution for those with slow internet connections: the test is geared toward participants with a home internet connection of at least 25 megabits per second. Source
  20. Google today announced a number of upcoming changes to how Chrome will handle extensions that request a lot of permissions, as well as new requirements for developers who want to publish their extensions in the Chrome Web Store. It’s no secret that no matter which browser you use, extensions are one of the main vectors that malicious developers use to gain access to your data. Over the years, Google has improved its ability to automatically detect malicious extensions before they ever make it into the store. The company also made quite a few changes to the browser itself to ensure that extensions can’t wreak havoc once they have been installed. Now, it’s taking this a bit further. Starting with Chrome 70, users can restrict host access to their own custom list of sites. That’s important because, by default, most extensions can see and manipulate any website you go to. Whitelists are hard to maintain, though, so users can also opt to only provide an extension with access to the current page after a click. “While host permissions have enabled thousands of powerful and creative extension use cases, they have also led to a broad range of misuse — both malicious and unintentional — because they allow extensions to automatically read and change data on websites,” Google explains in today’s announcement. Any extensions that request what Google calls “powerful permissions” will now also be subject to a more extensive review process. In addition, Google will also take a closer look at extensions that use remotely hosted code (since that code could be changed at any time, after all). As far as permissions go, Google also notes that in 2019, it’ll introduce new mechanisms and more narrowly scoped APIs that will reduce the need for broader permissions and that will give users more control over the access that they grant to their extensions. Starting in 2019, Google will also require two-factor authentication for access to Chrome Web Store developer accounts to make sure that a malicious actor can’t take over a developer’s account and publish a hacked extension. While that change is still a few months out, starting today, developers are no longer allowed to publish extensions with obfuscated code. In itself, obfuscated code isn’t a bad thing. Developers often use this method of scrambling their JavaScript source code to hide their code, which would otherwise be in clear text and easy to steal. That also makes it very hard to figure out what exactly the code does, and 70 percent of malicious extensions and those that try to circumvent Google’s policies use obfuscated code. Google will remove all existing extensions with obfuscated code in 90 days. It’s worth noting that developers will still be allowed to minify their code to remove whitespace, comments and newlines, for example. Source
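To make the per-origin permission idea above a little more concrete, here is a rough sketch of how an extension can ask the user for access to a single site at runtime through the existing chrome.permissions API, instead of declaring blanket host access in its manifest. This is illustrative only, not code from Google's announcement: the function name is made up, and it assumes a Manifest V2-style manifest.json that lists "tabs" under "permissions" and a broad pattern such as "https://*/*" under "optional_permissions" so that runtime requests are allowed.

```typescript
// Illustrative sketch: request host access for just the current tab's origin,
// prompting the user, rather than shipping with "<all_urls>" access.
// Assumes @types/chrome and a Manifest V2 manifest as described above.

function requestAccessToCurrentSite(): void {
  chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    const tab = tabs[0];
    if (!tab || !tab.url) {
      return; // no usable URL (e.g. a chrome:// page)
    }
    const origin = new URL(tab.url).origin + "/*";

    // Ask the user to grant access to this one origin only.
    chrome.permissions.request({ origins: [origin] }, (granted: boolean) => {
      console.log(
        granted
          ? `User granted access to ${origin}`
          : `User declined access to ${origin}`
      );
    });
  });
}
```

Chrome 70's own per-site controls live in the browser's settings and apply no matter how an extension was written; the snippet is only meant to show what per-origin, user-approved access looks like from the extension developer's side.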
  21. GOP lawmakers raised questions about alleged bias in Google’s search function, such as the criteria for determining rankings of results.
Google chief executive Sundar Pichai paid a rare visit to Washington on Friday to defend the search giant against allegations that it silences conservatives online, part of an effort to defuse political tensions between the company, Congress and the Trump administration ahead of a key hearing on Capitol Hill later this year. Weeks after President Trump accused Google of having “rigged” search results, the company’s leader paid the White House a visit, meeting on Friday with Larry Kudlow, one of the president’s top economic advisors, a spokeswoman for the White House confirmed. During the private session, which focused on “issues impacting internet platforms and the economy in general,” Pichai agreed to attend an upcoming “roundtable with the President and other internet stakeholders,” the White House announced. The spokeswoman said details would be forthcoming, including other tech giants invited to the meeting. Previously, Kudlow said the Trump administration was open to regulating search results but the president later seemed to distance himself from the idea. Earlier Friday, Pichai also sought to face his critics on Capitol Hill. In a meeting with a dozen House Republicans, GOP Leader Kevin McCarthy (Calif.) stressed to Pichai that party lawmakers are concerned about “what’s going on with transparency and the power of social media today,” particularly given the fact that Google processes 90 percent of the world’s searches. Google has long denied that it censors conservatives. Pichai explained during the roughly hour-long meeting with lawmakers how the company sets up its teams and codes its algorithms to prevent bias, according to a person who attended the meeting and spoke on the condition of anonymity. Pichai’s trip to Washington comes in anticipation of his appearance at a hearing later this fall, at which lawmakers have stressed they will press him not only on charges of censorship but also on other issues facing the company — including the privacy protections it affords users and its ambitions to relaunch its search engine in heavily censored China. After huddling with lawmakers, Pichai described his conversations as “constructive and informative,” adding in a statement that Google is “committed to continuing an active dialogue with members from both sides of the aisle, working proactively with Congress on a variety of issues, explaining how our products help millions of American consumers and businesses, and answering questions as they arise." But Pichai’s personal outreach — the beginning of more to come — caps off a bruising month for Google in the nation’s capital. Along with Trump’s recent attacks, fears about the tech industry’s size and power dominated a meeting this week between the Justice Department and state attorneys general, where some officials expressed an openness to investigating Google and its tech industry peers on privacy and antitrust grounds. Others in Washington question whether Google and the rest of the tech industry are prepared to stop foreign governments, like Russia, from spreading propaganda online ahead of November’s elections. Yet Google infuriated lawmakers when it opted against sending Pichai or Larry Page, the chief executive of parent company Alphabet, to testify at a Senate hearing this month on the matter. 
Instead, lawmakers left an empty chair at the witness table to reflect Google’s absence and pilloried the company anyway on a range of issues. In a sign that some Democrats and Republicans remain miffed at Google, Sens. Richard Burr (R-N.C.) and Mark Warner (D-Va.) — the leaders of the panel that had asked Google to testify — declined to meet with Pichai this week, according to two people familiar with the matter who were not authorized to speak on the record. Burr’s office declined to comment; a spokesman for Warner confirmed the matter. Instead, Pichai huddled beginning Thursday with lawmakers like House Democratic Leader Nancy Pelosi (Calif.) and Sen. Brian Schatz (D-Hawaii), spokesmen confirmed. Schatz used the opportunity to press Google on its privacy practices, his aide said, as he and other lawmakers weigh whether they should pass new regulations restricting the way tech giants collect and monetize users' data. At Friday’s meeting, Rep. Bob Goodlatte (R-Va.), the chairman of the House Judiciary Committee, said he and his peers had “served notice” to Pichai to expect questions on topics including “antitrust issues” and allegations of conservative bias. The date of the hearing in front of the panel has not been announced. “There's a lot of interest in their algorithm, how those algorithms work, how those algorithms are supervised,” Goodlatte said. Some Republicans also pressed Pichai on Google’s ambitions in China, though Pichai stressed that Google is far from a final decision on whether to launch a censored version of its search engine there, according to Goodlatte. Republicans dialed up pressure again on tech giants over alleged anti-conservative bias and other issues on Friday, as the White House announced a roundtable to be attended by President Trump, Google Chief Executive Sundar Pichai and other “internet stakeholders.” A White House spokeswoman said Mr. Trump’s top economic adviser, Larry Kudlow, invited Mr. Pichai to attend the roundtable during a meeting on Friday, and Mr. Pichai accepted. Details of the roundtable will be announced later. The White House described the meeting Friday as positive and productive and, according to a person familiar with the matter, is aiming to bring executives from other tech companies including Facebook Inc., Amazon.com Inc. and Apple Inc. to the roundtable. The president in late August launched a barrage of criticism at Google and other big internet platforms. “Google search results for `Trump News’ shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” the president tweeted. “Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see.” On Friday, Messrs. Kudlow and Pichai “discussed a range of issues impacting internet platforms and the economy in general,” said deputy White House press secretary Lindsay Walters. “Mr. Kudlow enjoyed meeting Mr. Pichai and exchanging views.” Earlier Friday, a top House Republican leader vowed to seek more transparency from Google and other big tech companies after a meeting with Mr. Pichai at which GOP lawmakers aired concerns about anti-conservative bias. “As big tech’s business grows, we have not had enough transparency and that has led to an erosion of trust and perhaps worse—harm to consumers,” said Rep. 
Kevin McCarthy (R., Calif.), the House majority leader, following the closed meeting, which included about a dozen Republicans. Mr. McCarthy promised to “continue to work toward that goal over the coming weeks and months.” Mr. Pichai, for his part, said Google, a unit of Alphabet Inc., will stay engaged with lawmakers. He said he would personally appear before the House Judiciary Committee, a panel that already has held several hearings on alleged anti-conservative bias among tech platforms. The latest was on Thursday. “We remain committed to continuing an active dialogue with members from both sides of the aisle, working proactively with Congress on a variety of issues, explaining how our products help millions of American consumers and businesses, and answering questions as they arise,” Mr. Pichai said. During the meeting, which attendees described as respectful, GOP lawmakers raised a number of questions relating to alleged bias in Google’s search function, such as the criteria used to determine rankings of results. “People want a greater level of understanding of how these products are constructed,” said one person who attended the meeting. There was “interest in hearing and learning more about that.” For Mr. McCarthy, who is battling to hold on to the GOP majority in the House in the November congressional elections, the meeting represented a chance to show he can make an impact on an issue that worries many conservatives—the influence tech companies can wield in shaping publicly available information and debate. For Google, which has been criticized for being too closely aligned with Democrats and dismissive of conservatives, the meeting represented a chance to show it recognizes it needs to do more outreach. Google was criticized recently for failing to make a senior executive available for a recent hearing. Following the meeting, Mr. McCarthy went out of his way to praise Mr. Pichai’s appearance and said Google also would be appearing at a Judiciary Committee hearing this fall to discuss alleged bias and other concerns. “This conversation is an important one,“ Mr. McCarthy said. “It will continue after today.” It remains unclear how much lawmakers can do to address the alleged bias issue, given First Amendment constraints on government regulation of speech on the internet. Google and other platforms have denied they use their search, news and other functions to promote particular political points of view. Sources Here and Here
  22. Google pays Apple to be the default search engine on the iPhone. Goldman Sachs estimates these payments are now over $9 billion per year. According to the estimate, Apple makes more money from these payments than from iCloud or Apple Music. Google pays Apple so that it remains the default search engine on the iPhone's Safari browser. Although neither Google nor Apple discuss the terms of the agreement, most analysts believe the payments are billions of dollars per year. In fact, the so-called "traffic acquisition cost" payments may be bigger than anyone on Wall Street thinks, Goldman Sachs analyst Rod Hall wrote in a note distributed to clients on Friday. Google could pay Apple $9 billion in 2018, and $12 billion in 2019, according to the Goldman estimate. "We believe this revenue is charged ratably based on the number of searches that users on Apple’s platform originate from Siri or within the Safari browser," Hall wrote. "We believe Apple is one of the biggest channels of traffic acquisition for Google," he continued. Goldman's report models Google's payments to Apple as a fraction of the money Google makes on iOS through paid searches; the analysts worked backwards from iOS market share, added a premium, and applied a rate based on previous Google disclosures. Goldman also noted that Google said last year that it had renegotiated TAC terms, which some guessed meant that Apple's rate had increased. In 2017, Bernstein analyst Toni Sacconaghi estimated that Google was paying Apple $3 billion per year. The only hard number we know for sure is that Google paid Apple $1 billion in 2014, thanks to court filings. Apple has recently been drawing investor focus to its growing "services" line item, which accounted for 13% of total revenue in Apple's fiscal 2017. "We believe the transformation to services, led by growth in both installed devices and service revenue per device, is tracking better than investor expectation," JPMorgan analyst Samik Chatterjee wrote in a note on Thursday. When Apple executives talk about Services, they like to focus on the fee Apple collects from software sold on the App Store or the money the company makes through subscriptions like Apple Music and iCloud. But according to the Goldman model, TAC fees account for 24% of the services business, and AppleCare, Apple's repair and warranty program, accounts for 17% of the $31.3 billion in services revenue that Apple collected last year. "We don’t believe Apple Services should be valued standalone at a higher multiple than the combined company," Hall concludes. Goldman Sachs has a 12-month price target of $240 on Apple. Source
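For readers curious what "working backwards" from inputs like market share and a revenue-share rate looks like in practice, below is a minimal back-of-envelope sketch in the same spirit as the estimate described in the item above. Every figure, field name and function here is a placeholder assumption invented for illustration; this is not Goldman's model and none of the numbers are disclosed Google or Apple figures.

```typescript
// Hypothetical back-of-envelope estimate of a traffic-acquisition-cost (TAC)
// payment: take total search ad revenue, attribute a share to iOS, attribute a
// share of that to Safari/Siri defaults, then apply a revenue-share rate.
// All inputs below are made-up placeholders for illustration only.

interface TacAssumptions {
  googleSearchAdRevenue: number;   // annual search ad revenue, USD
  iosShareOfSearchRevenue: number; // fraction of that revenue originating on iOS
  safariSiriShareOnIos: number;    // fraction of iOS searches via Safari/Siri defaults
  tacRate: number;                 // fraction of that revenue paid back to Apple
}

function estimateTacPayment(a: TacAssumptions): number {
  const iosSearchRevenue = a.googleSearchAdRevenue * a.iosShareOfSearchRevenue;
  const defaultDrivenRevenue = iosSearchRevenue * a.safariSiriShareOnIos;
  return defaultDrivenRevenue * a.tacRate;
}

// Placeholder inputs purely to show the shape of the calculation.
const example: TacAssumptions = {
  googleSearchAdRevenue: 100e9,
  iosShareOfSearchRevenue: 0.35,
  safariSiriShareOnIos: 0.8,
  tacRate: 0.3,
};

console.log(
  `Estimated annual payment: $${(estimateTacPayment(example) / 1e9).toFixed(1)}B`
);
```

The point is only that small changes to any one assumption (the attributed iOS share, or the rate) swing the headline number by billions, which is why analyst estimates of the payment differ so widely.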
  23. Belgium plans to sue Google over the tech giant’s refusal to blur sensitive military sites and nuclear power plants on the company’s various mapping platforms. Military leaders in Belgium have not yet filed a formal complaint but confirmed to Reuters that they intend to sue.
File photo of the nuclear power plant of Tihange in Belgium
“It’s a shame the Belgium Department of Defense have decided to take this decision. We have been working closely with them for more than two years, making changes to our maps where asked and legal under Belgian law. We plan to continue working with them in that spirit of cooperation,” Michiel Sallaets, Google’s spokesperson in Belgium, told Gizmodo in a statement via email. The platforms at issue are Google Earth, Google Maps, and Street View—all of which rely on elements of third-party imaging, which appears to be the crux of Google’s pushback. In short, Google would have to alter images that they’re receiving from other technology companies that do satellite imaging. Google refuses to do that for reasons still unknown. The delicate issues surrounding privacy and mapping software aren’t new. American law generally finds that citizens don’t have an expectation of privacy from anything that can be seen from the street—a standard largely upheld in state courts. But European laws are different, and countries on the other side of the Atlantic Ocean generally have stricter laws regarding tech and privacy. The only American municipalities that have successfully kept Google’s mapping services at bay have been cities with private roads. The roads of North Oaks, Minnesota, are considered private property, and the town successfully got Street View images taken down in 2008 because Google’s vehicles were considered to be “trespassing.” But Google is also eager to work with countries and militaries to protect their sensitive information if the only alternative is being denied access to their market. China is perhaps the most high-profile example. Google is reportedly working on a censored search engine for the country of almost 1.4 billion people. Some see this as a compromise of its supposed dedication to the ideals of liberal democracy and freedom of speech. But when you’re talking about acquiring massive amounts of money, Google has shown that it isn’t going to let high-minded ideals get in the way. Source
  24. Chrome is a Google Service that happens to include a Browser Engine
Starting with Chrome 69, logging into a Google Site is tied to logging into Chrome. This is typically the topic where things are complex enough that tweets or 500 character Mastodon toots don’t do it justice. I’d also mention that I prefer to avoid directly linking people’s posts on this, because I dislike the practice of taking discussions out of their original audience and treating them as official or semi-official communications from a given company. So what changed with Chrome 69? From that version, any time someone using Chrome logs into a Google service or site, they are also logged into Chrome-as-a-browser with that user account. Any time someone logs out of a Google service, they are also logged out of the browser. Before Chrome 69, Chrome users could decline to be logged into Chrome entirely, skipping the use of Sync and other features that are tied to the login, and they could use Chrome in a logged-out state while still making use of GMail for example. Just to spell it out: this means Google logins for Chrome are now de facto mandatory if you ever log in to a Google site. When someone in the security community raised this, it turned out that apparently this is intended behaviour from Google’s side as confirmed by multiple googlers, and they were wondering why the new behaviour might feel abusive to some people. Some folks working on Chrome pointed out that most people can’t differentiate between logging into a Google Site and logging into Chrome, and this has led to problems with shared computers, where person A logs into GMail, but person B is logged into Chrome. This prompted Chrome developers to come up with the change that erases the distinction entirely. It is at this point that I should note that I don’t personally use Chrome, as I felt it was too closely corporate Google even before this change. This is also not a post arguing that “some users can tell the difference, therefore…”; I do believe software should be written with the common users in mind. Interestingly, the common user belief that strongly equates Chrome with a Google Service (and not an application or tool) is probably the more accurate view of Chrome, post release 69. It’s worth wondering from where users got that impression and why. So if this change is just about bringing Chrome in line with what most users believe anyway, what’s the fuss? Perhaps it’s not about what people believe, but what is right. Perhaps Google doesn’t want Chrome, currently having majority browser market share, to be a neutral platform. A lot of people, developers especially, believe that Chrome is a Google-influenced but more or less neutral tool, and then this widespread belief has to be reconciled with the Chrome-as-a-service thinking. Violating the content vs browser separation layer doesn’t just conform to what a lot of users believe, it also ties what’s happening inside the browser to Google on an unprecedented level, throwing the neutrality of Chrome as a platform into question. What’s the next thing that Google and only Google can make Chrome do? Concerned about shared computers but you’re not Google? There is no neutral API to log someone out from Chrome and prevent data from being synced if it’s about person A logging into Facebook in person B’s Chrome profile. Sidenote: Most Google services have, for me, this in common with Facebook: these services are too deeply integrated and impossible to use in part or isolation. 
It’s either the entire system or nothing, based on how the question of consent is approached. You would like to use GMail (logged in obviously) but Google search, YouTube, Chrome etc without a login? No can do. You selected strict settings in Facebook for your profile data? You’re just an API/permission redesign away from having your choices nullified. Part of me feels that this Chrome shared computer issue that Googlers mentioned is real, but it’s also just too convenient to solve this by tying Chrome closer to Google, you know?
update:
- Compare the basic (local) and signed-in mode in Google Chrome’s privacy policy. Silently upgrading from basic mode to signed-in mode makes quite a large difference.
- Chromium is apparently also affected by this.
- There is a workaround to disable this behaviour. I deliberately don’t include it here, as that relies on internal flags and the point of this post wasn’t to try to revert this change, but rather to think about Chrome’s direction in general.
Source
  25. Google is rolling back its ban on cryptocurrency advertisements – following a similar move made by Facebook earlier this summer, CNBC reports. Google in March was among the first of the major platforms to announce it would no longer run cryptocurrency ads, due to an abundance of caution around an industry where there’s so much potential for consumer harm. Facebook, Twitter, and even Snapchat had also banned cryptocurrency ads, for similar reasons. But Facebook moved away from its blanket ban this June, when it said it would no longer ban all cryptocurrency ads, but would rather allow those from “pre-approved advertisers” instead. It excluded ads that promoted binary options and initial coin offerings (ICOs), however. Google is now following suit with its own policy change. The update was announced today, we’ve confirmed. Google’s policy still bans ICOs, wallets and trading advice, CNBC reports, citing Google’s updated policy page which points to a list of banned products. But the October 2018 policy update says that “regulated cryptocurrency exchanges” will be allowed to advertise in the U.S. and Japan. To do so, advertisers will have to be certified with Google for the specific country where their ads will run, a process that begins in October. The policy will apply to all accounts that advertise these types of financial products, Google says. Banning cryptocurrency ads on the part of the major platforms was a good step in terms of consumer protection, due to the amount of fraud and spam in the industry. According to the FTC, consumers lost $532 million to cryptocurrency-related scams in the first two months of 2018. An agency official also warned that consumers could lose more than $3 billion by the end of the year, because of these problems. But for ad-dependent platforms like Facebook and Google, there’s so much money to be made here. It’s clear they wanted to find a way to let some of these advertisers back in. Google parent Alphabet makes around 86% of its total revenue from ads, CNBC noted, and booked over $54 billion in ad revenue in the first half of the year. Google has not yet responded to a request for comment. Source