Showing results for tags 'facebook'.

Found 470 results

  1. Facebook and the Folly of Self-Regulation The company's new review board is designed to move slowly and keep things intact. Photo-Illustration: Sam Whitney; Getty Images My late colleague, Neil Postman, used to ask about any new proposal or technology, “What problem does it propose to solve?” When it comes to Facebook, that problem was maintaining relationships over vast time and space. And the company has solved it, spectacularly. Along the way, as Postman would have predicted, it created many more problems. Last week, Facebook revealed the leaders and first 20 members of its new review board. They are an august collection of some of the sharpest minds who have considered questions of free expression, human rights, and legal processes. They represent a stratum of cosmopolitan intelligentsia quite well, while appearing to generate some semblance of global diversity. These distinguished scholars, lawyers, and activists are charged with generating high-minded deliberation about what is fit and proper for Facebook to host. It’s a good look for Facebook—as long as no one looks too closely. What problems does the new Facebook review board propose to solve? In an op-ed in The New York Times, the board’s new leadership declared: “The oversight board will focus on the most challenging content issues for Facebook, including in areas such as hate speech, harassment, and protecting people’s safety and privacy. It will make final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram (which Facebook owns).” Only in the narrowest and most trivial of ways does this board have any such power. The new Facebook review board will have no influence over anything that really matters in the world. It will hear only individual appeals about specific content that the company has removed from the service—and only a fraction of those appeals. 
The board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies. It has no influence on the sorts of harassment that regularly occur on Facebook or (Facebook-owned) WhatsApp. It won’t dictate policy for Facebook Groups, where much of the most dangerous content thrives. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook. This board has been hailed as a grand experiment in creative corporate governance. St. John’s University Law Professor Kate Klonick, the scholar most familiar with the process that generated this board, said, “This is the first time a private transnational company has voluntarily assigned a part of its policies to an external body like this.” That’s not exactly the case. Industry groups have long practiced such self-regulation through outside bodies, with infamously mixed results. But there is no industry group to set standards and rules for Facebook. One-third of humanity uses the platform regularly. No other company has ever come close to having that level of power and influence. Facebook is an industry—and thus an industry group—unto itself. This is unprecedented, though, because Facebook ultimately controls the board, not the other way around. We have seen this movie before. In the 1930s the Motion Picture Association of America, under the leadership of former U.S. Postmaster General Will Hays, instituted a strict code that prohibited major Hollywood studios from showing, among other things, “dances which emphasize indecent movements.” The code also ensured that “the use of the [U.S.] flag shall be consistently respected.” By the 1960s, American cultural mores had broadened and directors demanded more freedom to display sex and violence. 
So the MPAA abandoned the Hays code and adopted the ratings system familiar to American moviegoers (G, PG, PG-13, R, NC-17). One reason the MPAA moved from strict prohibitions to consumer warnings was that American courts had expanded First Amendment protection for films, limiting how local governments could censor them. But all along, the MPAA practiced an explicit form of self-regulation, using a cartel that represented the interests of the most powerful studios to police behavior and represent the industry as a whole to regulators and the public. No one can look at the history of American film and seriously argue that either method of self-regulation really served the public. Standards have been sloppily and inconsistently enforced. Through both the Hays code and the rating system, the MPAA limited artistic expression and the representation of lesbian, gay, and transgender issues and stories. But it sure helped Hollywood by keeping regulators at bay. Relevant to the Facebook comparison, the MPAA applies American standards of decency to set its ratings, while the motion picture industry is a transnational power. Studios are much more sensitive to the demands of the government of the People’s Republic of China than they are to the U.S. Senate. The same can be said of Facebook: Using American diction about “free expression” and American ways of thinking to influence a global company is folly. It’s one of the core errors that Facebook made internally years ago. Many industries and professional associations have used cartel power to self-regulate, or at least create the appearance of doing so. The American Bar Association grants accreditation to law schools and thus dictates the content and quality of legal education. It also establishes an ethical code for practicing lawyers. This is substantial power beyond the reach of the state. 
But, as we have seen in the global mining and textile industries, declarations of labor safety and wage standards don’t mean much in practice. Self-regulation is an excellent way to appear to promote particular values and keep scrutiny and regulation to a minimum. When self-regulation succeeds at improving conditions for consumers, citizens, or workers, it does so by establishing deliberative bodies that can act swiftly and firmly, and generate clear, enforceable codes of conduct. If one movie studio starts dodging the ratings process, the MPAA and its other members can pressure theaters and other delivery channels to stop showing that studio’s films. The MPAA can also expel a studio, depriving it of the political capital generated by the association’s decades of campaign contributions and lobbying. The Facebook board has no such power. It can’t generate a general code of conduct on its own, or consider worst-case scenarios to advise the company how to minimize the risk of harm. That would mean acting like a real advisory board. This one is neutered from the start because someone had the stupid idea that it should perform a quasi-judiciary role, examining cases one by one. We know the process will be slow and plodding. Faux-judicial processes might seem deliberative, but they are narrow by design. The core attribute of the common law is conservatism. Nothing can change quickly. Law is set by courts through the act of cohering to previous decisions. Tradition and predictability are paramount values. So is stability for stability’s sake. But on Facebook, as in global and ethnic conflict, the environment is tumultuous and changing all the time. Calls for mass violence spring up, seemingly out of nowhere. They take new forms as cultures and conditions shift. Facebook moves fast and breaks things like democracy. This review board is designed to move slowly and preserve things like Facebook. 
This review board will provide a creaking, idealistic, simplistic solution to a trivial problem. The stuff that Facebook deletes creates an inconvenience to some people. Facebook makes a lot of mistakes, and dealing with the Facebook bureaucracy is next to impossible. But Facebook is not the whole Internet, let alone the whole information ecosystem. And Facebook is not the only way people communicate and learn things (yet). The most notable anecdote that inspired the idea for this board involved the 1972 photograph of nine-year-old Kim Phúc running away from a U.S. napalm attack in Vietnam. When, in 2016, the Norwegian newspaper Aftenposten included the image in a story, Facebook asked the newspaper to remove or pixelize the image because it violated the general rule against nudity on the site. After much uproar, Facebook restored the image. So, ultimately, the controversy did not matter. Problem solved. And even without Facebook, there are hundreds of sources of the same image and deep accounts of its historical significance. Since then, Facebook has tried to be both more aggressive in its content removal practices and more thoughtful about the standards it uses. The review board is a high-profile extension of that effort. The Boolean question of whether, say, a photograph that someone posted remains “on Facebook” is trivial. That question is a vestige of an 18th-century model of “free speech,” and ignores differences of power and how speech works in the real world. It was a bad model for assessing the health of communication more than 200 years ago. It’s absurd now, in the age of opaque algorithms. The initial review board includes no one with expertise in confronting the algorithmic amplification of propaganda, disinformation, or misinformation. It has no anthropologists or linguists. Of the 20 members, only one, Nicolas Suzor of Queensland University of Technology in Australia, is an internationally regarded scholarly expert on social media. 
In other words, it was established and appointed to favor one and only one value: Free expression. As important as this value is, the duty of protecting both Facebook users and the company itself demands attention to competing values such as safety and dignity. This board is also stacked with a disproportionate number of Americans who tend to view these issues through American legal history and conflicts. The original 20 includes five Americans, none of whom have any deep knowledge of how social media operate around the world. In contrast, the board has only one member from India—the country with more Facebook users than any other. India is home to more than 22 major languages and 700 dialects. The majority-Hindu nation has more Muslim citizens than any other country except Indonesia, along with millions of Buddhists, Christians, Jews, and Bahais. Facebook and WhatsApp have been deployed by violent Hindu nationalists (aligned closely with the ruling BJP of Prime Minister Narendra Modi, the most popular politician on Facebook) to terrorize Muslims, Christians, journalists, scholars, and anyone who criticizes the central government’s efforts to make India a brutal, nationalistic theocracy. Is this board prepared to consider the breadth and depth of the problems that Facebook amplifies in India, let alone in Pakistan, Sri Lanka, Bangladesh or Myanmar? The lone board member from India, Sudhir Krishnaswamy, is an esteemed legal scholar and civil rights advocate. But how many of those 22 languages does he know? Would he be able to parse the linguistic and cultural nuance of an ethnic slur expressed in Marathi, the language of 83 million people in the state of Maharashtra; or Sinhalese, the major language of 17 million in the Republic of Sri Lanka? 
Given there are almost 300 million regular Facebook users in a country with 1.2 billion people, how would Krishnaswamy guide the process of choosing among the thousands of complaints that are sure to come from this growing and agitated population? The very idea that the board could make the slightest bit of difference to any of the life-or-death conflicts that play out on Facebook every day is absurd. Just ask yourself, “What about this board’s authority could save lives in Myanmar?” The answer is, nothing. “What about this board’s authority could minimize coordinated attacks on the workings of democracies around the world?” The answer is, nothing. “What about this board’s authority could limit coordinated harassment of activists, journalists, and scholars by major political parties?” The answer is, nothing. “What about this board’s authority could circumscribe Facebook’s ability to record and use every aspect of your movements and interests?” The answer is, of course, nothing. Ultimately, this board will influence none of the things that make Facebook Facebook: Global scale (2.5 billion users in more than 100 languages); targeted ads (enabled by surveillance); and algorithmic amplification of some content rather than other content. The problem with Facebook is not that a photograph came down that one time. The problem with Facebook is Facebook. Source: Facebook and the Folly of Self-Regulation (Wired)
  2. Facebook’s redesigned desktop site with dark mode is now available everywhere Toggle on the new Facebook now if you haven’t already Image: Facebook Facebook’s redesigned desktop site is now available globally, the company announced on Friday. Prior to this change, the new version of Facebook was available only if you opted into it starting in March and only in some markets, although the company said at the time that a majority of people would get access that month. In a blog post published today, Facebook says the new site will “now be the web experience for Facebook globally,” adding that “it’s faster, easier to use and gives your eyes a break with dark mode.” Dark mode is indeed the standout of the new version, but the update also delivers another pretty dramatic overhaul. The home layout features a new skinnier News Feed, ample empty space on the left and right rails, and larger icons and a menu bar that lets you easily jump to various parts of the app. For those who want to turn on the new design and enable dark mode immediately, here’s how: Click on the down arrow at the end of the upper menu bar to pull up old Facebook’s settings menu. Click “Switch to new Facebook.” Click the same down arrow and toggle dark mode from off to on. The goal when Facebook first unveiled the desktop redesign at its F8 developer conference in 2019 was to refocus the core web experience around the areas most people still enjoy using: events, groups, and messaging. It wasn’t quite the end of the News Feed per se; rather, Facebook leadership admitted that it’s now an antiquated concept to have an algorithmic feed filled mostly with junk from pages and public posts. People are retreating more and more into private groups and messaging, and Facebook’s only core utility right now is that it’s still the best way to organize a large event with friends or reach out to someone you know whose contact info you don’t have. 
CEO Mark Zuckerberg said as much when, a few months prior to the redesign reveal, he announced a shift for his entire company toward privacy-focused products and features. And unless you’re in Facebook’s older demographic or particularly into the site’s unique brand of hyper-partisan political content, you’ve probably moved on to spending more of your time on Instagram, TikTok, Twitch, YouTube, and other platforms. The new Facebook reflects the company’s priorities, with a focus on videos for the News Feed (because video ads still earn the company a lot of money) and easy access to events and groups alongside a redesigned Messenger panel. It can be a jarring visual change, but it is a much more pleasant Facebook to use once you know your way around the new interface. Source: Facebook’s redesigned desktop site with dark mode is now available everywhere (The Verge)
  3. Why a small Facebook bug wreaked havoc on some of the most popular iOS apps Facebook’s near-ubiquitous SDK broke yesterday, taking major mobile apps with it Illustration by William Joel / The Verge Sometime around 6:30PM ET on May 6th, popular iOS apps from major companies like DoorDash, Spotify, TikTok, and Venmo suddenly started crashing. The culprit didn’t remain a mystery for long. Developers on Twitter and GitHub quickly discovered the cause to be an issue with the software development kit (SDK) from Facebook, which is interwoven into the operation of countless mobile apps from companies large and small. The problem, while resolved rather quickly by Facebook, illustrates the scope of the social network’s platform and how even minor issues can have major ripple effects throughout the mobile software industry. “Earlier today, a new release of Facebook included a change that triggered crashes for some users in some apps using the Facebook iOS SDK,” a Facebook spokesperson told The Verge yesterday in a statement. “We identified the issue quickly and resolved it. We apologize for any inconvenience.” The Facebook SDK is a bundle of software tools for developers that helps power features like signing in with a Facebook account and providing share to Facebook buttons. So the issue was not unique to iOS; it could have happened to the Android SDK and, in this case, simply affected Apple’s platform. Yet Facebook didn’t exactly say what the issue was or how the new release of the SDK could have triggered the crashes. It also wasn’t clear why so many apps were so detrimentally affected, even when the user experiencing the crash didn’t log in with Facebook or even when the app itself didn’t make ample use of the SDK or rely on Facebook features. According to app developer Guilherme Rambo, the issue lies with the way Facebook markets its developer toolset. 
“Facebook really pushes developers into installing their SDK, likely because they want the very rich data they can collect on those app’s users. The SDK is offered as a convenience for both developers and marketing teams, since it can also be used to track the conversions of ads run through Facebook,” he explained to The Verge over email. (Rambo also has an analysis of his own posted to his website.) For instance, he says, if you want to run an ad campaign for your mobile app through Facebook, the only way to get valuable insight into the campaign’s performance is to install the company’s SDK. “Another major reason is the infamous ‘sign in with Facebook’ we see in many apps, which can be implemented without using their SDK at all, but since using the SDK is more convenient, many companies end up going through that route instead,” he says. But if there’s an issue with the SDK, as was the case yesterday, then it has the potential to take everything down with it. Facebook pushed a server-side change to its SDK, which meant no developer had any say in whether their app would be communicating with the older, stable version or the newer broken one. And because an app communicates with the SDK every time it is opened by a user, the result was a cascading series of errors that led to full-blown app crashes. “The issue was that the SDK was expecting a server reply in a certain format, which on Wednesday, the Facebook servers were not providing,” wrote ZDNet’s Catalin Cimpanu, who cited technical analyses of the situation on GitHub and Hacker News. “Without the proper response, the Facebook SDK crashed, also bringing down all the apps that used it.” It also appears that, once affected, there was little any developer could do to restore service until Facebook fixed the issue on its end. Rambo says there should be ways to prevent this from happening, including developers deciding to implement sign-in with Facebook without using the company’s SDK. 
But other system-level protections are decisions Apple would have to make regarding the permissions it grants third-party SDKs. “The way it works today is if you install an app and that app includes third-party code (such as the Facebook SDK), that third-party code has the same level of permissions and access as the app itself does,” he says. “If you grant the app permission to access your location, contacts or calendar, the third-party code it embeds can also get that information. The only way to fix that would be to implement some form of sandboxing model that separates third-party SDKs from an app’s own code,” he adds. “It’s a big challenge, but I hope Apple’s engineers are working on something like that.” Apple did not respond to a request for comment. That said, developers did not seem especially pleased about the situation. “From what I’ve seen, developers are really frustrated about this, especially because the engineers who have to deal with these types of problems are usually not the ones who have decided to add such an SDK to the app they work on,” Rambo says. He adds that the decision to integrate with Facebook’s developer tools is usually a top-down decision, “many times from the marketing or product teams who only see the benefit of using those types of SDKs (more data, more analytics).” But those types of employees at tech companies “don’t see the enormous amount of engineering hours spent dealing with the problems they can cause in an app,” he says. “Crashes caused by SDKs in major apps are not that uncommon, but I’ve never seen something of this magnitude where an SDK affected so many apps at the same time. I’d say this was an unprecedented event and it shows that something must be changed in the way apps integrate third-party code.” Source: Why a small Facebook bug wreaked havoc on some of the most popular iOS apps (The Verge)
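The failure mode those analyses describe, an SDK that assumes a fixed server-response schema and takes the host app down when the shape changes, can be sketched in a few lines. This is illustrative Python, not Facebook's actual SDK code; the function names and payload fields are hypothetical.

```python
# Sketch of the crash mechanism: a client that indexes straight into a
# server reply crashes on an unexpected shape, while a defensive parser
# degrades gracefully. Hypothetical names; not Facebook's real SDK code.

def fragile_parse(response: dict) -> list:
    # Assumes the server always returns {"data": {"settings": [...]}}.
    # If the server starts replying with a different shape, this raises
    # (KeyError/TypeError) and, unhandled, crashes the host app on launch.
    return response["data"]["settings"]

def defensive_parse(response: dict) -> list:
    # Validates the shape before indexing and falls back to a safe
    # default, so a malformed reply disables a feature instead of
    # crashing every app that embeds the SDK.
    data = response.get("data")
    if not isinstance(data, dict):
        return []
    settings = data.get("settings")
    return settings if isinstance(settings, list) else []

old_reply = {"data": {"settings": ["feature_a", "feature_b"]}}
new_reply = {"data": True}  # server-side change: unexpected payload shape

assert defensive_parse(old_reply) == ["feature_a", "feature_b"]
assert defensive_parse(new_reply) == []  # degraded, but no crash

crashed = False
try:
    fragile_parse(new_reply)
except (KeyError, TypeError):
    crashed = True
assert crashed
```

Because the reply came from Facebook's servers rather than code shipped with the app, every app embedding the fragile path broke at once, with nothing for individual developers to roll back.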
  4. Facebook to let employees work remotely through the end of 2020 The social media giant doesn’t expect to have offices reopened before July Illustration by James Bareham / The Verge Facebook will allow most of its employees to continue working from home through the end of 2020, and the company doesn’t expect to reopen most offices before July 6th, a Facebook spokesperson confirmed to The Verge. CNBC first reported the news and said CEO Mark Zuckerberg would be making a formal announcement today. The company said there were a variety of factors involved in the decision, including information from public health agencies, like the Centers for Disease Control and Johns Hopkins, as well as government guidance. California is beginning to ease some social distancing rules in a multistage process outlined by Gov. Gavin Newsom. Facebook employees have been working from home since March, and the company said it would continue to pay hourly employees who may not be able to work because of reduced staffing, office closures, or if they’re sick. Last month, Zuckerberg announced that Facebook was canceling physical events with more than 50 people through June 2021, including the Oculus Connect VR conference, which was to take place this fall and will now be an online-only event. While some employees’ jobs can’t be done remotely, Zuckerberg said in April, “Overall, we don’t expect to have everyone back in our offices for some time.” Facebook’s decision to allow remote work for most employees through the end of 2020 may be a bellwether for other tech companies, as the social media giant was one of the first tech firms to begin asking employees to work remotely to help prevent the spread of the novel coronavirus. Source: Facebook to let employees work remotely through the end of 2020 (The Verge)
  5. Messenger Rooms are Facebook’s answer to Zoom and Houseparty for the pandemic Facebook is greatly expanding its video chat offerings to keep up with rising demand Facebook is rolling out a suite of new products to expand its capabilities in video chat. The company today announced Messenger Rooms, a tool for starting virtual hangouts with up to 50 people and allowing friends to drop in on you whenever they like. It’s also doubling the capacity of video calls on WhatsApp from four people to eight, adding video calls to Facebook Dating, and adding new live-streaming features to both Facebook and Instagram. CEO Mark Zuckerberg announced the features in a live stream today. In an interview with The Verge, Zuckerberg said the new video features were built in line with the company’s shift toward creating more private messaging tools. “Video presence isn’t a new area for us,” he said. “But it’s an area that we want to go deeper in, and it fits the overall theme, which is that we’re shifting more resources in the company to focus on private communication and private social platforms, rather than just the traditional broader ones. So this is a good mix: we’re building tools into Facebook and Instagram that are helping people find smaller groups of people to then go have more intimate connections with, and be able to have private sessions with.” The moves come as the global pandemic has forced hundreds of millions of people to stay indoors and rely on digital tools for nearly all of their work, school, and play. More than 700 million people are now making calls on Facebook Messenger and WhatsApp every day. But competitors are also surging. Zoom, which began life as a simple tool for business videoconferencing, rocketed from 10 million users in December to more than 300 million today. 
Houseparty, an app for virtual hangouts with friends that Facebook had previously cloned before abandoning the project last year, now routinely hovers at the top of app store download charts. It gained 50 million users over the past month. The rapid growth of alternative social products has always been cause for concern at the famously paranoid Facebook, which devotes significant resources to monitoring emerging social products and then acquiring the companies behind them or copying their features. While we are still in the first few months of the COVID-19 pandemic, it’s already clear that consumer behavior is changing to cope with it — and that Facebook’s existing product lineup has not met demand. Of everything announced today, Messenger Rooms promises to be the most significant. The feature, which Facebook says will be available in the company’s products globally sometime in the next few weeks, will allow up to 50 people to join a call. The room’s creator can decide whether it’s open to all or lock it to prevent uninvited guests from joining. You’ll be able to start a room from Messenger and Facebook at first. Later, rooms will come to Instagram Direct, WhatsApp, and Portal. Guests can join a room regardless of whether they have a Facebook account. While in a room, you can play with Facebook’s augmented reality filters or swap out your real-life background for a virtual one. Some backgrounds offer 360-degree views of exotic locales, the company said. And a new slate of AR filters will help brighten up dark rooms or touch up users’ appearances. Room calls are not end-to-end encrypted, but Facebook says it does not view or listen to calls. The creator of a room can remove participants at any time, and rooms where illicit behavior is taking place can be reported to Facebook. (WhatsApp video calls are end-to-end encrypted, offering an extra layer of protection to users.) 
Zoom saw a surge in malicious behavior as it became the world’s default meeting app, with racist, bigoted, and pornographic “Zoombombings” roiling meetings all over the world. Zuckerberg said Messenger Rooms were designed with strong privacy controls, and that the feature’s reliance on connections with your real-life friends and family makes it less likely that it will be used to harass people. For groups where people don’t know each other as well, moderators will be able to kick people out of rooms. “A lot of the time that I’ve spent on this over the last few weeks as we’ve been building this out and getting ready to ship has been on privacy, security, integrity reviews, and how do we make sure that a lot of the use cases that have been problematic around Zoom are not going to be things that are replicated here,” he said. Facebook Live will add back a feature called Live With that enables users to invite another person to stream with them. The donate button will become available on live streams, allowing users to raise money directly from their broadcasts in the countries where fundraisers are available. Instagram will begin allowing users to post live streams to IGTV as well as to Instagram stories after they finish a stream, and Instagram Live broadcasts will become available on the desktop for the first time. Users with Facebook’s Portal display will also get the ability to go live to pages and groups, the company said. Portal users can already go live from their own profiles. But live-streaming also has a dark side, and Facebook faced criticism after introducing live-streaming when it was used to broadcast acts of violence. The company removed 1.5 million copies of the video of the Christchurch terror attack in the days after it was broadcast live on Facebook. Rooms will be available in Messenger today in nearly all countries where Facebook is available, the company said. 
It will become available inside the Facebook app in a handful of unspecified countries today and roll out globally in the coming weeks. Source: Messenger Rooms are Facebook’s answer to Zoom and Houseparty for the pandemic (The Verge)
  6. Facebook is adding a Quiet Mode that silences push notifications on mobile A new digital well-being feature for Facebook’s main mobile app Photo by Amelia Holowaty Krales / The Verge Facebook announced an all-new “Quiet Mode” for its main mobile app on Thursday, which will pause “most” push notifications and remind you that it’s turned on when you try to open the app on your phone while the mode is still active. It’s not clear exactly which notifications will be exempted from the new mode; the company says it is legally required to send some, like privacy updates. This new mode is also not to be confused with the existing “mute push notifications” setting, which stops only push notifications, but not those within the app, for a designated amount of time. Instead, this new Quiet Mode will be found under Facebook’s “Your Time on Facebook” dashboard, which it added back in November 2018, following a push for major platforms and device makers like Apple and Google to promote digital wellness apps. Image: Facebook It’s part of a larger update to the dashboard that Facebook says will add week-over-week trends, usage tracking for daytime versus night, and a counter for the total number of visits. It’s rolling out now to iOS users and will arrive for Android users in May, the company says. The new Quiet Mode will work both manually and on a set schedule if you so choose. It will pause notifications from within the app, like those obnoxious Facebook Watch badges, and on a system level, so you won’t see numbered badges on iOS either. “As we all adjust to new routines and staying home, setting boundaries for how you spend your time online can be helpful. Whether it’s to help you focus on your family and friends, sleep without distraction or manage how you spend your time at home, we have tools that can help you find the right balance for how you use Facebook,” reads a new update to the company’s ongoing COVID-19 information blog post. 
In addition to Quiet Mode, Facebook says it’s also added new shortcuts to the notification settings and News Feed preferences panel, so “you can make the most of your time on Facebook by controlling the type of posts you see in your News Feed as well as the updates you receive.” Source: Facebook is adding a Quiet Mode that silences push notifications on mobile (The Verge)
  7. In 2018, following the Cambridge Analytica scandal, Facebook announced the “Download Your Information” feature, allowing users to download all the information that the company has on them since the creation of their account. All of it? It doesn’t seem so. Concerns were quickly raised when Facebook released the feature that the information was inaccurate and incomplete. Privacy International recently tested the feature to download all ‘Ads and Business’ related information (you can access it by clicking Settings > Your Facebook Information > Download Your Information). This is meant to tell users which advertisers have been targeting them with ads and under which circumstances. We found that the information provided is less than accurate. To put it simply, this tool is not what Facebook claims. The list of advertisers is incomplete and changes over time. Ads section on Facebook “Download Your Information” page. “Advertisers Who Uploaded a Contact List With Your Information” Among the data downloaded, users can see which advertisers have uploaded a list that contains their personal data. On the advertiser side, Facebook calls this the “Custom Audience” tool; it allows a company to upload a list of “customers” in order to target them (or avoid them). To be effective, this contact list needs to contain a unique identifier for the person (see Facebook’s documentation). This is usually an email address, a phone number, an Advertising ID or a user’s Facebook UID, but it can go way beyond that. In the template provided by Facebook, advertisers can also include all sorts of information they might have, such as name, gender, country, street, zip code, age or even date of birth. Contact list template provided by Facebook This advertiser-facing feature assumes that companies are in possession of this information because of an existing relationship with you. Under GDPR, this means that the company uploading this information must have a legal basis for processing this data. 
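The upload mechanics described above are easy to picture as a plain CSV file, one row per "customer", keyed by identifiers like email and phone number. The sketch below is purely illustrative: the column names (email, phone, fn, ln, zip, country) are assumptions modeled on the identifiers the article lists, not Facebook's exact template schema.

```python
import csv
import io

# Hypothetical contact list of the kind an advertiser might upload.
# Each row carries one or more unique identifiers (email, phone)
# plus extra attributes (name, zip code, country).
rows = [
    {"email": "jane@example.com", "phone": "+15551234567",
     "fn": "Jane", "ln": "Doe", "zip": "94105", "country": "US"},
    {"email": "john@example.com", "phone": "+15557654321",
     "fn": "John", "ln": "Smith", "zip": "10001", "country": "US"},
]

# Serialize to CSV in memory, header row first.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The point of the identifiers is matching: the platform hashes and compares them against its own user records, which is why even a sparse list (email only) is enough to single people out.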
From a user’s perspective, it means we should be able to properly check which companies think we are their “customers”. This is clearly part of our rights under data protection law to information about what data a company has on us. Convenience for advertisers, opacity for users Unfortunately for users, exactly how this tool is used to exploit their data is obscure and opaque. Facebook’s "Download Your Information" tool is supposed to let you see the companies that uploaded a list containing your data from the moment you created your account. Yet, after testing this feature at different points in time, we found out that this is not the case. Download your information page with dates going from account creation to date of download Indeed, we’ve found that the list provided by Facebook varies from one month to the next, removing some companies that appeared previously and adding new ones. This goes hand in hand with a recent move by Facebook to limit the number of advertisers displayed on the site (without downloading anything, by visiting your Ads Preferences when you are signed in) to the last 7 days. Altogether, this means that there is no way for users to know all the companies who uploaded a list with their data to Facebook. This prevents people from exercising their rights under GDPR - despite there being a clear obligation to facilitate the exercise of these rights… This is an issue we see both at the back end (in the Download Your Information tool) and the front end in the ads that we see, where there is a lack of transparency from Facebook on how advertisers target you. The lack of information and difficulties in exercising rights reinforce an opaque environment where people are unaware of how their data is gathered, shared and used to profile and target them. Off-Facebook activity, a new window into the opacity On 28 January 2020, Facebook announced the roll-out of another new feature called Off-Facebook activity. 
This tool is meant to let you “see and control the data that other apps and websites share with Facebook”. Unfortunately, this tool is once again a tiny sticking plaster on a much wider problem. While we welcome the effort to offer more transparency to users by showing the companies from which Facebook is receiving personal data, the tool offers little way for users to take any action. Do we Facebook? Do we? First, the information provided about the companies is again limited to a simple name, preventing the user from exercising their right to seek more information about how this data was collected. As users we are entitled to know the name/contact details of companies that claim to have interacted with us. If the only thing we see, for example, is the random name of an artist we’ve never heard of before (true story), how are we supposed to know whether it is their record label, agent, marketing company or even them personally targeting us with ads? Second, the details about the data transfer are extremely limited. Using Facebook’s Download Your Information once again only provides limited information about what Facebook receives (some events are marked under a cryptic CUSTOM). There is also no information regarding how the data was collected by the advertiser (Facebook SDK, tracking pixel, like button…) and on what device, leaving users in the dark regarding the circumstances under which this data collection took place. Example of an off-Facebook activity detail in a downloaded archive Finally, this tool illustrates just how impossible it is for users to prevent external data from being shared with Facebook. Without meaningful information about what data is collected and shared, and what ways users have to opt out of such collection, Off-Facebook activity is just another incomplete glimpse into Facebook’s opaque practices when it comes to tracking users and consolidating their profiles. So what? 
Social media companies, and Facebook in particular, are going through a trust crisis. Years of failures and abuses have raised concerns about their ability to harness the power they give to advertisers (be they political or private) and to be transparent with their users in any meaningful way. In this context, failure to provide meaningful information, coupled with inaccurate and thus misleading information about the “Download Your Information” tool (the information you download is supposed to be exhaustive), is yet another breach of trust. Online advertising is already a hot mess (see our work on AdTech) and while regulators are expected to take action, we need the companies dominating this business to lead by example. Facebook should ensure the tools it offers to its users are accurate and effective. The current state is unacceptable. Source
  8. old man yells at cloud dot jpeg — Facebook’s new design turns your PC into an enormous phone It's hard to like most of the new layout—but having a dark mode is nice. The new design does at least include a dark mode. I generally prefer bright layouts, but if you have a bad habit of Facebooking in bed late at night, this is less likely to prevent sleepiness. Jim Salter Sometime last night, Facebook's new design layout rolled out to my personal account. It assured me that I could switch back if I didn't like it, so I immediately tried it out. I just as immediately switched it off and never looked back. At least, I never looked back until this afternoon, when the powers that be at Ars said—and I quote—"feel free to hate review it, if you want." I am a professional if nothing else, so this is not a hate review. But I must admit it's a "visceral dislike" review, and perhaps some readers will appreciate—or at least not mind—the things that turn me off so strongly about the new layout. If you like landscape browsing on a smartphone, you’ll like the new Facebook I do the majority of my Facebook browsing on a desktop PC—a serious desktop PC, for serious people, with dual 24-inch monitors in 1080P. I strongly dislike layouts that present me with less information and waste a ton of real estate, and Facebook's new layout does exactly that, in spades. The old Facebook layout generally fits at least two or three posts per page, at least once you scroll past the giant Stories banner. It also allows you to keep multiple Facebook Messenger chats open, in separate floating divs at the bottom of the screen. Once you switch to the new Facebook layout, the first overwhelming impression is one of supreme embiggenment. The text sizes are larger, the elements are more widely spaced, and very little fits on the screen at once. The Stories banner is, thankfully, less obnoxiously huge than it was on the old layout. 
That's a matter of sheer necessity, unfortunately; it has to be smaller in order to fit even half of a post underneath it on the new scale of things. I was unable to find a single pair of posts on my timeline that would fit on one page under the new layout—and at least half of them were at least two pages tall, all by themselves. The Messenger functionality is also diminished, with only a single chat visible at any one time. If you click another contact or group on your contacts list, your current chat disappears, to be replaced with the new one. Notifications are less clunky There isn't much that I like about the new layout, and there's quite a bit that I detest. I will say that the new notification list is nice, though. The old notifications list just sort of sprawled over whatever parts of the page it happened to land on, basically making the whole thing useless while notifications are visible. In the new layout, the notification area is pinned all the way to the right, leaving posts uncovered. In the new design, you can also continue to scroll, browse, and comment on posts in your feed while Notifications are up and pinned to the far right. Messages, unfortunately, are covered—and you only get a single message visible in the new layout, whether the notification list is covering it or not. So you'll need to close the notifications if you want to chat with your friends—and it's considerably more difficult to chat with multiple friends simultaneously, since only one message can be visible at a time. Conclusions I suspect that this new layout—much like the auto-moderator bug that began pseudo-randomly eating valid news posts this week—is another instance of founder and CEO Mark Zuckerberg's famous motto, "Move fast and break things." The design feels like what happens when you try to unify a layout codebase between phone-based apps and the desktop website but don't bother with much in the way of validation or solicitation of user feedback. 
The new design is probably here to stay and will eventually supplant the current "classic" version entirely—but I really hope it sees some more refinement first. Until then, I'll be sticking with the old layout.
The Good
- Now with dark mode!
- The notifications list is much better pinned to the far right instead of occupying roughly the third fifth from the left
- Switching to the new layout and back again is instantaneous and easy
- Within the new layout, switching dark mode on and off again is equally quick and easy
The Bad
- Only one post on the screen at a time. Or less—usually, less
- Only one message on the screen at a time—no matter how high your resolution
The Ugly
- Huge text is huge
- Wasted space is wasted
- The layout itself, basically
Source: Facebook’s new design turns your PC into an enormous phone (Ars Technica)
  9. Facebook’s redesign for its Messenger app proves two things: 1) Facebook’s leaning hard into its Stories format after the feature’s success on Instagram; and 2) Yes, those chatbots really were the worst. Facebook’s ditching the Discover tab, a hub for various business tools and games, among other updates to help streamline the private messaging app’s design. The company first announced plans for the change in August, and now TechCrunch reports that Facebook has commenced the switchover this week, so you might have already noticed the update. It’s a complete 180 from Facebook’s previous push to make the app a one-stop shop for “connecting with all the people and businesses you care about,” with added bloat like games, shopping, and confusing chatbots serving as automated middlemen between users and businesses. Now Facebook’s boiled it down to just two tabs, Chats and People, with the latter defaulting to a page for Stories, the 24-hour photo and video highlight reel Facebook more or less ripped off from Snapchat (albeit to greater success). A sub-tab adjacent to Stories shows which friends are currently online. Given the success Facebook’s had so far with Instagram’s Stories feature, which averages 500 million daily users as of January, it’s no wonder it’s bumping up this feature’s real estate. And, of course, since Stories also display ads, more eyes on users’ Stories means more revenue for Facebook. Though demoted, any content previously on the Discover tab isn’t gone for good; now you’ll just have to access it via Messenger’s search bar. Though, with users now having to purposefully seek them out, businesses may be turned off any ideas about expanding on the platform. Facebook did not immediately respond to Gizmodo’s request for comment. That said, a company spokesperson told TechCrunch that Facebook remains committed to promoting businesses on its platforms and that they still play an important part in Messenger. Source
  10. When we think about companies “harvesting our data,” chances are we’re thinking of The Big Tech Names doing that harvesting. But sometimes, even these companies get taken for a bit of a ride. Case in point: earlier today, the adtech company OneAudience got slammed with a hefty Facebook lawsuit, with claims that it peddled “malicious” software to app developers that would pull sensitive intel from an app downloader’s Facebook, Twitter and Google accounts—behind all of these companies’ backs. “These apps were distributed online to app users on various app stores, including the Google Play Store, and included shopping, gaming, and utility-type apps,” the lawsuit states. In Facebook’s case, users that logged in using their Facebook account handed over their “name, email address, locale (i.e. the country that the user logged in from), time zone, Facebook ID, and, in limited instances, gender.” This is the latest leg of the Facebook versus OneAudience saga, after the company was first found to be harvesting this data late last year. At the time, Facebook requested “an audit” of the company’s data-sucking behavior, which the lawsuit states OneAudience didn’t comply with. Instead of complying, OneAudience shut down this software, stressing that the data of the hundreds of users affected was “never intended to be collected” and was never used. “We believe that consumers should have the opportunity to choose who they share their data with and in what context,” OneAudience said at the time. Naturally, this data was collected for the purposes of targeted advertising. While the software in question—OneAudience’s mobile-specific SDK, or software development kit—might be no more, the company is still touting its ability to target “real, verified users” to be pelted with a given ad campaign. 
Older pitch decks from the company suggest that aside from the mobile-specific intel, it also profiled users based on where they lived and the language they spoke. Ultimately, as pointed out by Recode, this lawsuit opens a can of worms about the complexity of the data-sharing chain of command. While companies like Facebook can control the way their own ecosystem operates, their sheer reach means that they can’t keep an eye on every partner at all times—and just short of filing lawsuits or plugging up obvious loopholes as they arise, it’s unclear whether their commitment lies more with advertisers or with the consumers those advertisers are targeting. Source
  11. A good rule of thumb is to be skeptical of the privacy-forward changes Facebook touts to the public, and to deeply interrogate any of the quieter changes it rolls out behind the scenes since those—surprisingly—often mark the real efforts that the company’s taking to be a little bit less of an invasive shitshow. In the latest change, Facebook is tightening its rules around the use of raw, device-level data used for measuring ad campaigns that Facebook shares with an elite group of advertising technology partners. As first spotted by AdAge, the company recently tweaked the terms of service that apply to its “advanced mobile measurement partner” program, which advertisers tap into to track the performance of their ads on Facebook. Those mobile measurement partners (MMPs) were, until now, free to share the raw data they accessed from Facebook with advertisers. These metrics drilled down to the individual device level, which advertisers could then reportedly connect to any device IDs they might already have on tap. Facebook reportedly began notifying affected partners on February 5 and all advertising partners must agree to the updated terms of the program before April 22, according to Tencent. While Facebook didn’t deliver the device IDs themselves, passing granular insights like the way a given consumer shops or browses the web—and then giving an advertiser free rein to link that data to, well, just about anyone—smacks hard of something that could easily turn Cambridge Analytica-y if the wrong actors got their hands on the data. As AdAge put it: The program had safeguards that bound advertisers to act responsibly, but there were always concerns that advertisers could misuse the data, according to people familiar with the program. Facebook says that it did not uncover any wrongdoing on the part of advertisers when it decided to update the measurement program. 
However, the program under its older configuration came with clear risks, according to marketing partners. Gizmodo reached out to Facebook for comment about the changes—we’ll update this story if they respond. A bit of background here: When you see ads on Facebook for—I don’t know, a giant furry suit—there’s a chance that the person advertising that furry suit didn’t do it alone. The company works with literally hundreds of marketing partners that can help that fur-vertiser every step of the way. A chunk of these partners specializes in “measurement” and “attribution”—in making sure that the right ad for the right fursuit gets seen by the right Instagram user at the right time. Folks in the attribution space are plugged into every major platform and a ton of major ad networks themselves, aside from Facebook. An advertiser could go to one of these measurement partners and, to stick with our example, figure out which fursuit is driving the most e-commerce sales, or whether the way a retailer worded its ad might be scaring potential customers off. Device-level data can be a huge part of the appeal of working with MMPs. In the case of mobile measurement, an advertiser could use that data to figure out which members of his target market respond best to which kind of fursuit, how long it took these target members to buy one of these things after seeing the ad, and where they made the eventual purchase. That same device-level data could also give an advertiser a heads up if a person, say, isn’t really feeling furry ads in their feed all the time, or if they’re really feeling these ads and is in danger of potential bankruptcy from buying out a warehouse of merch. Until now, this raw data could be passed freely from Facebook to its trusted ad tech partners, which could then share it with advertisers. Now, its partners can only use that data “on an aggregate and anonymous basis,” according to Facebook’s new terms of service for MMPs. 
While the data here wasn’t as personal as names or addresses, it provided insights into the way an individual Facebook user responded to a piece of content, which could be just as useful for fursuit enthusiasts and political pundits alike—especially when they could potentially connect that to a given mobile device ID, which is unique to each phone. As one marketing exec told AdAge, “Facebook saw this as potentially a really big data leakage problem. Nothing was stopping the advertiser from syndicating this data; Facebook couldn’t control whether or not the advertiser leaked it.” With the ToS update, Facebook’s quashing that chain of command and keeping advertisers from getting their mitts on potentially sensitive user data. The changes also prohibit those advertisers—or the marketing partners, ostensibly—from taking these raw data points to create entirely new profiles of people off of the data that Facebook provided. It’s worth noting that this isn’t the first time that Facebook’s floated this idea. Way back in 2015, mobile marketers revolted when the company approached them with the idea of throttling the amount of device-level data they had access to, causing them to drop the proposal. Likewise, the new update is leaving a lot of these same parties less than chipper about their on-Facebook targeting aspects, but it looks like Facebook’s been beaten down by enough congressional hearings to hold strong this time around. Source
  12. Update: It turns out that Facebook has been well aware for months of this private WhatsApp chat flaw. Thanks to Twitter user @hackrzvijay, we know that Facebook was notified back in November 2019 about this security flaw. However, Facebook didn’t do anything about it. The Twitter user in question reported the problem to Facebook with the intention of receiving a cash bounty. In this tweet, the hacker posts a message from Facebook declining to give a bounty because the ability for anyone to find invite codes online for private WhatsApp chat groups is “an intentional product decision.” Facebook then says that it cannot control what Google and other search engines index, so its hands are tied. As far as we can tell, both Facebook and Google are still not talking publicly about this problem, but this Facebook message makes it seem as though Facebook doesn’t think there’s anything wrong with your private WhatsApp chat groups being easily accessible by anyone. Original article: According to a new report from Vice, private WhatsApp group invites might not actually be so private. Through some pretty basic Google searching, it’s relatively easy to gain access to private chat groups. Normally, private WhatsApp group chats are only accessible via an invite code that gets handed out by the moderators of the chat. These invite codes, though, are simply URLs with specific strings of text. It appears that Google is indexing at least some of these invites, which enables pretty much anyone with Google access to find them. Now, before you get out the pitchforks and start storming Google’s gate, from the outset this appears to be a WhatsApp problem (or, more specifically, a Facebook problem, as it owns WhatsApp). Google uses crawlers to index URLs across the internet, and it is very easy for websites and apps to place a line of code on pages that tells these Google crawlers not to index the information there. The likely reason behind this problem is WhatsApp failing to do this. 
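The "line of code" referred to here is the standard robots noindex directive, delivered either as a meta tag in the page's HTML or as an X-Robots-Tag HTTP header. As a rough sketch of how a crawler decides whether a page has opted out of indexing, here is a minimal directive parser using only Python's standard library; the sample markup is hypothetical:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the content of any <meta name="robots"> directives on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if attr.get("name", "").lower() == "robots":
                self.directives.append(attr.get("content", "").lower())

# Hypothetical invite page that opts out of search indexing.
sample_page = ('<html><head><meta name="robots" content="noindex, nofollow">'
               '</head><body>Join my group</body></html>')

parser = RobotsMetaParser()
parser.feed(sample_page)
opted_out = any("noindex" in d for d in parser.directives)
print(opted_out)  # a well-behaved crawler would skip indexing this page
```

Had WhatsApp's invite pages carried such a directive (or the equivalent header), well-behaved crawlers like Googlebot would have dropped them from search results.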
Vice reached out to both Google and Facebook about this matter but didn’t receive a response. If you want to comb through Google Search to find out if your private WhatsApp group is indexed, just start with a “chat.whatsapp.com” string and then enter in some information specific to your chat. Vice did this and was able to find several chat groups related to sharing porn as well as a chat that describes itself as being for NGOs accredited by the United Nations. These chat groups listed out members’ names as well as contact information, in some cases phone numbers. This story will no doubt make the rounds today and WhatsApp and Facebook will need to respond soon. There’s about to be a lot of angry users. Source
  13. The Internal Revenue Service has gone to court to sue Facebook, claiming the social media giant owes in excess of $9 billion in taxes over the sale of its intellectual property to a subsidiary in Ireland—a transaction the IRS believes Facebook severely undervalued. The trial, expected to last between three and four weeks in total after commencing on Tuesday, has the IRS attempting to convince a judge in San Francisco that Facebook owes billions of dollars of unpaid tax. According to a document filed by Facebook in January, the witness list includes a number of major executives, though seemingly not Facebook CEO Mark Zuckerberg. Reuters reports the list of witnesses includes chief revenue officer David Fischer, CTO Mike Schroepfer, head of hardware Andrew Bosworth, and Naomi Gleit and Javier Olivan of Facebook's aggressive growth team. The entire affair is based around Facebook's decision to hand its intellectual property to a subsidiary in Ireland, and the perceived value of that property. In 2010, Facebook sold its IP to the Irish entity to cut its overall tax bill, a process that other major firms have undertaken. Facebook's subsidiaries pay a royalty to the US parent company for its trademark, the user base, platform technologies, and other elements, with Facebook Ireland paying its US counterpart more than $14 billion from 2010 to 2016. The IRS argues the valuation of the IP was too low and should be taxed accordingly. Facebook believes the low valuation reflected the risks involved with its international expansion, and predated its IPO and the development of its advertising systems. The social network stands by its 2010 valuation, as at the time it "had no mobile advertising revenue, its international business was nascent, and its digital advertising products were unproven," according to Facebook spokeswoman Bertie Thompson. In the event the IRS wins, Facebook would have to pay a tax liability of up to $9 billion, as well as interest and penalties. 
While the tax trial may seem similar to one involving Apple, it is a completely different issue at hand. In the case of Apple, the 2016 European Commission ruling related to Apple being charged too little tax by Ireland, with the extremely low tax rates offered by the country deemed to be unlawful "state aid," whereas Facebook's situation involved telling the IRS a potentially incorrect valuation of its IP sale. Both situations do involve the companies setting up a subsidiary in Ireland to take advantage of the low tax rates, which is quite a common accounting trick for large multinationals. Source
  14. MOSCOW (AP) — A court in Moscow fined Twitter and Facebook 4 million rubles each Thursday for refusing to store the personal data of Russian citizens on servers in Russia, the largest penalties imposed on Western technology companies under internet use laws. The fines of nearly $63,000 are the first five-figure fines levied on tech companies since Russia adopted a flurry of legislation starting in 2012 designed to tighten the government’s grip on online activity. One provision required tech companies to keep servers in Russia for storing personal information they gather from Russian citizens. Russia’s internet regulator, Roskomnadzor, has tried unsuccessfully for several years to force large companies like Facebook, Twitter and Google to move Russian user data to Russia. Commenting on Thursday’s court rulings, Roskomnadzor said Twitter and Facebook would be fined 18 million rubles ($283,000) each if they don’t comply this year. Last year, Twitter and Facebook were fined the equivalent of $47 each for violating the same personal data regulation. The punishment had no effect on the two companies, so in December Russian authorities increased the fines. The law allows online services that don’t follow the data storage requirement to be banned from Russia. Only the LinkedIn social network has been blocked so far. It is widely understood that blocking Facebook or Google would elicit too much public outrage for authorities to take the step. Source
  15. This is how Facebook's new desktop design looks (and how you can restore the old Facebook) Someone at Facebook seems to have flipped a switch as users from all over the world are starting to get invited to try the new Facebook design on the desktop. Facebook revealed plans to redesign the desktop version of the social media site last year. The company wanted to make things less convoluted, easier to use, and introduce new features such as a dark mode for the entire site. Back then, Facebook wanted to launch the new version before March 2020 and it appears that the rollout has started. Invited users see a small notification at the top of Facebook that invites them to try the new Facebook design. Note that the notification vanishes when you reload or navigate away. Facebook displays a short intro about some of the new features (light and dark mode can be selected right then and there). The new interface uses a responsive design that displays content based on screen width and other parameters. If the width of the browser window is sufficient, Facebook displays a traditional but heavily modified three-column design. The design features rounded corners and is more colorful than the current design of desktop Facebook. Text appears larger and there is more grayspace; this may lead to extended scrolling sessions as less content is displayed at a time on the visible part of the screen. Zooming in or out, or changing the size of the browser window, may help display more content on the screen at the same time. The icon bar at the top provides quick access to various sections on the site including the homepage, videos, marketplace, groups, and gaming. The new design looks like this when you open a Facebook Page. Notice that Facebook displays a two-column design when the browser width is too small. Which column it displays depends on the page you are on. If you are browsing your home feed, you get to see the feed and the right column with its contacts. 
A click on the new menu icon (three horizontal bars) displays the links of the missing left sidebar. Switch between light and dark mode The new Facebook design supports a light and dark mode. You can switch between both modes easily at any time by doing the following:
1. Select Menu.
2. Toggle Dark Mode, which is displayed in the menu as an option.
The change is instant, and the page is displayed using either a dark color scheme or a light one depending on the state of the Dark Mode switch. How to go back to Classic Facebook Facebook users who have tried the new design of the site already may want to go back to the classic version; this is possible currently, but it is very likely that Facebook will eventually remove that option from the site. To restore classic Facebook, do the following:
1. Select Menu (a click or tap on the profile icon in the top right corner).
2. There you should see a "Switch to Classic Facebook" option.
3. Activate that to go back.
Source: This is how Facebook's new desktop design looks (and how you can restore the old Facebook) (gHacks - Martin Brinkmann)
  16. In an interview with CBS This Morning, Clearview AI's founder says it's his right to collect photos for the facial recognition app. Clearview AI CEO Hoan Ton-That tells CBS correspondent Errol Barnett that the First Amendment allows his company to scrape the internet for people's photos. Google, YouTube and Facebook have sent cease-and-desist letters to Clearview AI, the facial recognition company that has been scraping billions of photos off the internet and using them to help more than 600 police departments identify people within seconds. That follows a similar action by Twitter, which sent Clearview AI a cease-and-desist letter for its data scraping in January. The letter from Google-owned YouTube was first seen by CBS News. (Note: CBS News and CNET share the same parent company, ViacomCBS.) The CEO of Clearview AI, a controversial and secretive facial recognition startup, is defending his company's massive database of searchable faces, saying in an interview on CBS This Morning Wednesday that it's his First Amendment right to collect public photos. He has also compared the practice to what Google does with its search engine. Facial recognition technology, which proponents argue helps with security and makes your devices more convenient, has drawn scrutiny from lawmakers and advocacy groups. Microsoft, IBM and Amazon, which sells its Rekognition system to law enforcement agencies in the US, have said facial recognition should be regulated by the government, and a few cities, including San Francisco, have banned its use, but there aren't yet any federal laws addressing the issue. Here is YouTube's full statement: "YouTube's Terms of Service explicitly forbid collecting data that can be used to identify a person. Clearview has publicly admitted to doing exactly that, and in response we sent them a cease and desist letter. And comparisons to Google Search are inaccurate. 
Most websites want to be included in Google Search, and we give webmasters control over what information from their site is included in our search results, including the option to opt-out entirely. Clearview secretly collected image data of individuals without their consent, and in violation of rules explicitly forbidding them from doing so." Facebook has also said that it's reviewing Clearview AI's practices and that it would take action if it learns the company is violating its terms of service. "We have serious concerns with Clearview's practices, which is why we've requested information as part of our ongoing review. How they respond will determine the next steps we take," a Facebook spokesperson told CBS News on Tuesday. Facebook later said it demanded the company stop scraping photos because the activity violates its policies. Clearview AI attracted wide attention in January after The New York Times reported how the company's app can identify people by comparing their photo to a database of more than 3 billion pictures that Clearview says it's scraped off social media and other sites. The app is used by hundreds of law enforcement agencies in the US to identify those suspected of criminal activities. BuzzFeed News reported that in pitches to law enforcement agencies, Clearview AI had told police to "run wild" with its facial recognition, despite saying that it had restrictions to protect privacy. Critics have called the app a threat to individuals' civil liberties, but Clearview CEO and founder Hoan Ton-That sees things differently. In an interview with correspondent Errol Barnett on CBS This Morning airing Wednesday, Ton-That compared his company's widespread collection of people's photos to Google's search engine. "Google can pull in information from all different websites," Ton-That said. "So if it's public, you know, and it's out there, it could be inside Google search engine, it can be inside ours as well." 
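YouTube's statement leans on the contrast with search engines: webmasters can opt out of Google Search, while the people in Clearview's database never had that choice. As an illustration, that opt-out is conventionally expressed in a site's robots.txt file; this minimal sketch asks all compliant crawlers to stay out of the entire site:

```text
# robots.txt served at the site root — honored by compliant crawlers such as Googlebot
User-agent: *
Disallow: /
```

Crucially, this mechanism works only because search crawlers volunteer to honor it; a scraper that ignores robots.txt and a site's terms of service, as Clearview is accused of doing, faces no technical barrier from it.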
Google disagreed with the comparison, calling it misleading and noting several differences between its search engine and Clearview AI. The tech giant argued that Clearview is not a public search engine and gathers data without people's consent, while websites have always been able to request not to be found on Google. Clearview AI's founder intends to challenge the cease-and-desist letters from Google and Twitter, arguing that he has a constitutional right to harvest people's public photos. "Our legal counsel has reached out to [Twitter] and are handling it accordingly," Ton-That said. "But there is also a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way." Clearview AI would not be the first tech company to use this defense to justify its data scraping practices, as technology attorney Tiffany C. Li pointed out on Twitter. In 2017, HiQ, a data analytics company, sued LinkedIn for the right to continue scraping public data from the Microsoft-owned social network, claiming that the First Amendment protects that access. The size of the Clearview database dwarfs others in use by law enforcement. The FBI's own database, which taps passport and driver's license photos, is one of the largest, with over 641 million images of US citizens. Clearview also keeps all the images collected, even when the original upload has been deleted. Law enforcement agencies say they've used the app to solve crimes ranging from shoplifting to child sexual exploitation to murder. But privacy advocates warn that the app could return false matches to police and that it could also be used by stalkers and others. They've also warned that facial recognition technologies in general could be used to conduct mass surveillance. 
A lawsuit filed in Illinois after the Times' report called Clearview AI's software an "insidious encroachment on an individual's liberty" and accused the company of violating the privacy rights of residents in that state. The lawsuit followed Democratic Sen. Edward Markey saying Clearview's app may pose a "chilling" privacy risk. Source
  17. Facebook's Twitter and Instagram accounts hacked, 'OurMine' claims responsibility

Facebook’s Twitter and Instagram handles were compromised earlier today, as tweets and posts began showing up that said: “Well, even Facebook is hackable but at least their security better than Twitter”. A group called OurMine claimed responsibility for the hack; the group reportedly was also responsible for the NFL’s Twitter account hack last month. The hackers began posting tweets from the Facebook and Messenger accounts, which were constantly being deleted by the company (as seen in Jane Manchun Wong’s tweet here). The accounts were compromised for about 30 minutes, after which they were locked. Twitter confirmed in a statement to some journalists that the accounts were indeed compromised and that it was working with Facebook to restore the accounts: As soon as we were made aware of the issue, we locked the compromised accounts and are working closely with our partners at Facebook to restore them. Facebook later posted in a tweet that it had “secured and restored” access. Interestingly, the hackers seem to have taken control of the Facebook and Messenger Instagram handles as well (spotted by The Verge). Though the hackers claimed that “Facebook” was hackable, it wasn’t Facebook itself that was hacked, but only its social media accounts on Twitter and Instagram. The tweets by 'OurMine' were posted from 'Khoros', a third-party service that helps its customers interact with and post on social media – including Instagram and Twitter. From the tweets, it looks like the hackers were promoting their security services and did not seem to have any malicious intent.

Source: Facebook's Twitter and Instagram accounts hacked, 'OurMine' claims responsibility (Neowin)
  18. The official Facebook app for Windows 10 is not the best-supported app in the world, and it therefore comes as no surprise that Facebook is planning to kill the app. Facebook has been sending a message to active users of the Windows 10 app informing them of its intention to discontinue the app. What is somewhat surprising is how soon the app is leaving the platform, and that there is no indication Facebook is planning to replace it. The personalized email reads:

Since you use Facebook for Windows desktop app, we wanted to make sure you’re aware this app will stop working on Friday, February 28, 2020. You can still access all of your friends and favorite Facebook features by logging in through your browser at www.facebook.com. For the best experience, make sure you’re using the most current version of our supported browsers including the new Microsoft Edge. You can still access Messenger through the Facebook website or by logging in through your browser at www.messenger.com. If you prefer a desktop app for your conversations, try the new Messenger for Windows which you can download now in the Microsoft Store. Thank you for using Facebook for Windows desktop app, The Facebook Team

Facebook recently replaced its old Messenger UWP app with a new Electron app, but it seems when it comes to the full Facebook app the company decided it would make more sense to simply direct users to its website.

Source
  19. Tech giants’ high-speed internet link to Hong Kong has become politically toxic Google and Facebook seem to have resigned themselves to losing part of the longest and highest profile internet cable they have invested in to date. In a filing with the Federal Communications Commission last week, the two companies requested permission to activate the Pacific Light Cable Network (PLCN) between the US and the Philippines and Taiwan, leaving its controversial Hong Kong and Chinese sections dormant. Globally, around 380 submarine cables carry over 99.5 percent of all transoceanic data traffic. Every time you visit a foreign website or send an email abroad, you are using a fiber-optic cable on the seabed. Satellites, even large planned networks like SpaceX’s Starlink system, cannot move data as quickly and cheaply as underwater cables. When it was announced in 2017, the 13,000-kilometer PLCN was touted as the first subsea cable directly connecting Hong Kong and the United States, allowing Google and Facebook to connect speedily and securely with data centers in Asia and unlock new markets. The 120 terabit-per-second cable was due to begin commercial operation in the summer of 2018. “PLCN will help connect US businesses and internet users with a strong and growing internet community in Asia,” they wrote. “PLCN will interconnect … with many of the existing and planned regional and international cables, thus providing additional transmission options in the event of disruptions to other systems, whether natural or manmade.” Instead, it has been PLCN itself that has been disrupted, by an ongoing regulatory battle in the US that has become politicized by trade and technology spats with China. Team Telecom, a shadowy US national security unit comprised of representatives from the departments of Defense, Homeland Security, and Justice (including the FBI), is tasked with protecting America’s telecommunications systems, including international fiber optic cables. 
Its regulatory processes can be tortuously slow. Team Telecom took nearly seven years to decide whether to allow China Mobile, a state-owned company, access to the US telecoms market, before coming down against it in 2018 on the grounds of “substantial and serious national security and law enforcement risks.” Although subsidiaries of Google and Facebook have been the public face of PLCN in filings to the FCC, four of the six fiber-optic pairs in the cable actually belong to a company called Pacific Light Data Communication (PLDC). When the project was first planned, PLDC was controlled by Wei Junkang, a Hong Kong businessman who had made his fortune in steel and real estate. In December 2017, Wei sold most of his stake in PLDC to Dr Peng Telecom & Media Group, a private broadband provider based in Beijing. That sent alarm bells ringing in Washington, according to a report in the Wall Street Journal last year. While Dr Peng is not itself state-owned or controlled, it works closely with Huawei, a telecoms company the Trump administration has accused of espionage and trade secret theft. Dr Peng has also worked on Chinese government projects, including a surveillance network for the Beijing police. PLCN has been in legal limbo ever since, with Google complaining bitterly to the FCC about the expense of the ongoing uncertainty. In 2018, it wrote, “[any further holdup] would impose significant economic costs. Depending on the length of the delay, the financial viability of the project could be at risk.” Google and Facebook finally secured special permission to lay the cable in US waters last year, and to construct, connect and temporarily test a cable landing station in Los Angeles. But while the network itself is now essentially complete, Team Telecom has yet to make a decision on whether data can start to flow through it. 
In the past, Team Telecom has permitted submarine cables, even from China, to land in the US, as long as the companies operating them signed what are called network security agreements. These agreements typically require network operations to be based in the US, using an approved list of equipment and staffed by security-screened personnel. Operators are obliged to block security threats from foreign powers, while complying with lawful surveillance requests from the US government. In 2017, for example, Team Telecom gave the green light to the New Cross Pacific (NCP) cable directly connecting China and the US, despite it being part-owned by China Mobile, the state-owned company it later denied US access to on national security grounds. “Normally there wouldn’t be so much fuss over a cable to China,” says Nicole Starosielski, a professor at New York University and author of The Undersea Network. “We’ve had cables to China for a long time and all of these networks interconnect, so even if they don’t land directly in China, they’re only a hop away. It is just one of those moments where it is more difficult to land a cable, no matter who the Chinese partner is, because of the political situation.” In September, Senator Rick Scott (R-FL), who sits on Senate committees for technology, communications and homeland security, sent a letter to FCC Chairman Ajit Pai urging him to block PLCN. “[PLCN] threatens the freedom of Hong Kong and our national security,” wrote Scott. “This project is backed by a Chinese partner, Dr Peng Telecom & Media Group Co., and would ultimately provide a direct link from China into Hong Kong … China has repeatedly shown it cannot be trusted … We cannot allow China expanded access to critical American information, even if funded by US companies.” Google and Facebook saw the writing on the wall. On January 29 last week, representatives from the two companies – but not PLDC – met with FCC officials to propose a new approach. 
A filing, made the same day, requests permission to operate just the two PLCN fiber pairs owned by the American companies: Google’s link to Taiwan, and Facebook’s to the Philippines. “[Google] and [Facebook] are not aware of any national security issues associated with operation of US-Taiwan and US-Philippine segments,” reads the application. “For clarity, the [request] would not authorize any commercial traffic on the PLCN system to or from Hong Kong, nor any operation of the PLCN system by PLDC.” The filing goes on to describe how each fiber pair has its own terminating equipment, with Google’s and Facebook’s connections arriving at Los Angeles in cages that are inaccessible to the other companies. “PLDC is contractually prohibited from using its participation interest in the system to interfere with the ownership or rights of use of the other parties,” it notes. Neither company would comment directly on the new filing. A Google spokesperson told TechCrunch, “We have been working through established channels in order to obtain cable landing licenses for various undersea cables, and we will continue to abide by the decisions made by designated agencies in the locations where we operate.” A Facebook spokesperson said, “We are continuing to navigate through all the appropriate channels on licensing and permitting for a jointly-owned subsea cable between the US and Asia to provide fast and secure internet access to more people on both continents.” “I think stripping out the controversial [Hong Kong] link will work,” says Starosielski. “But whenever one of these projects either gets thwarted, it sends a very strong message. 
If even Google and Facebook can’t get a cable through, there aren’t going to be a ton of other companies advancing new cable systems between the US and China now.” Ironically, that means that US data to and from China will continue to flow over the NCP cable controlled by China Mobile – the only company that Team Telecom and the FCC have ever turned down on national security grounds. Source
  20. By Kate O'Flaherty

Horror writer Stephen King has just become the latest to join the #DeleteFacebook movement. Stephen King, best known for horror fiction including The Shining and Carrie, has just announced he is quitting Facebook. In a tweet, King said he deleted the social network due to misinformation and Facebook’s inability to protect user privacy. King’s announcement comes at a time when the social network is being attacked from all corners for its inability to protect user data after multiple scandals and data breaches. Facebook is also facing criticism after continuing to take political adverts when rivals such as Twitter stopped last year over the spread of misinformation.

Many people are quitting Facebook already

Stephen King is just the latest big name to delete his Facebook account. Other celebrities who have quit the social network include pop singer Cher, Star Wars actor Mark Hamill, actor Will Ferrell and wealthy businessman Elon Musk. As the #DeleteFacebook movement continues to gain pace, celebrities aren’t the only people quitting. The figures show people are leaving Facebook in droves: the social network’s active users are falling, so even if people haven’t actually deleted their accounts, they have stopped posting and engaging regularly.

Why you should quit Facebook too

Scandals involving Facebook are continuing to mount. The social network came under huge scrutiny around the time of the Cambridge Analytica revelations back in 2018. It has also been the subject of several data breaches over the last two years. Last year, users of Facebook’s Messenger voice-to-text functionality were given another reason to delete their accounts when it emerged that contractors were listening to recordings. Facebook Messenger itself is not a secure way to communicate with friends, because it is not end-to-end encrypted. 
Facebook is integrating its back end with WhatsApp and said that it would end-to-end encrypt Messenger, but these plans appear to be delayed indefinitely. At the same time, concerns remain about Facebook’s use of facial recognition technology. The company has just agreed to pay $550 million to a group of Facebook users in Illinois, who claimed that the firm’s facial recognition tool violated privacy laws. Meanwhile, last year it emerged that Facebook had tested a terrifying facial recognition app on employees and their friends. You can say you will secure your Facebook account, but in reality, this is very difficult to do. Take, for example, the new Off-Facebook Activity tool, which the social network says shows you how you are being tracked online. This helps to some extent, but the social network will still be collecting your data.

Deleting Facebook, one step at a time

I’ve outlined just some of the many reasons for deleting your Facebook account. If you are now ready to do so, you need to ensure you are deleting your account, not deactivating it. If you want to try out deactivating first, Facebook offers a handy guide on how to do it. To permanently delete your account, the steps are as follows:

1. Click the arrow at the top right of any Facebook page.
2. Click Settings, then click Your Facebook Information in the left column.
3. Click Deactivation and Deletion.
4. Choose Delete Account, then click Continue to Account Deletion.
5. Enter your password, click Continue and then click Delete Account.

Facebook does give you the ability to backtrack on your decision for 30 days. However, it can take a full 90 days for the social network to completely delete your account. If you really can’t part with your Facebook account yet, even if you don’t use it often, there’s another way you can help stop Facebook from accessing your data. Your phone is with you all the time, potentially allowing Facebook to collect your location information. 
Therefore, if you care about your privacy and security, it’s a good idea to at least delete the Facebook app from your smartphone. Source
  21. By Stephanie Condon for Between the Lines The social media giant reported earnings and revenue in line with expectations and steady growth in its number of active users. Facebook published fourth quarter results in line with market estimates on Wednesday. Nevertheless, the social media giant's shares sank in after-hours trading. Diluted earnings per share for the fourth quarter were $2.56 on revenue of $21.082 billion, up 25 percent year-over-year. Analysts expected earnings of $2.53 on revenue of $20.89 billion. For the full year, Facebook's EPS came to $6.43 on revenue of $70.697 billion. "We had a good quarter and a strong end to the year as our community and business continue to grow," CEO Mark Zuckerberg said in a statement. "We remain focused on building services that help people stay connected to those they care about." Facebook's daily active users were 1.66 billion on average for December 2019, an increase of 9 percent year-over-year. Its monthly active users totaled 2.5 billion as of December 31, 2019, an increase of 8 percent year-over-year. The number of people active daily on at least one of Facebook's products -- including Facebook, Instagram, Messenger and WhatsApp -- was 2.26 billion on average for December 2019, an increase of 11 percent year-over-year. Monthly active people for Facebook products was 2.89 billion as of December 31, 2019, an increase of 9 percent year-over-year. Turning to revenue outlook, Facebook expects its year-over-year total revenue growth rate in Q1 to decelerate by a low to mid single-digit percentage point as compared to the Q4 growth rate. This deceleration is expected due to the maturity of the business, said Facebook CFO David Wehner, as well as the increasing impact of global privacy regulations and headwinds related to ad targeting. On Wednesday's conference call, Zuckerberg noted that Facebook now has more than 1,000 engineers working on privacy projects. 
A day earlier, Facebook released its Off-Facebook Activity (OFA) tools, which allow users to learn more about what data Facebook and its partners collect about them. As Steven J. Vaughan-Nichols notes for ZDNet, Facebook and other companies are now required to disclose this kind of information under the California Consumer Privacy Act, a law inspired by privacy violations from Facebook and other tech companies. "It's going to take time but over the next decade, I want us to build a reputation around privacy that's as strong as our reputation around building good, stable services," Zuckerberg said on the call. Source
  22. Facebook will pay over half a billion dollars to settle a class action lawsuit that alleged systematic violation of an Illinois consumer privacy law. The settlement amount is large indeed, but a small fraction of the $35 billion maximum the company could have faced. Class members — basically Illinois Facebook users from mid-2011 to mid-2015 — may expect as much as $200 each, but that depends on several factors. If you’re one of them, you should receive some notification once the settlement is approved by the court and the formalities are worked out. The proposed settlement would require Facebook to obtain consent in the future from Illinois users for such purposes as face analysis for automatic tagging. This is the second major settlement from Facebook in six months; a seemingly enormous $5 billion settlement of FTC violations was announced over the summer, but it’s actually a bit of a joke. The Illinois suit was filed in 2015, alleging that Facebook collected facial recognition data on images of users in the state without disclosure, in contravention of the state’s 2008 Biometric Information Privacy Act (BIPA). Similar suits were filed against Shutterfly, Snapchat, and Google. Facebook pushed back in 2016, saying that facial recognition processing didn’t count as biometric data, and that anyway Illinois law didn’t apply to it, a California company. The judge rejected these arguments with flair, saying the definition of biometric was “cramped” and the assertion of Facebook’s immunity would be “a complete negation” of Illinois law in this context. Facebook was also suspected at the time of heavy lobbying efforts towards defanging BIPA. One state senator proposed an amendment after the lawsuit was filed that would exclude digital images from BIPA coverage, which would of course have completely destroyed the case. 
It’s hard to imagine such a ridiculous proposal was the suggestion of anyone but the industry, which tends to regard the strong protections of the law in Illinois as quite superfluous. As I noted in 2018, the Illinois Chamber of Commerce proposed the amendment, and a tech council there was chaired by Facebook’s own Manager of State Policy at the time. Facebook told me then that it had not taken any position on the amendment or spoken to any legislators about it. 2019 took the case to the 9th U.S. Circuit Court of Appeals, where Facebook was again rebuffed; the court concluded that “the development of face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.” Facebook’s request for a rehearing en banc, which is to say with the full complement of judges there present, was unanimously denied two months later. At last, after some 5 years of this, Facebook decided to settle, a representative told TechCrunch, “as it was in the best interest of our community and our shareholders to move past this matter.” Obviously it admits to no wrongdoing. The $550 million amount negotiated is “the largest all-cash privacy class action settlement to date,” according to law firm Edelson PC, one of three that represented the plaintiffs in the suit. “Biometrics is one of the two primary battlegrounds, along with geolocation, that will define our privacy rights for the next generation,” said Edelson PC founder and CEO Jay Edelson in a press release. “We are proud of the strong team we had in place that had the resolve to fight this critically important case over the last five years. We hope and expect that other companies will follow Facebook’s lead and pay significant attention to the importance of our biometric information. Source
  23. Facebook's Off-Facebook Activity tool is now available to everyone

Facebook CEO Mark Zuckerberg announced today that a new privacy tool called Off-Facebook Activity is now available to Facebook users around the globe. Designed to improve transparency, Off-Facebook Activity provides information about data that third-party businesses share with Facebook. Facebook uses the provided information to show advertisements to its users, for suggestions, e.g. groups or businesses, or to help organizations "understand how their website, app, or ads are performing". The tool provides a summary of the information and an option to clear it from the Facebook account. Off-Facebook Activity should be available in desktop and mobile versions of Facebook. Facebook users need to open the Settings of the service, select "Your Facebook Information" from the left column and then "Off-Facebook Activity" on the page that opens. Tip: you can open the page directly as well using this link: https://www.facebook.com/off_facebook_activity/ A click on the link opens a summary and information. It starts with a list of companies or applications that shared data with Facebook. These are just examples and may not reflect the full list of companies and apps that shared data with Facebook. The page provides an explanation that includes an example: Jane buys a pair of shoes from an online clothing and shoe store. The store shares Jane's activity with us using our business tools. We receive Jane's off-Facebook activity and we save it with her Facebook account. The activity is saved as "visited the Clothes and Shoes website" and "made a purchase". Jane sees an ad on Facebook for a 10% off coupon on her next shoe or clothing purchase from the online store. More interesting than the summary or the description is the "what you can do" section. It lists the following options:

Manage your Off-Facebook Activity -- (requires the account password on desktop). 
Lists apps and websites that shared information with Facebook. Each is listed with a name and when the information was received. You can click on any item to display details, e.g. how many interactions were shared, and settings to turn off future activity for that particular company, or to give feedback.

Clear History -- The option disconnects the data from the account but does not prevent Facebook from receiving future data. Also note that Facebook uses the term "disconnect" and not delete or remove; this suggests that the data may not be deleted outright or at all.

Access your information -- A list of information that is categorized by Facebook; not necessarily relevant to Off-Facebook Activity.

Download your information -- An option to download information that Facebook has about your account and your activity.

Manage Future Activity -- An option to turn off Off-Facebook Activity entirely to prevent the linking of third-party data with the Facebook account in the future. Also provides options to manage individual items that you have blocked using "Manage your Off-Facebook Activity".

If you don't want Facebook to use third-party data and associate it with your account, you need to do two things:

1. Clear the History.
2. Disable Off-Facebook Activity.

Note that Facebook linked the Future Activity option to its login system. A warning is displayed to users who click on the turn-off option that doing so will prevent the user from "logging into apps and websites with Facebook".

Closing Words

The Off-Facebook Activity tool may be an eye-opener to some users as it lists apps, websites, and companies that may have shared data with Facebook. Sharing does not necessarily mean that the data was sold to Facebook but it is possible that this was the case.

Source: Facebook's Off-Facebook Activity tool is now available to everyone (gHacks - Martin Brinkmann)
  24. The Electronic Frontier Foundation (EFF) on Monday announced that its research into the Ring app’s Android version identified several embedded third-party trackers sucking up “a plethora” of personal information. Three of the trackers aren’t included in Ring’s privacy notice—a list last updated a year and eight months ago. The civil liberties group, whose work focuses on privacy and other digital rights, said it had observed Ring for Android’s activity using tools for inspecting web traffic. EFF researchers found it was delivering users’ personal information to four marketing and analytics firms, including Facebook. In Facebook’s case, Ring hands over data whether its customers have Facebook accounts or not, the EFF said. Ring’s privacy policy makes clear that it uses web analytics services. “The service providers that administer these services use automated technologies to collect data (such as email and IP addresses) to evaluate use of our websites and mobile apps,” it says. However, the policy also claims to identify which third-party services specifically are used by the company. The list, last updated in May 2018, does not include Facebook and other trackers currently in use. Screenshot: Ring.com “Like many companies, Ring uses third-party service providers to evaluate the use of our mobile app, which helps us improve features, optimize the customer experience, and evaluate the effectiveness of our marketing,” a Ring spokesperson told Gizmodo. According to EFF’s research, Ring for Android version 3.21.1 delivers a range of personal information to the following sites: branch.io, mixpanel.com, appsflyer.com and facebook.com. Gizmodo also inspected Ring’s web traffic and can confirm the EFF’s findings. “The danger in sending even small bits of information is that analytics and tracking companies are able to combine these bits together to form a unique picture of the user’s device,” EFF said. 
Privacy researchers refer to this as a digital “fingerprint,” which marketing companies use to paint a complete portrait of a person’s likes and activities. A Ring spokesperson said that Ring takes steps to ensure its service providers’ use of customer data is “contractually limited to appropriate purposes such as performing these services on our behalf and not for other purposes.” In the case of business analytics service MixPanel—the only tracker identified by EFF listed among Ring’s third-party services—Ring provides access to users’ names, email addresses, and device information, such as OS version and model, EFF said. Ring told Gizmodo that MixPanel is used to target messaging within the app when new features become available, including security-related settings. Other trackers help the company identify which in-app features are performing the best, it said. Ring was purchased by Amazon in the summer of 2018. The company markets a line of home security products, including the popular Ring Doorbell, which uses Amazon Web Services (AWS) servers to store footage. Privacy advocates have scrutinized Ring heavily over the past year, largely due to its quickly expanding local law enforcement partnerships, the terms of which appear often to restrain public officials from speaking freely about the services Ring provides. Gizmodo reported last year, for example, that Ring had edited the written statements of police officials. In some cases, Ring intervened to omit the word “surveillance” from quotes attributed to senior police officials, warning them that use of the term could elicit “privacy concerns” among consumers. 
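The "fingerprinting" the EFF describes can be sketched in a few lines: attributes that are individually innocuous, when combined and hashed, yield a stable identifier for one specific device. This is an illustration only; the attribute names below are hypothetical, not Ring's actual fields.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Combine small, individually innocuous device attributes into a
    stable identifier by hashing their canonical JSON encoding."""
    # sort_keys makes the encoding order-independent, so the same set of
    # attributes always produces the same fingerprint
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# No single value here identifies a person, but the combination is
# close to unique for one device:
fp = device_fingerprint({
    "os_version": "Android 10",
    "model": "Pixel 3",
    "app_version": "3.21.1",
    "timezone": "America/Los_Angeles",
})
print(fp)
```

A tracker that receives these attributes from several different apps can compute the same fingerprint each time and thereby link activity across apps, which is exactly the combination risk the EFF warns about.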
"Ring claims to prioritize the security and privacy of its customers," EFF Senior Staff Technologist William Budington said in a statement, "yet time and again we've seen these claims not only fall short, but harm the customers and community members who engage with Ring's surveillance system."

Update: This article was updated to reflect that Ring data collected by Gizmodo confirmed EFF's findings.

Source
  25. Security researchers have criticised Facebook's head of communications, Sir Nick Clegg, for his response to the hacking of Amazon chief Jeff Bezos.

Mr Bezos' phone was hacked in May 2018 after he received a WhatsApp message loaded with malware. But in an interview with the BBC, Sir Nick said WhatsApp's encrypted messages could "not be hacked into". And he failed to acknowledge security flaws in the app that had let hackers compromise their targets' smartphones.

"Nobody tell Nick Clegg about how exploits work," joked cyber-security researcher Kevin Beaumont.

Mr Bezos' phone was compromised after he received a WhatsApp message containing a malicious file from the personal number of Saudi Arabia's Crown Prince Mohammed bin Salman, according to the Guardian newspaper, which broke the story. An investigation suggested the phone secretly started sharing huge amounts of data after he received the message. The kingdom's US embassy has described the allegations as "absurd".

When asked about the hack in an interview with BBC Radio 4's Today programme, Sir Nick said: "It can't have been anything when the message was sent in transit because that's end-to-end encrypted on WhatsApp. We're as sure as you can be that the technology of end-to-end encryption cannot... be hacked into."

But cyber-security researchers have pointed out that security flaws in WhatsApp's software have previously been discovered. Two significant problems were disclosed in 2019. One let hackers remotely install surveillance software on phones just by initiating a voice call, even if the recipient did not answer. Another let surveillance tools be deployed by sending the recipient an infected MP4 video clip.

Sir Nick told the BBC: "If someone sends you a malicious email, it only comes to life when you open it." However, some of the most significant vulnerabilities in WhatsApp let hackers install their malware without the recipient doing anything at all.
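The researchers' objection rests on a distinction Sir Nick's answer blurred: end-to-end encryption protects a message *in transit*, but once the recipient's device decrypts it, any code running on that device, including spyware planted via an exploit, sees the plaintext. A toy sketch of that distinction (the XOR "cipher" below is a deliberately trivial stand-in, nothing like WhatsApp's actual Signal-protocol encryption):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-session-key"   # hypothetical key held only by the two endpoints
plaintext = b"meet at noon"

# What travels over the network is ciphertext; an eavesdropper on the
# wire (or on WhatsApp's servers) sees only this:
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext

# The recipient's app decrypts the message on the device to display it:
received = xor_cipher(ciphertext, key)

# Spyware running on that same device needs no key at all: it reads the
# decrypted message (or the screen, keyboard, and microphone) directly.
spyware_sees = received
assert spyware_sees == plaintext
```

In other words, the encryption claim can be entirely true while the phone is still fully compromised; the exploit attacks the endpoint, not the transport.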
Facebook told the BBC it had nothing to add to Sir Nick's comments. Source