Showing results for tags 'facebook'.



More search options

  • Search By Tags

    Type tags separated by commas.
  • Search By Author

Content Type


Forums

  • Site Related
    • News & Updates
    • Site / Forum Feedback
    • Member Introduction
  • News
    • General News
    • FileSharing News
    • Mobile News
    • Software News
    • Security & Privacy News
    • Technology News
  • Downloads
    • nsane.down
  • General Discussions & Support
    • Filesharing Chat
    • Security & Privacy Center
    • Software Chat
    • Mobile Mania
    • Technology Talk
    • Entertainment Exchange
    • Guides & Tutorials
  • Off-Topic Chat
    • The Chat Bar
    • Jokes & Funny Stuff
    • Polling Station

Find results in...

Find results that contain...


Date Created

  • Start

    End


Last Updated

  • Start

    End


Filter by number of...

Found 378 results

  1. Breaking up Facebook? Try data literacy, social engineering, personal knowledge graphs, and developer advocacy

Yes, Facebook is a data-driven monopoly. But the only real way to break it up is by getting hold of its data and functionality, one piece at a time. It will take a combination of tech, data, and social engineering to get there. And graphs -- personal knowledge graphs.

People's relationship with Facebook has gone from infatuation to passive aggression. People love to hate Facebook, often by sharing angry posts on Facebook itself, usually only to return to it shortly afterward. The truth is that Facebook has made it pretty easy for people to take issue with it, with blunder after blunder exposing its practices on data management, user rights, privacy, transparency, and control. Last week it was friendly fire, so to speak, as Facebook co-founder Chris Hughes called for Facebook to be broken up by the FTC. Hughes pointed out that Facebook is, in effect, a monopoly. He was not the first to do so. The Economist was a forerunner, notable for pointing to the real nature of the monopoly, which is not limited to Facebook: if data is the new oil, Big Tech owns the oil rigs. Data-driven products such as Facebook harvest data, use it to enhance their product, and harvest more data. As noted by ZDNet's Larry Dignan, however, it's questionable whether the FTC would consider Facebook a monopoly. It may take a while, if it ever happens. It's also questionable whether a breakup would work, if it ever happened -- as internet pioneer Jaron Lanier has noted, business models based on services in exchange for data are broken, and invariably lead to asymmetry. So, Facebook clones based on the same model would probably not be substantially different. To fix this, it will take a combination of tech, data, and social engineering.

DATA LITERACY: PAYING THE PRICE OF DATA

The surveys on people's attitudes toward data collection are telling.
Respondents want more privacy, but are not willing to pay for it. A large share of them would be willing to give out even more data in exchange for free services. This speaks volumes about the state of data sovereignty and awareness in the world right now. Even though being profiled on an individual level should be alarming in and of itself, it's the bigger picture that matters most here. Your data is the oil that fuels Facebook. With every scroll, click, like, and read, you are feeding its monopoly and voting for its practices with your thumbs. Although, to be fair, Facebook tries its best to get your data even if you don't use it, or don't have an account in the first place. The amount of data Facebook has amassed via these practices gives it a substantial head start in the data and AI race. In a world increasingly run via machine learning algorithms, the data needed to train those algorithms is paramount. Facebook's data trove has also attracted lots of talent, as researchers need data to make algorithms work. This, however, brings us to a very real issue. As Hughes pointed out, the backlash against Facebook has intensified since the Cambridge Analytica scandal. Yet #deletefacebook has not led to much change, as the lack of viable alternatives means people are largely left with two options: giving up social networking altogether, or continuing to use Facebook. But is there really no alternative to Facebook? To answer this question, we must start by breaking up Facebook in terms of functionality. Facebook really is a conglomerate of functionalities all tied together, further expanded via its acquired ecosystem -- WhatsApp and Instagram. Media and form factors aside, it all comes down to a few core things: contacts, 1-on-1 messaging, open and closed group posts, public and restricted newsfeed posts, and events. Though few alternatives offer all of those in the seamless way Facebook does, alternatives do exist.
WhatsApp used to be one of those, but other messaging apps with contact management and support for groups and channels exist, too. Some of them, like Telegram, Mastodon, or Diaspora, are open source, and they give users the option to have more control over their data. But they are mostly used by fringe groups or people who have been deplatformed. So, why have they not caught on?

Facebook alternatives across the Fediverse

Fragmentation is one reason. If some of your contacts use AppA while others use AppB, how can they communicate with each other? But that's no longer an issue -- not if you choose an application from the Fediverse, or the Federation, or the Activity Web, where these applications live. These are applications built on open communication protocols, able to interoperate with each other. Though each application comes with its own UI/UX and perks, supporting those open protocols ensures that basic functionality such as following or posting works across all of them. This means you can keep up with people across applications. Pretty cool, I hear you say. It is -- but it's not good enough. Facebook's core functions should not be too hard to replicate, and many alternative social networking applications have done so. But part of the reason these applications have not gone mainstream yet is that, frankly, many of them are not ready for it. The functionality is often limited and wonky, the UI/UX is not ideal, and a good number of these applications are based on old programming frameworks that are having trouble keeping up with modern development and attracting contributors. Many of them have no mobile app counterparts to speak of, for example. There is an expanding ecosystem of alternative, open source social networking platforms. The good news is these platforms give users great control and offer interoperability. The bad news is they are not as easy to use as Facebook.
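The cross-server interoperability described here rests on shared document formats: in ActivityPub, for instance, following someone on another server boils down to exchanging small JSON "activities". A minimal sketch in Python of what such a document looks like (the server names are hypothetical, and real federation additionally requires HTTP delivery and request signing, which is omitted here):

```python
import json

def make_follow_activity(actor_url: str, target_url: str) -> str:
    """Build an ActivityStreams 'Follow' activity as a JSON document.
    Cross-server follows in the Fediverse are exchanges of documents
    shaped like this one, regardless of which app produced them."""
    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": actor_url,    # who is following
        "object": target_url,  # who is being followed
    }
    return json.dumps(activity)

# Hypothetical example: a user on one server following a user on another.
doc = make_follow_activity(
    "https://mastodon.example/users/alice",
    "https://pleroma.example/users/bob",
)
print(json.loads(doc)["type"])  # Follow
```

Because both sides agree on this shape rather than on an implementation, a Mastodon server and a Pleroma server can interpret each other's activities without sharing any code.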
(Image: Sean Tilley)

Regardless of their (varying) quality of implementation, however, these applications have a different philosophy altogether. They are typically not hosted by commercial entities shouldering the burden of installing, configuring, and keeping the back-end running. This means that potential users can do one of two things. They can find some machine to use as their personal server -- perhaps rented from a hosting or cloud provider -- and run the software there. Or, they can find someone who runs a server node of the application they want to use, and get an account on that node. The first option is not feasible for the vast majority of users. Although efforts such as the FreedomBox promise to make hosting your own applications easier, they are not yet at a point where they can offer seamless functionality. The second option may be more realistic. Families, friends, and communities of all sorts could chip in and either have their application of choice run for them as a hosted service, or run it themselves if they are savvy and motivated enough. Both options require paying a price, however. Not just a fee to cover the operational expenses of keeping the software running. More importantly, perhaps, the price of taking ownership and control of your data -- or of trusting whoever is running your software of choice with it. And doing the social engineering required to make something like this work.

DO YOU LIKE SOCIAL ENGINEERING?

Yes, it will take some social engineering to make this work. Even with marketing budgets in place, getting something off the ground is not easy. There is always a switching cost, and besides having to advocate to a number of contacts, Facebook sure does not make it easy to just take your data and walk away. To begin with, until recently, that was not even an option. Now there is data export functionality in Facebook, but the data you get is far from complete, and hardly usable to boot.
The format is proprietary and undocumented, forcing anyone who wants to use that data to write custom parsers and adapters. And there is crucial information missing from the export -- most prominently, likes. You may have heard the psychographics mantra, aka "5 likes are enough to profile you," as well as the fact that Facebook and many of its partners, authorized or otherwise, have access to that data. The users who generated the data, however, do not have access to it. All you get in the data export is the fact that you liked something -- not what it is. And that's without even mentioning all the other information Facebook does not export. This isn't just data that could help you bootstrap another application. It's data you, or anyone you authorize, could use to feed machine learning algorithms. Facebook makes sure you don't have it, GDPR or no GDPR. So, there is a role for regulation here: regulation should force Facebook to give users all the information it has on them at the push of a button. Deleting your account and all of its data should be equally easy, too. (Surprise: it is not.) To be clear, that alone would not be enough to enable moving away from Facebook -- or any other platform, for that matter. But it would be a major step toward making it possible. It would give users the option of doing as they please with their data. Facebook displays the same kind of contempt toward calls for transparency from the US and UK administrations as it does toward calls from users for access to their own data. Maybe starting with data sovereignty for user data would lead to access to advertising data, too.

Personal knowledge graphs

Data sovereignty is precisely the vision behind Solid, the application development framework championed by Tim Berners-Lee. Berners-Lee, credited as the inventor of the web, is working on Solid with the help of the startup Inrupt, and contributors from the open source and research communities. Solid is developing so-called pods.
You can think of pods as personal data vaults, keeping all your data in one place. From there, you can authorize applications to use the data, giving read or write access to your pod. And pods could be hosted on your own machine, or in the cloud. It sounds great, but it's tricky. Besides all the non-technical reasons, there are technical obstacles, too. If this caught on, applications would have to fetch data from pods all over the web and the cloud in order to work. Querying in distributed environments is notoriously hard. But Solid has an ace up its sleeve there: knowledge graphs. Assembling data for an application from many sources is challenging; personal knowledge graphs built on the Linked Data stack can help achieve that.

(Image: Ruben Verborgh)

Knowledge graphs are a rebranding of a technology that goes back 20 years. It started out as the Semantic Web, was rebranded as Linked Data, and now goes by the knowledge graph moniker. This technology enables a number of things, including federated querying. The Linked Data stack (RDF, URIs, and SPARQL) can make any piece of data accessible and queryable on the web. Solid is based on this technology, effectively aiming to build a personal knowledge graph for each of its users. Needless to say, this is quite ambitious, which may help explain why there is still no production-ready software for Solid. SPARQL is more than a query language. It is also a protocol, and can be used to execute federated queries across many endpoints on the web. But this is far from trivial, from both a performance and a usability point of view. Using SPARQL and the Linked Data stack for this type of application has been challenging even for connoisseurs. Will things be different this time around?

DEVELOPERS, DEVELOPERS, DEVELOPERS

It looks like the Solid team has the mindset and the people to make it work, while the tech is a work in progress.
Ruben Verborgh, Semantic Web professor, researcher, and Inrupt technology advocate, and his team have been working on enriching the Linked Data stack with tools aimed at modern developers. Their tools are based on JavaScript and frameworks such as React, and aim to make programming Solid a seamless experience. Verborgh and his team are trying to meet developers where they are. Rather than expecting developers to switch to the Linked Data stack, they are giving them tools to build applications on Solid. This is smart, pragmatic, and a bit ironic, considering React's origins at Facebook. But will it work? Berners-Lee's leverage could help, but it will take more than good intentions to succeed. In 2009, Berners-Lee had TED audiences chanting in support of his previous project, Linked Open Data, which is about making open data available on the web as Linked Data. Today, open datasets may be growing, but the way these datasets are made available is different. Getting to the hearts and minds of developers is key to the success of Solid, as much as it is for any other software project today. Getting Solid into production-ready shape, and building applications on top of it, largely depends on it. Attracting developers is a fine art. Deep pockets -- which we don't know whether Solid has -- are a good starting point, but they do not guarantee success. But just think of the possibilities that open up by putting the Fediverse and Solid together. This could be the key to social networking minus the dictator in the middle, data sovereignty, and a whole new ecosystem of innovation. Let's hope Solid really gets solid soon. Source
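The pod model the article describes -- data stays in one user-controlled vault, and applications receive revocable read or write grants -- can be made concrete with a toy sketch. This is plain Python with invented class and method names, not Solid's actual APIs, purely to illustrate the access pattern:

```python
class Pod:
    """Toy model of a Solid-style personal data vault: the data lives
    in one place, and each app gets an explicit, revocable grant."""

    def __init__(self):
        self._data = {}     # resource name -> stored value
        self._grants = {}   # app id -> set of modes ("read"/"write")

    def grant(self, app: str, *modes: str) -> None:
        """The pod owner authorizes an app for the given access modes."""
        self._grants.setdefault(app, set()).update(modes)

    def revoke(self, app: str) -> None:
        """The pod owner withdraws all of an app's access."""
        self._grants.pop(app, None)

    def read(self, app: str, resource: str):
        if "read" not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no read access")
        return self._data[resource]

    def write(self, app: str, resource: str, value) -> None:
        if "write" not in self._grants.get(app, set()):
            raise PermissionError(f"{app} has no write access")
        self._data[resource] = value

pod = Pod()
pod.grant("contacts-app", "read", "write")
pod.write("contacts-app", "friends", ["alice", "bob"])
print(pod.read("contacts-app", "friends"))  # ['alice', 'bob']
pod.revoke("contacts-app")  # the data stays; only the app's access ends
```

The revoke step is the point of the architecture: unlike on a centralized platform, the data outlives any single application's access to it, so switching apps does not mean abandoning your data.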
  2. Facebook to ban users from Live streaming if they violate community rules

A new one-strike policy comes after the livestreamed massacre in Christchurch.

(Image: Picture Alliance. Facebook's Live streaming feature policy has changed.)

Facebook said Tuesday it would ban users from its Live streaming feature for a set period of time if they violate certain community guidelines. The move is a response to the mosque massacre that occurred in Christchurch, New Zealand, in March, in which a gunman livestreamed his killing of 50 victims. "Starting today, people who have broken certain rules on Facebook -- including our Dangerous Organizations and Individuals policy -- will be restricted from using Facebook Live," Guy Rosen, Facebook's vice president of integrity, wrote in a Tuesday blog post. A comprehensive list of offenses that would see a user barred from Live wasn't included, although the examples given all had to do with circulating terrorist-related content. It's one part of a two-pronged attack against malicious livestreaming: in the same post, Rosen announced that Facebook is investing $7.5 million in research to develop better video detection technology. Rosen explained that Facebook has historically banned rule-breaking users from its entire platform, but that its new policy sets rules that specifically bar people from the Live service. "Today we are tightening the rules that apply specifically to Live," Rosen wrote. "We will now apply a 'one strike' policy to Live in connection with a broader range of offenses. From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time -- for example 30 days -- starting on their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time."
He added that a user banned from Live will "over the coming weeks" also be restricted from other services on the platform, such as creating ads. Weeks after the massacre, Facebook said that the 17-minute video wasn't reported during the period it was live, and that the first user report came 12 minutes after the livestream ended. In other words, the original video was available on Facebook for a full 29 minutes. The video was then re-uploaded by users over a million times. Facebook was able to purge 1.5 million uploads of the video, and 1.2 million were blocked before going live on the platform. To assist with such purges, the company is investing $7.5 million in research across the University of Maryland, Cornell University, and the University of California, Berkeley, to improve video detection software. Specifically, the company wants to get better at detecting edited versions of clips -- say, a banned clip that has its audio and colors distorted -- and at distinguishing people who innocently share such material from those who intentionally manipulate videos and photos to bypass Facebook's systems. "Dealing with the rise of manipulated media will require deep research and collaboration between industry and academia," Rosen wrote. "In the months to come, we will partner more so we can all move as quickly as possible to innovate in the face of this threat." Source
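The detection problem described above -- recognizing a banned clip after its colors have been distorted -- is why exact file hashes are not enough: changing a single pixel changes the byte-level hash completely. Perceptual hashes compare content rather than bytes. A minimal average-hash sketch over one tiny grayscale frame (this is a generic illustration of the technique, not Facebook's actual system, which would hash many frames plus audio):

```python
def average_hash(frame):
    """Perceptual 'average hash' of a grayscale frame (a list of rows
    of 0-255 pixel values): each bit records whether a pixel is
    brighter than the frame's mean brightness."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x4 "frame" and a brightness-shifted copy of it.
frame = [[10, 200, 30, 220],
         [15, 210, 25, 230],
         [12, 205, 35, 225],
         [18, 215, 28, 235]]
shifted = [[p + 20 for p in row] for row in frame]  # simulated distortion

# Every raw pixel value differs, but the perceptual hash is identical,
# because each pixel moved by the same amount as the mean did:
print(hamming(average_hash(frame), average_hash(shifted)))  # 0
```

A near-zero Hamming distance flags a likely re-upload even though the files are not byte-identical; the research Facebook is funding targets far more aggressive edits than this uniform brightness shift.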
  3. With the look of someone betrayed, Facebook’s CEO has fired back at co-founder Chris Hughes and his brutal NYT op-ed calling for regulators to split up Facebook, Instagram, and WhatsApp. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg told France Info while in Paris to meet with French President Emmanuel Macron. Zuckerberg’s argument boils down to the idea that Facebook’s specific problems with privacy, safety, misinformation, and speech won’t be directly addressed by breaking up the company, and that a breakup would instead actually hinder its efforts to safeguard its social networks. The Facebook family of apps would theoretically have fewer economies of scale when investing in safety technology like artificial intelligence to spot bots spreading voter suppression content. Hughes claims that “Mark’s power is unprecedented and un-American” and that Facebook’s rampant acquisitions and copying have made it so dominant that it deters competition. The call echoes other early execs, like Facebook’s first president Sean Parker and growth chief Chamath Palihapitiya, who’ve raised alarms about how the social network they built impacts society.

(Photo: Facebook’s co-founders, from left: Dustin Moskovitz, Chris Hughes, and Mark Zuckerberg)

But Zuckerberg argues that Facebook’s size benefits the public. “Our budget for safety this year is bigger than the whole revenue of our company was when we went public earlier this decade. A lot of that is because we’ve been able to build a successful business that can now support that. You know, we invest more in safety than anyone in social media,” Zuckerberg told journalist Laurent Delahousse.
The Facebook CEO’s comments were largely missed by the media, in part because the TV interview was heavily dubbed into French with no transcript. But written out here for the first time, his quotes offer a window into how deeply Zuckerberg dismisses Hughes’ claims. “Well [Hughes] was talking about a very specific idea of breaking up the company to solve some of the social issues that we face,” Zuckerberg says, before trying to decouple solutions from anti-trust regulation. “The way that I look at this is, there are real issues. There are real issues around harmful content and finding the right balance between expression and safety, for preventing election interference, on privacy.” Claiming that a breakup “isn’t going to do anything to help” is a more unequivocal refutation of Hughes’ claim than that of Facebook VP of communications and former UK deputy Prime Minister Nick Clegg, who wrote in his own NYT op-ed today that “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications . . . Big in itself isn’t bad. Success should not be penalized.”

(Photo: Mark Zuckerberg and Chris Hughes)

Something certainly must be done to protect consumers. Perhaps that’s a breakup of Facebook. At the least, banning it from acquiring more social networks of sufficient scale, so it couldn’t snatch another Instagram from its crib, would be an expedient and attainable remedy. But the sharpest point of Hughes’ op-ed was how he identified that users are trapped on Facebook. “Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms,” he writes. After Cambridge Analytica, “people did not leave the company’s platforms en masse.
After all, where would they go?” That’s why, given critics’ calls for competition and Zuckerberg’s own support for interoperability, a core tenet of regulation must be making it easier for users to switch from Facebook to another social network. As I’ll explore in an upcoming piece, until users can easily bring their friend connections, or ‘social graph’, somewhere else, there’s little to compel Facebook to treat them better. Source
  4. Facebook may face 20 years of privacy oversight by FTC

Two decades of oversight are being discussed in settlement talks, Reuters reports.

Facebook may soon agree to 20 years of oversight of its privacy policies and practices by the US government, Reuters reported Monday. An investigation by the Federal Trade Commission is trying to determine whether Facebook's actions violated a 2011 agreement with the government in which it pledged to keep user data private. Facebook has said it didn't violate the consent decree. Under the agreement, Facebook agreed to get permission from users before sharing their data with third parties. In addition, the tech giant is required to have a third party conduct audits every two years for 20 years to ensure the program is effective. The consumer watchdog began investigating Facebook after revelations surfaced last year that UK consultancy Cambridge Analytica harvested the data of as many as 87 million users without their permission. The social media giant and the consumer protection agency have reportedly been in discussions for months to settle the investigation. Facebook said last month that it had set aside $3 billion to cover a possible fine related to the ongoing investigation. The as-yet-unannounced FTC fine, which Facebook said could be as high as $5 billion, would be the largest ever against a US tech company. The FTC's previous record-setting fine against a tech company for breaking a privacy agreement was $22.5 million, against Google in 2012. A settlement announcement could be a month away, a source told Reuters. Facebook declined to comment, citing ongoing discussions. The FTC didn't immediately respond to a request for comment. Source
  5. The Turkish Personal Data Protection Authority (KVKK) has fined Facebook 1.65 million Turkish lira ($270,000) for the Photo API bug that exposed the personal photos of 300,000 Turkish users. In December, Facebook announced that the photos of 6.8 million users might have been exposed by a bug in the Photo API that allowed third-party apps to access them. The bug impacted over 870 developers; only apps granted access to photos by the user could have exploited it. According to Facebook, the flaw exposed user photos for 12 days, between September 13 and September 25, 2018. The KVKK fined the social network giant for failing to quickly address the issue and for neglecting to notify Turkish authorities of the incident. The fine comprises 1 million lira for failing to address the issue in time, with the remainder for failing to notify the KVKK of the impact on Turkish Facebook users. The KVKK is also investigating the Facebook hack that exposed the access tokens of 50 million users in September 2018, which means the KVKK may fine the social network again. Source
  6. Facebook sues analytics firm Rankwave over alleged data misuse

(Image: Getty Images. Facebook's Mark Zuckerberg was in Paris on Friday to meet President Emmanuel Macron.)

Facebook is suing a South Korean firm it accuses of unlawfully using data to sell marketing and advertising. The social network is asking a judge to force Rankwave to allow it to audit the firm’s activities to see if user data was obtained and potentially sold. A source at Facebook told the BBC it was as yet unable to say how much data or how many users may be affected. The network said the move would "send a message to developers that Facebook is serious about enforcing our policies". "Facebook was investigating Rankwave’s data practices in relation to its advertising and marketing services," said Jessica Romero, Facebook's director of platform enforcement. "Rankwave failed to co-operate with our efforts to verify their compliance with our policies, which we require of all developers using our platform." The BBC was unable to reach Rankwave for comment on Friday.

Tracking users' posts

According to court documents filed in California on Friday, Facebook accuses Rankwave of using at least 30 different apps to "track and analyse" comments and likes on Facebook pages. Rankwave also had a consumer app that, after gaining the user's consent, would track the popularity of that user's posts. The app would calculate a "social influence score", Facebook said. However, the social network said it had information that since 2014, Rankwave had been using data gathered by its apps "for its own business purposes, which include providing consulting services to advertisers and marketing companies". In its lawsuit, Facebook accuses Rankwave of ignoring repeated requests to open itself up to an audit and provide evidence relating to data it had allegedly obtained.
Facebook wants a judge to force Rankwave to take those steps, as well as pay an unspecified amount in damages. Facebook said the data company had harmed its "reputation" and "public trust". Facebook said it began investigating Rankwave in June 2018; the firm remained active on the network until last month.

'We need new internet rules'

The case will probably draw comparisons with Cambridge Analytica, the UK-based data analytics firm that abused private Facebook data in order to inform political campaigning efforts. The discovery of that incident plunged Facebook into a crisis. On Friday, Facebook co-founder and chief executive Mark Zuckerberg met French President Emmanuel Macron in Paris to discuss potential regulation of social networks. "We need new rules for the internet that will spell out the responsibilities of companies and those of governments," Mr Zuckerberg told French TV channel France 2 after the meeting. However, he did not address claims from fellow Facebook co-founder Chris Hughes that the company is too powerful and should be broken up. Source
  7. Facebook is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognize as extremist, according to a complaint made public on Thursday. The National Whistleblowers Center in Washington carried out a five-month study of the pages of 3,000 members who liked or connected to organizations proscribed as terrorist by the US government. Researchers found that the Islamic State group and al-Qaeda were "openly" active on the social network. More worryingly, Facebook's own software was automatically creating "celebration" and "memories" videos for extremist pages that had amassed sufficient views or "likes". The National Whistleblowers Center said it filed a complaint with the US Securities and Exchange Commission on behalf of a source who preferred to remain anonymous. "Facebook's efforts to stamp out terror content have been weak and ineffectual," read an executive summary of the 48-page document shared by the center. "Of even greater concern, Facebook itself has been creating and promoting terror content with its auto-generate technology." Survey results shared in the complaint indicated that Facebook was not delivering on its claims about eliminating extremist posts or accounts. The company told AFP it had been removing terror-linked content "at a far higher success rate than even two years ago" since making heavy investments in technology. "We don't claim to find everything and we remain vigilant in our efforts against terrorist groups around the world," the company said. Facebook and other social media platforms have been under fire for not doing enough to curb messages of hate and violence, while at the same time being criticized for failing to offer equal time for all viewpoints, no matter how unpleasant. In March, Facebook announced bans on praise or support for white nationalism and white separatism on the social network and Instagram. Source
  8. On paper, they would seem to have little in common. Tun Khin is a human rights activist who advocates for the persecuted Rohingya Muslims in his home country of Myanmar. Jessikka Aro is a Finnish journalist who exposed the international influence of Russian propagandists at the Internet Research Agency long before the rest of the world had ever heard of them. Lenny Pozner is an American father who lost his 6-year-old son, Noah, in the shooting at Sandy Hook Elementary in 2012. Ethan Lindenberger is almost a kid himself, a high school student who’s become a vaccination proponent despite his parents’ anti-vaccination beliefs.

(Photo: Ethan Lindenberger, seen here testifying before the Senate in March about his parents' anti-vaccine stance, is among those who've seen firsthand how dangerous online disinformation can be.)

But all four of them are bound by one unfortunate and common thread: they’ve all seen firsthand just how ugly—and downright dangerous—the spread of fake news and disinformation online can be. Which is why this week they gathered in Silicon Valley to talk with tech executives about what they’ve been through and what they want tech companies to do about it. The group met with Twitter on Tuesday, and another meeting was planned at Facebook Wednesday afternoon. The meetings, which were organized by a nonprofit advocacy group called Avaaz, come at a time of fierce debate over what responsibility tech companies have to limit the spread of toxic content on their platforms. Just last week, Facebook announced it was banning seven people, including Infowars conspiracy theorists Alex Jones and Paul Joseph Watson, under a policy that prohibits “dangerous individuals” from having any presence on Facebook. The bans prompted President Trump to lash out against tech companies over the weekend, ramping up accusations of censorship that have become a constant drumbeat on the right.
The discussions organized by Avaaz served as a counterpoint to all that pressure, as individual victims of online harassment campaigns came forward to tell tech companies exactly how they’ve been hurt by the hate and hoaxes that have festered on their platforms. “Our job as advocates is to make them stop for a minute and think about the implications of not acting fast enough,” says Oscar Soria, a senior campaigner with Avaaz. During Tuesday’s meeting with Twitter, the attendees took turns telling their stories. Aro shared the details of the global smear campaign that was lodged against her, after her reporting outed the Internet Research Agency. She explained the threats that have been made against her life and read a recent direct message she received while traveling in the Czech Republic, in which a stranger threatened to “castrate” her if she ever came back to the country. Aro says the harassment she’s received violates Finnish defamation laws, and she is in the process of pursuing cases against some of her harassers in court. And yet, she says, the complaints she’s filed to Twitter and Facebook often go unanswered, leaving local investigators to do the work the American companies won't. “I'm basically here, to put it simply, to give a user report live, because they haven't reacted to the ones that I have made online,” Aro says. Khin described the trauma he’s seen in Rohingya refugee camps and pressed Twitter about why it continues to provide safe haven for Senior General Min Aung Hlaing, the commander-in-chief of the Myanmar military. The military was behind some of the accounts that notoriously flooded Facebook with anti-Islam rhetoric, and the United Nations called for its leaders to face genocide charges last year. Facebook has since banned Min Aung Hlaing and other accounts and pages that the UN linked to human rights abuses in the country. While the general's Twitter account hasn’t been active since last year, it remains up on the platform today. 
“He was the mastermind of the Rohingya genocide. The UN has said he was personally responsible. And Facebook has already banned him. What more evidence do they need?” Khin wrote in a tweet following the meeting. Lindenberger, meanwhile, discussed how his parents came to believe anti-vaccination propaganda on social media, leaving him and his siblings exposed to potentially deadly viruses like the measles. According to Soria, Lindenberger told Twitter executives that after he testified about this issue before the Senate, he himself became the subject of a disinformation campaign. Recently, he said, his own pastor told him to avoid church for his own protection. (WIRED wasn't able to reach Lindenberger.) Pozner, for his part, has faced such violent threats that he is participating in the meetings remotely. Ever since the Sandy Hook tragedy took his son's life, Pozner and his family have been forced to live in hiding, hounded by online death threats from people who believe that the shooting was a hoax. The conspiracy theory, propagated by figures like Alex Jones, has no basis in reality. Now, Pozner runs a non-profit called HONR Network aimed at ending online harassment campaigns, helping its victims, and working with tech companies to change their policies. Of all the tech platforms, Pozner says, Twitter has the farthest to go in terms of cracking down on hoaxes and harassment. "Twitter has allowed their platform to be used as a weapon of mass destruction for which they must take accountability," he says. Twitter spokesperson Liz Kelley told WIRED that the conversation on Tuesday centered on how Twitter can prohibit the “manipulation of the conversation, not serving as the arbiters of truth,” and how Twitter is enforcing the policies against hate speech and violent threats that are already in place. “Hearing these stories is a valuable way for us to inform our decisions and product investments going forward,” Kelley said. 
Facebook confirmed its executives met with the group, but declined to offer further comment. Avaaz's organizers also hoped to meet with executives from Google, whose video platform YouTube has helped promote some of the internet's worst conspiracies. As of Wednesday afternoon, a meeting with Google had not yet been scheduled. In addition to giving the group a chance to share their stories, Avaaz also encouraged Facebook and Twitter to adopt a policy that would alert people when they've been exposed to information marked false by third-party fact-checkers. Facebook has taken steps to expand fact-checking on its platform, recently announcing that it will limit the visibility of groups that repeatedly share content marked as false by fact-checkers. And just this week the company announced that fact-checkers will also begin vetting information on Instagram. Avaaz wants to see Twitter adopt its own fact-checking policy and to see Facebook build upon the one that's already in place. "This is a necessary step to restore public trust," Soria says. Social media companies have been historically reluctant to make such editorial decisions on their platforms. And, given the recent heightened accusations of liberal bias in Silicon Valley, including from the President of the United States, making decisions about who is right and wrong on the internet comes with risks for these companies. Pozner just hopes these meetings will underscore the fact that the risks he and other victims have faced are so much greater. "I am a strong proponent of the First Amendment, and free speech is an essential aspect of American society. However, there is a fundamental misunderstanding of people's rights and responsibilities online," Pozner says. 
"A person cannot violate my civil rights to be free of harassment, bullying, or to have my likeness manipulated and my family targeted with death threats and intimidation and then simply attempt to hide behind 'free speech.'" Update: 9:27 AM ET 5/9/2019 This story has been updated to include confirmation from Facebook about its meeting with Avaaz and to clarify the nature of Aro's reporting on the Internet Research Agency. Source
  9. Facebook co-founder Chris Hughes argues for a breakup of Facebook, but easier said than done Facebook could be a monopolist, but it's not a slam dunk the courts would see it that way. A Facebook breakup is interesting but the details are messy. Facebook's privacy pivot elicits skepticism at F8 Facebook co-founder Chris Hughes says the social networking giant and CEO Mark Zuckerberg have too much power over communications and it's time to break apart the company. In a New York Times essay, Hughes makes his case. The essay is blistering to say the least. Two excerpts sum up the break up Facebook argument well. Mark is a good, kind person. But I'm angry that his focus on growth led him to sacrifice security and civility for clicks. I'm disappointed in myself and the early Facebook team for not thinking more about how the News Feed algorithm could change our culture, influence elections and empower nationalist leaders. And I'm worried that Mark has surrounded himself with a team that reinforces his beliefs instead of challenging them. The government must hold Mark accountable. And. Because Facebook so dominates social networking, it faces no market-based accountability. This means that every time Facebook messes up, we repeat an exhausting pattern: first outrage, then disappointment and, finally, resignation. I can't say I disagree with Hughes' take. Facebook has screwed up repeatedly. Zuckerberg's appearance at F8 and his remarks about privacy didn't help matters. The future may be private, but it's laughable that Facebook can get us there. And on some level you could argue that Facebook is an antitrust concern -- at least when it comes to taking up our time. In many ways, Hughes' argument rhymes with the regulation Elizabeth Warren proposes for big tech. But I'm nagged by one question: What are the details behind a Facebook breakup? 
Hughes argues that the government and its antitrust laws are tools we can use today, but it's unclear Facebook would be considered a monopoly. We could also toss in a looming Federal Trade Commission fine as something that could keep Facebook in check. The argument is that the Department of Justice broke up Standard Oil and AT&T so it could break up tech giants too. But the big question is whether the court system would see Facebook as a monopoly. Yes, Facebook has its core site, Instagram and WhatsApp, but a monopoly argument could be tough to make. Why? Facebook isn't essential. It's not oil and it's certainly not communication lines. Depending on how you want to argue it, Facebook faces competition everywhere vs. Google, Apple, Amazon and other smaller players. In an antitrust trial, Facebook could argue that its real competition is anything that takes up your time -- TV, YouTube, email, etc. Facebook could also note that beyond social networking it's not No. 1. Facebook could note that it competes globally and doesn't win. In advertising, Facebook isn't the top dog. Google is. Facebook as a monopolist only holds weight because people appear to be addicted to social networking. Facebook is a monopolist just like a heroin dealer would be. The meaningful alternative to Facebook isn't another service. The meaningful alternative is deleting that app completely and getting social networking sober. What does a breakup look like? So let's just say that we can wave a wand and break up Facebook. The details get sketchy. The components of Facebook have huge scale by themselves. Instagram could squash competition with or without Facebook. The details of creating baby Facebooks would be gnarly. Think of the break up of AT&T. All of the Baby Bells were pretty strong in their markets and basically monopolies. Who governs the algorithms? One of Hughes' more notable points is that Zuckerberg can tweak Facebook's algorithms and how we communicate on his own. 
He has power over what we see, how we relate to it and, to some degree, what we think. That's a bit much for one person. One big detail would be transparency in algorithms, and perhaps a new bureaucracy on algorithm fairness would be formed. Regulating algorithms this way would be a disaster if the tech know-how shown in various Congressional hearings is any indicator. Privacy regulation. It was telling how Google I/O had the search giant talking about privacy and actually delivering real tools to developers. Facebook talked about turning a big ship with privacy at the center with a whole lot of nada behind it. It's clear that there will have to be some kind of privacy regulation and agency to control Facebook's scale somewhat. But those details are nasty too. For now, the US is basically outsourcing privacy regulation to the EU since any global company has to comply with things like GDPR. Add it up and Facebook isn't likely to sweat Hughes' call for a breakup. Ironing out just a few of these aforementioned details may take a decade. By then, we'll all have moved on to something new. Source
  10. Facebook seeks investors for planned cryptocurrency, merchants who might accept it. Facebook CEO Mark Zuckerberg checks his phone during the annual Allen & Company Sun Valley Conference, July 13, 2018 in Sun Valley, Idaho. Drew Angerer/Getty Images If Facebook's pivot from town square to private living room wasn’t laden with enough irony, here’s a new twist: Big business, it appears, has been invited to join us by the fireplace. On Thursday, The Wall Street Journal reported new potential details about Facebook’s long-awaited cryptocurrency plans. The company is reportedly seeking dozens of business partners, including online merchants and financial firms, in an effort to extend the reach of its blockchain-based marketplace. Facebook’s would-be partners are being asked to pitch into an investment fund, valued at $1 billion or more, that would serve as backing for Facebook’s coin and mitigate the wild speculative swings that make cryptocurrencies like bitcoin hard to spend. The pitch, according to the Journal, involves offering merchants lower fees than credit cards. Some were quick to note that this would reduce Facebook’s ability to make money from payments in the short term. But that may not matter much—if, in the end, Facebook’s crypto effort is really all about getting you to spend more time glued to Facebook. Facebook appears to be already building out the plumbing to make its marketplace a reality. At its F8 developer conference last week, the word “blockchain” was notably absent. But even as Zuckerberg emphasized the company’s plan to reorganize your Facebook experience around intimate relationships, his update included plenty of ways money would be involved. “I believe that it should be as easy to send money to someone as it is to send a photo,” he said, alluding to “simple and secure payments” as a core feature of his privacy-forward vision. 
That apparently extends beyond the peer-to-peer payments available on Venmo and Facebook’s own Messenger app. In a series of keynotes, Facebook execs touted a litany of commerce-focused improvements: better checkout for Instagram’s digital mall, donation stickers, and a new tool for small business owners to list items on WhatsApp. Indeed, WhatsApp appears to sit at the center of Facebook’s commerce efforts—at least to start. At F8, Facebook said WhatsApp Pay, currently on limited trial in India, would expand to additional, unnamed countries later this year. The platform isn’t blockchain-based (for now) and is designed for peer-to-peer payments. But with 80 percent of small businesses in India using WhatsApp to market their goods, some form of payments processing is a natural evolution. In December, Bloomberg reported that the first tests of the crypto coin may occur in India, initially as a way for workers to send money home from overseas. An added twist from the Journal’s report is the possibility that the coin will be integrated into Facebook’s lucrative ads ecosystem. The scheme, reportedly still under debate within Facebook, would potentially work on both sides of the ads equation: Merchants could use the coins to pay for ads, and users would be rewarded in coins for viewing or interacting with them. That reflects a growing perception—seen recently in efforts like the Brave browser, which compensates users through a token for clicking on ads—that people should get paid for their attention, not simply help internet giants make money. For Facebook, it also presents a vision of how its ads and eyeballs-driven business could continue in the company’s supposedly privacy-first era. The idea is to keep Facebook’s coins—and therefore users—tightly enmeshed in the platform. “I don’t believe they’re doing anything that isn’t in the service of increasing interactions on their platforms,” says Joshua Gans, a professor at the University of Toronto. 
Sending money to businesses presents a challenge, he notes. Compared with friends and family, businesses are more likely to dump their Facebook coins at the end of the month in favor of real money. Gans is skeptical that Facebook would pay users for viewing ads—an immensely tricky system to create—unless it involved something like a rebate for buying a product through a Facebook advertisement. On the merchant side, encouraging businesses to pay for ads and services on Facebook with the coin could be one way of staunching the flow of money out of the system. As the Journal notes, Facebook’s foray into blockchain could look a bit like a loyalty-points system—tokens that can be earned through and spent on Facebook services, or cashed out elsewhere through partner merchants. That’s not without precedent among technology companies: Uber, for example, has Uber Cash, which rewards users for purchases both in and out of Uber with app-specific money. Gans notes offerings like the Apple Card hold a similar purpose: It’s a service that, for all the talk of disrupting the credit card industry, is mostly a shiny, heavy way to buy more of Apple’s apps and products. A Facebook spokesperson reiterated an earlier comment: “Like many other companies, Facebook is exploring ways to leverage the power of blockchain technology. This new small team is exploring many different applications.” Facebook still faces many challenges, from sorting out how it will oversee the system to assuaging the privacy concerns of users to determining how to funnel money in and out of its currency—a process that, for other cryptocurrencies, is typically handled by exchanges. It also has to contend with the realities of the global economic system, which runs on euros and yen as well as dollars. Even if it backs the currency with a basket of currencies, as reported, it “can’t be stable with every currency in the world,” says Gans. 
“That’s not how the world works.” Hence the need to enlist financial partners to smooth transactions in and out of Facebook’s system. Bottom line: It’s very unclear how this will work in practice. “There are a lot of moving parts. Facebook doesn’t always do what we expect,” says Gans. This story originally appeared on wired.com. Source: Facebook’s cryptocurrency might work like loyalty points (Ars Technica)
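Gans' point about basket backing can be made concrete with a little arithmetic. Here is a toy sketch (the bundle amounts and exchange rates are invented for illustration, not anything Facebook has announced): a coin defined as a fixed bundle of currencies necessarily gains value in dollar terms the moment the euro strengthens, while simultaneously losing value in euro terms, so it cannot be "stable" for everyone at once.

```python
# Hypothetical coin: a fixed bundle of 0.50 USD + 0.27 EUR + 22 JPY.
BUNDLE = {"USD": 0.50, "EUR": 0.27, "JPY": 22.0}

def coin_value_in(currency, rates):
    """Value of one coin expressed in `currency`.

    rates: units of each currency per 1 USD (so rates['USD'] == 1.0).
    """
    # Convert every bundle component to USD, then to the target currency.
    usd_value = sum(amount / rates[cur] for cur, amount in BUNDLE.items())
    return usd_value * rates[currency]

day1 = {"USD": 1.0, "EUR": 0.90, "JPY": 110.0}  # 1 USD = 0.90 EUR = 110 JPY
day2 = {"USD": 1.0, "EUR": 0.80, "JPY": 110.0}  # euro strengthens vs. the dollar

# Day 1: worth 1.0 USD and 0.90 EUR. Day 2: worth ~1.0375 USD but only
# ~0.83 EUR. The coin moved in both currencies, in opposite directions.
for rates in (day1, day2):
    print(round(coin_value_in("USD", rates), 4),
          round(coin_value_in("EUR", rates), 4))
```

Whatever weights are chosen, any relative move between basket currencies produces the same effect; only the sizes of the swings change.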
  11. The AchieVer

    Uber and Facebook: Partners in crime

    Uber's IPO will probably be the biggest since Facebook's. It's remarkable how similar the companies are. He made the world a better place. For himself. I'm experiencing a bubbling sensation just below my throat. No, not acid reflux. It's the sheer vicarious, stomach-churning excitement I have for all those who'll make a vast fortune out of Uber's IPO. You see, it's rumored to be the biggest tech IPO since Facebook's. How can one not feel delighted that one of tech's other fine sages and authoritarians, Travis Kalanick, may be another $9 billion richer? No, he's not quite a Zuckerberg. Financially, that is. Yet the two have so much the same fine young entrepreneurial spirit that I can't help but see their similarities. One painted a picture of a world that would be so open and connected that you could make friends with strangers in St. Petersburg. The one in Florida, or even the one in Russia. The other offered rhapsodies about a world where cars would be shared among many, the roads would be miraculously decongested, and drivers would discover fine, new jobs that freed them from penury and pain. You see, that's the way to make billions. Paint the world as a better, warmer, lovelier place. Then go about ruthlessly exerting control over everything you can and treat everyone in your path as just so very, oh, backward. Or non-existent. The headlines make for rapturous reading. Sample: "Uber faces criminal investigation over Greyball spying program." Another sample: "How Uber deceives the authorities worldwide." These merely echo such joys as: "Facebook is breaking law in how it collects your personal data, court rules." Or how about: "Facebook could face $1.63 billion fine under GDPR over latest data breach." 
I can't help thinking that the rules for entrepreneurial success in the tech world consisted -- and perhaps still consist -- of routinely breaking regulations, laws, and anything else that smacks of maintaining vague social order and justice and replacing it with whatever will make your company bigger and richer right now. I wonder if anyone ever whispered such things as they were training the great young things at America's finest business schools. They surely don't teach it, though I'm moved that Stanford only recently discovered something precious: Ethics. Perhaps, you'll mutter, all great companies have to shave a few corners and break a few laws just because they need to be great quickly. Yet famed tech columnist Walt Mossberg surely isn't alone when he looks at Facebook's latest admission that it's setting aside billions to pay a likely Federal Trade Commission fine and muses: "I suggest calculating it in years of revenues. And, after we get a real DOJ back, jail time." I can't help thinking that these companies are a classic and painful example of how the few can make untold, uncontrolled piles, while everyone else has to deal with the consequences. Where one company is now implicated in allegedly being the repository of disgraceful election manipulation and being the prime driver in flouting personal privacy, the other peddled a new vision of misogyny, cheap labor, and complete disregard for municipalities worldwide. Some of Facebook's own former executives now wonder how badly it's affecting children's minds. As for Uber, it seems that in cities -- where it's making a killing -- there are more cars, not fewer. You'll tell me both companies are very different now. These young founders learned from their mistakes. After all, one even got ousted. His company's business hasn't changed that much, however. These are the two most valuable tech IPOs of our time. This is the very best we could do. Of course it's humanity's fault that it was suckered. 
We have a habit of choosing ease over everything else. We delight in the idea that technology gives us an extra split-second here and a cheery ability to get things through our phones anywhere. And not for a moment do we consider what we're losing. Source
  12. Facebook Bans Personality Quizzes Alongside Other Notable Changes For Users’ Privacy It seems the aftermath of Cambridge Analytica continues, as Facebook announces new changes. The changes should come as good news for users, as the tech giant looks focused on users’ privacy. As revealed in a blog post, Facebook bans personality quizzes and other apps with minimal utility. Facebook Bans Personality Quizzes According to a blog post by Eddie O’Neil, Head of Platform at Facebook, the firm plans numerous changes to ensure users’ privacy. As part of the regime, Facebook bans personality quizzes and other such apps, which it refers to as ‘apps with minimal utility’. “Our Facebook Platform Policies are being updated to include provisions that apps with minimal utility, such as personality quizzes, may not be permitted on the platform.” Besides banning these apps, Facebook is also limiting apps' access to data that does not contribute to enriching the in-app user experience. Personality quiz apps have been active on Facebook for many years; these apps claimed to assess users’ personalities based on the information obtained from their profiles. The infamous Cambridge Analytica scandal also emerged from such an app, ‘thisisyourdigitallife’. Banning such apps looks like a step towards preventing similar scandals in the future. Facebook had already removed apps that it found involved in data mishandling, such as the MyPersonality app. Control On App Permissions In addition to banning certain apps, Facebook also announced other updates. These involve the deprecation of some APIs by April 30, 2019, and changes to app permissions. As stated, Facebook will revoke access to permissions not used by an app during the previous 90 days. “Previously approved user permissions that your app has not used or accessed in the past 90 days may be considered expired. Access to expired permissions will be revoked.” Facebook will track such permissions via periodic reviews and audits. 
To regain access to such permissions, developers will have to resubmit for App Review. Still, not everything announced by Facebook this time is bad for developers; Facebook has also announced a plan to make things easier for them. “In the coming months, we will be releasing a more significant update to our policies. In response to developer feedback, we will be moving toward a more streamlined and straightforward experience for developers and eliminating the need for certain supplemental terms.” Nonetheless, it hasn’t yet hinted at what those updates will be. Source
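The 90-day expiry rule quoted above is easy to model. A minimal sketch follows; the permission names, dates, and the local dict are all hypothetical, and a real app would read its granted permissions from the Graph API's /me/permissions edge rather than a hard-coded mapping:

```python
from datetime import date, timedelta

# Permissions unused for more than 90 days are treated as expired,
# mirroring the policy described in the article above.
EXPIRY_WINDOW = timedelta(days=90)

def expired_permissions(granted, today):
    """Return permission names unused for more than 90 days.

    granted: dict mapping permission name -> date it was last used
             (None means granted but never used).
    """
    expired = []
    for name, last_used in granted.items():
        if last_used is None or today - last_used > EXPIRY_WINDOW:
            expired.append(name)
    return sorted(expired)

# Invented example records for illustration.
granted = {
    "email": date(2019, 4, 1),          # used recently -> kept
    "user_friends": date(2018, 11, 5),  # stale -> expired
    "user_likes": None,                 # never used -> expired
}
print(expired_permissions(granted, today=date(2019, 4, 30)))
# ['user_friends', 'user_likes']
```

An app could run such an audit on its own schedule to resubmit for App Review before access is actually revoked.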
  13. Hem... sorry for the double post... If an admin can delete this one... Hi all. After having searched in many places on the net, I come here to describe my problem (hope I am in the right place): I am running Windows Seven x64 SP1, Fr language. I installed 3 browsers: Chrome, Opera and Firefox. I (almost) always use Firefox, an old version (52.9.0 ESR x86), because I have some extensions that I "need". The problem is: A friend of mine posted a video in his story. When I click to watch it, I only have the sound. The video does not play. I tried with Opera (latest version): same problem. I ran Chrome (latest version): worked fine. I have Flash Player PPAPI and NPAPI installed (latest versions). I searched for solutions and tried many things but nothing worked... - cleaned my Firefox profile (history, cookies...) - uninstalled my Firefox and installed the latest version - uninstalled Flash Player and tried an older version (NPAPI, PPAPI) - updated my NVidia drivers to the latest version No way... Yes, sure, the easiest way is: "you have to use Chrome to open Facebook" but I don't like it and don't want to use it. If somebody has a solution... Thank you for your wise answers
  14. Facebook’s latest privacy scandals open regulator floodgates Storing passwords in plain text and harvesting email contacts have landed the firm in hot water -- again. Facebook's damage control teams must be busy these days, with data scandal after scandal appearing out of the woodwork on what seems to be a monthly basis -- all of which are gaining the interest of regulators worldwide. The Cambridge Analytica incident seemed to be only the tip of the iceberg; the most recent examples of Facebook's failure to adequately protect and store user data are the harvesting of email contact data and the storage of millions of user passwords in plain text. Data protection is now a hot topic and one that Europe has taken more seriously with the revamp of old data and security rules through the implementation of the EU's General Data Protection Regulation (GDPR). Under these rules, companies operating in European countries are held to a high standard when it comes to consumer data storage and security -- and this is an area Irish regulators are now examining to see if Facebook has fallen short. On Thursday, the Irish Data Protection Commissioner said a new investigation is now underway due to March's reveal of the storage of Facebook, Facebook Lite and Instagram passwords in plain text on company servers. Up to 20,000 Facebook employees may have been able to access this information, which potentially dated back to 2012. "We have this week commenced a statutory inquiry in relation to this issue to determine whether Facebook has complied with its obligations under relevant provisions of the GDPR," the data watchdog said. Across the pond, Canadian authorities published the results of a year-long investigation into Facebook's privacy practices this week. 
The investigation focused on the Cambridge Analytica scandal, in which the Privacy Commissioner of Canada and the Information and Privacy Commissioner for British Columbia believe hundreds of thousands of Canadians -- as part of a wider pool of 87 million users -- were affected. The watchdogs said that Facebook had committed a "major breach of trust" and Facebook "abdicated its responsibility for personal information under its control, effectively shifting that responsibility to users and apps." Separately, Facebook is also facing scrutiny from regulators in the United States. In April, the social media giant admitted to the "unintentional upload" and harvest of email account contacts during some new account registration and verification systems. In total, Facebook stored email contact data belonging to roughly 1.5 million users over three years, a practice which the company said in hindsight was "not the best way" to go about verification. Facebook has promised to delete the information. Regulators, however, have not looked upon the latest example of the firm's lax privacy practices with a friendly eye. As reported by sister site CNET, Facebook is now being investigated by the New York attorney general's office over the contact scraping. New York Attorney General Letitia James said that it is about time that the social network "is held accountable for how it handles consumers' personal information." Facebook, in turn, said, "we're in touch with the New York State attorney general's office and are responding to their questions on this matter." During Facebook's quarterly earnings release, the company also said that $3 billion has been set aside to cover legal expenses related to a US Federal Trade Commission (FTC) investigation based on the company's attitude to user privacy following the Cambridge Analytica scandal. Source
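For contrast with the plain-text storage described above, standard practice is to keep only a salted, deliberately slow hash of each password, so that even someone with full database access cannot read the passwords back. A minimal sketch using Python's built-in scrypt key-derivation function (the cost parameters here are illustrative, not a tuned production configuration):

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)  # unique per user, so identical passwords hash differently
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("letmein", salt, digest))  # False
```

Because scrypt is intentionally memory- and CPU-hard, bulk guessing against stolen digests is far more expensive than against plain text or a fast hash like MD5.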
  15. Facebook 'unintentionally uploaded' email contacts from 1.5M users Contacts harvested during new account signups for nearly three years. Facebook harvested the email contacts of about 1.5 million users when they signed up to the service. Josh Edelson/AFP/Getty Images Facebook "unintentionally" harvested the email contacts of about 1.5 million of its users during the past three years. The activity came to light when a security researcher noticed that Facebook was asking users to enter their email passwords to verify their identities when signing up for an account, according to Business Insider, which previously reported on the practice. Those who did enter their passwords then saw a pop-up message that said it was "importing" their contacts -- without first asking permission, BI reported. A Facebook spokesperson confirmed that 1.5 million people's contacts were collected in this manner since May 2016 to help build Facebook's web of social connections and recommend other users to add as friends. "Last month we stopped offering email password verification as an option for people verifying their account when signing up for Facebook for the first time," a Facebook spokesperson said. "When we looked into the steps people were going through to verify their accounts we found that in some cases people's email contacts were also unintentionally uploaded to Facebook when they created their account. "We've fixed the underlying issue and are notifying people whose contacts were imported," Facebook said, adding that the contacts weren't shared with anyone and are being deleted. It also pointed out that users can review and manage the contacts they share with Facebook in their settings. As the world's largest social network, Facebook controls data on more than 2 billion people, and who has access to it. 
The company's data handling practices were called into question in the wake of the Cambridge Analytica scandal, during which the personal information on up to 87 million Facebook users was improperly accessed. Source
  16. Facebook/FTC settlement could include "heightened oversight" of Zuckerberg. Facebook CEO Mark Zuckerberg leaving the Merrion Hotel in Dublin after meeting with Irish politicians to discuss regulation of social media on Tuesday, April 2, 2019. Getty Images | NurPhoto Federal Trade Commission officials are discussing whether to hold Facebook CEO Mark Zuckerberg personally accountable for Facebook's privacy failures, according to reports by The Washington Post and NBC News. Facebook has been trying to protect Zuckerberg from that possibility in negotiations with the FTC, the Post wrote. Federal regulators investigating Facebook are "exploring his past statements on privacy and weighing whether to seek new, heightened oversight of his leadership," the Post reported, citing anonymous sources who are familiar with the FTC discussions. "The discussions about how to hold Zuckerberg accountable for Facebook's data lapses have come in the context of wide-ranging talks between the Federal Trade Commission and Facebook that could settle the government's more than year-old probe," the Post wrote. According to NBC, FTC officials are "discussing whether and how to hold Facebook Chief Executive Mark Zuckerberg personally accountable for the company's history of mismanaging users' private data." However, NBC said its sources "wouldn't elaborate on what measures are specifically under consideration." According to the Post, one idea raised during the probe "could require [Zuckerberg] or other executives to certify the company's privacy practices periodically to the board of directors." But it's not clear how likely the FTC is to target Zuckerberg in a final settlement, and "Facebook has fought fiercely to shield Zuckerberg as part of the negotiations, one of the sources familiar with the probe said," the Post wrote. Facebook dismisses “recycled” storyline When contacted by Ars, Facebook said, "These storylines have been recycled for some time." 
The company cited an April 2 Politico story as an example. But the Politico story was more speculative: it said the FTC could use its authority to seek "new, more aggressive privacy auditors; or even management changes, up to the level of Chairman and CEO Mark Zuckerberg." And unlike the new Post and NBC stories, Politico's article merely discussed the FTC targeting Zuckerberg as a theoretical possibility and didn't assert that FTC investigators themselves are considering a punishment for Zuckerberg. As for the FTC investigation, Facebook told Ars that it hopes "to reach an appropriate and fair resolution" with the commission. The FTC reached a settlement with Facebook in 2011 over charges that it deceived users by failing to keep privacy promises. During the lead-up to that settlement, the FTC "considered, then backed down, from putting Zuckerberg directly under order," the Post wrote. "Had it done so, Zuckerberg could have faced fines for future privacy violations." The FTC's current investigation began in March 2018, after revelations that up to 87 million users' information was improperly shared with Cambridge Analytica. The FTC investigation focuses on whether Facebook violated the terms of its 2011 settlement with the FTC. That settlement prohibited Facebook from misrepresenting the privacy or security of user information, and it required Facebook to get consumers' express consent before making changes that override their privacy settings. Republicans hold a 3-2 majority on the FTC. Democratic Commissioner Rohit Chopra wrote in a May 2018 memo that "the FTC should hold individual executives accountable for order violations in which they participated, even if these individuals were not named in the original orders." Source: Facebook fights to “shield Zuckerberg” from punishment in US privacy probe (Ars Technica)
  17. Three-Fourths of Consumers Don’t Trust Facebook, Threatpost Poll Finds On the heels of several Facebook data privacy snafus this week – and over the past year – users no longer trust the platform. As Facebook privacy-related incidents continue to pile up this week, a new Threatpost poll found that a whopping three-fourths of respondents no longer trust the social media giant. The negative sentiment, reflected in a Thursday Threatpost poll of over 130 security professionals, comes as Facebook faces a slew of data privacy snafus this week – more than a year after the Cambridge Analytica scandal first thrust the social media platform’s data privacy into the spotlight. On the heels of these incidents, the Threatpost poll found that Facebook users have completely lost trust in the platform. Of those polled, 75 percent said that they believe the whole organization is lying to consumers about how it handles data. “Facebook’s principal defense to many of the privacy criticisms in the last year-plus is that malicious third parties misused the platform to access private user data,” Dan Goldstein, the president and owner of Page 1 Solutions, said in an email. “This claim really doesn’t hold water at this point, now that we know that Facebook actively rode roughshod over issues of consumer consent in order to collect data.” Trust is Gone: Just this week, an array of new reports, leaked documents, and incidents revealed just how much is going on behind the scenes when it comes to Facebook collecting, leveraging and sharing user data. A Tuesday NBC News report, detailing thousands of newly-leaked Facebook emails, webchats, spreadsheets and meeting summaries, found that Facebook has been using its user data as leverage in various relationships with other companies. On Thursday, another report found that Facebook had harvested the email contact lists for 1.5 million people in an ongoing effort since May 2016. 
  And, also on Thursday, Recode discovered that Facebook had accidentally stored millions of Instagram users’ passwords (not thousands, as previously thought) unencrypted on its servers. Up to 95 percent of respondents said that they recognize the firm is built on monetizing people’s data – so it’s likely all these issues have been intentional and Facebook just continues getting caught. (In contrast, only 4 percent said that they instead believe “there are sure to be things that fall through the cracks and data that gets mishandled,” but that it’s not a corporate conspiracy). Making matters worse, when asked what Facebook can do to clean up its act, almost 50 percent of respondents answered that there is nothing the firm can do – it has lost all credibility. “These online giants shouldn’t be able to just grab your entire social network through your contact list without specific permission, and companies like Facebook need to face stiff penalties when they do it,” said Brian Vecci, field CTO at Varonis in an email. “Without basic consumer protections that lead to real penalties, this kind of thing will continue to happen.” However, those polled don’t think the incidents will stop consumers from using the platform – and remain unsure what it will take to get Facebook to prioritize responsible data security. Uncertain Future: Up to 65 percent of survey respondents said that none of these data privacy-related incidents will be enough to bring Facebook down – because consumers will continue to use the platform anyway. So where will change ultimately come from? Some surveyed asserted that the social media firm should pledge to adhere to General Data Protection Regulation (GDPR) tenets in all markets (as opposed to just the EU, where GDPR is currently enforced), or adopt official third-party auditing. 
  But many in the security space agree with survey respondents that the main responsibility, beyond consumers and regulators, needs to come from the tech industry. In fact, 40 percent of respondents argued that the tech industry as a whole needs to re-evaluate how it collects, maintains and shares data. “If not Facebook, then Google or Amazon or the big social network of the future will exploit consumer trust,” said Vecci. “This news illustrates how easy it is for any company—not just Facebook—to skip asking for consent when harvesting personal data like your contacts. Consumers need to be vigilant, but also need a basic set of online rights.” Source
  18. The AchieVer

    WhatsApp, Facebook, Instagram Down

    Facebook’s services are once again down, including not only the social network, but also Instagram and WhatsApp. At the time of writing this article, attempts to connect to Facebook return a simple error message revealing that “something went wrong.” The official Instagram website fails with a “5xx Server Error.” Furthermore, the WhatsApp mobile clients on both Android and iOS can no longer send and receive messages, with the app displaying a “Connecting…” message on launch. WhatsApp Web also appears to be impacted by the outage, and the service can no longer connect to mobile devices. Facebook, Instagram, and WhatsApp down in Europe: According to DownDetector, Facebook is down mostly for users in Europe, and several countries like Hungary, Serbia, and Romania are apparently the most affected. The social network appears to be working fine elsewhere. In the case of Instagram, the problem seems to be more widespread. Romania, Hungary, and Moldova once again appear to be impacted by the outage, but DownDetector also indicates sporadic connectivity issues in other countries, including the UK, Germany, France, Italy, Russia, and China. Part of the US was also hit by Instagram connectivity problems. WhatsApp is down mostly for users in Europe, but this time the regions experiencing connectivity problems are Germany, the Netherlands, Romania, Italy, and the UK. Facebook hasn’t released a statement on what’s happening with its services, though it will most likely restore them shortly. No information on the cause of the outage has been provided either. In the meantime, there’s not much to do other than wait for Facebook to provide status updates. The social network rarely discloses the cause of service outages unless something truly critical happened, so it remains to be seen how quickly this blunder is resolved. Source
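The “5xx Server Error” Instagram returned is the HTTP status class for server-side failures, which is what outage trackers such as DownDetector key on when they aggregate user reports. As a hedged illustration only (not DownDetector's or Facebook's actual code), a minimal Python sketch of classifying a probe by status code:

```python
from urllib import request, error

def classify_status(code: int) -> str:
    """Map an HTTP status code to a rough availability verdict."""
    if 200 <= code < 300:
        return "up"
    if 500 <= code < 600:
        return "server error"  # the "5xx Server Error" class Instagram showed
    return "other"

def probe(url: str) -> str:
    """Fetch a URL and classify the response; network failures count as down."""
    try:
        with request.urlopen(url, timeout=5) as resp:
            return classify_status(resp.status)
    except error.HTTPError as e:  # urllib raises HTTPError for 4xx/5xx responses
        return classify_status(e.code)
    except (error.URLError, TimeoutError):
        return "down"

print(classify_status(503))  # server error
```

A real monitor would probe from many regions, which is how DownDetector can report an outage as Europe-only while the site works fine elsewhere.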
  19. Facebook admits to storing plaintext passwords for millions of Instagram users Last month, Facebook admitted to storing plaintext passwords for hundreds of millions of Facebook accounts. Facebook admitted today to storing the passwords of millions of Instagram users in plaintext format in internal server logs. The announcement came as an update to an incident from last month, when the company admitted to storing plaintext passwords for hundreds of millions of Facebook Lite users, tens of millions of Facebook users, and tens of thousands of Instagram accounts. "We discovered additional logs of Instagram passwords being stored in a readable format," the company said in an update published today. "We now estimate that this issue impacted millions of Instagram users. We will be notifying these users as we did the others." Facebook said that its investigation revealed that none of these plaintext passwords were abused by employees. Just like it did in last month's incident, the company did not put an exact figure on the number of impacted accounts, a practice for which the company has been criticized over the past few weeks. Facebook has been very secretive about its security incidents, a fact that more users are finding frustrating, especially since user privacy and security incidents are becoming more common. In fact, the company went public with last month's "revelation" that it stored user passwords in plaintext for years only after investigative reporter Brian Krebs published an article citing an internal source. Krebs reported that over 2,000 Facebook employees had access to the server logs on a daily basis. It took the company years to discover the blunder. Now, Facebook is seen as the villain again, and is being criticized on social media for trying to bury this security update by releasing it on the same day as the Mueller Report. Source
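For contrast with logging passwords "in a readable format", standard practice is to store only a salted, slow hash, so that even someone with full access to the password store or server logs cannot recover the original passwords. A minimal sketch using Python's standard library (illustrative only, not Facebook's implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store only (salt, digest); the plaintext password is never written anywhere."""
    salt = os.urandom(16)  # unique random salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The high iteration count makes each guess deliberately expensive, which is the point: a leaked log of plaintext passwords gives an attacker everything at once, while a leaked table of salted slow hashes does not.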
  20. Twitter fined the same sum last week. LinkedIn was blocked inside Russia's borders in 2016 for the same offense. A Moscow court today fined Facebook 3,000 rubles (approximately $47) for failing to comply with a data privacy law and store data of Russian Facebook users on servers located inside Russia. The legal proceedings started after a complaint from Roskomnadzor (Russia's Federal Service for Supervision of Communications, Information Technology and Mass Media), the country's telecommunications watchdog. Roskomnadzor lodged a complaint after Facebook failed to comply with Russia's data localization legislation --Federal Law No. 242-FZ. Adopted on December 31, 2014, the law entered into effect on September 1, 2015. According to this legislation, all domestic and foreign companies that accumulate, store, or process the data of Russian citizens must do so on servers physically located inside Russia's borders. Russian authorities have very rarely enforced this law. The most high-profile case remains LinkedIn, which Roskomnadzor banned in November 2016; the site remains blocked to this day, according to Roskomnadzor's list of banned sites that local ISPs must block on their networks. Russian news agency Interfax, which broke the story earlier today, said Facebook did not represent itself in court. Twitter fined last week: The same court that fined Facebook today --located in Moscow's Tagansky District-- also fined Twitter the same sum last week, Interfax reported. Back in April 2017, Interfax reported that Twitter had agreed to comply with the law; however, last week's fine means the company failed to act on its promise. When it blocked LinkedIn in 2016, Roskomnadzor fired a shot across the bow of both companies. Alexander Zharov, head of communications regulator Roskomnadzor, said at the time that Facebook and Twitter had until the start of 2018 to move data of Russian users inside Russia's borders. 
  Today's fines, the minimum the court could have imposed, are the first steps in the legal process that will eventually allow Russian authorities to ban both social networks inside Russia's borders. Facebook did not respond to a request for comment seeking information on why the company did not defend itself in court, or if it has any plans to store data for Russian users on local servers. Source
  21. Facebook confirms it’s working on an AI voice assistant for Portal and Oculus products It may not really be a true Alexa or Google Assistant competitor. Facebook has confirmed a report from earlier today saying it’s working on an artificial intelligence-based digital voice assistant in the vein of Amazon’s Alexa and Google Assistant. The news, first reported by CNBC, indicates Facebook isn’t giving up on a vision it first put out years ago, when it began developing an AI assistant for its Messenger platform simply called M. This time around, however, Facebook says it is focusing less on messaging and more on platforms in which hands-free interaction, via voice control and potentially gesture control, is paramount. “We are working to develop voice and AI assistant technologies that may work across our family of AR/VR products including Portal, Oculus and future products,” a Facebook spokesperson told The Verge today, following the initial report. That means Facebook may not position the product as a competitor to Alexa or similar platforms, but as more of a feature exclusive to its growing family of hardware devices. CNBC reported that the team building the assistant is working out of Redmond, Washington under the direction of Ira Snyder, a general manager at Facebook Reality Labs and a director of augmented and virtual reality at the company. Snyder’s LinkedIn page also lists him as director of a product called Facebook Assistant, which is likely the internal name of the project. It’s unclear if that will be its final, commercial name when it does launch. CNBC says the project has been in the works since early 2018, shortly before Facebook announced it had shut down its M personal assistant service. 
  Facebook also tried its hand at building a robust network of bots that would layer AI throughout Messenger and power automated chatting features, news alerts, and even mobile games, though the Messenger bots haven’t really taken off. This project and the various divisions involved in bringing it to life highlight the goals of Facebook’s new approach to experimental technology. Since it acquired Oculus in 2014, the social network’s forward-looking divisions have taken various organizational structures, most recently in the form of a new pair of divisions. The first of those two divisions is the AR/VR hardware group responsible for developing the Portal video chatting device, and that division now also includes the remnants of Facebook’s disbanded Building 8, a secretive division formerly run by former DARPA director and Google employee Regina Dugan, who left the company in late 2017. The second division is now known as Facebook Reality Labs, run by video game pioneer Michael Abrash, who became a Facebook employee by way of Oculus and now holds the title of chief scientist at the VR company. It seems the Facebook AI assistant is being jointly built by both teams, with Snyder seemingly holding positions at both divisions. Whatever the eventual purpose, it’s clear Facebook is treating its growing family of hardware devices as conduits for a shared vision for the future, one in which AI is layered throughout Facebook-owned platforms and not restricted to singular products. Source
  22. Blow for Google, Facebook as EU approves tougher copyright regulations Google and other online platforms will have to sign licensing agreements with musicians, performers, authors, news publishers and journalists to use their work. REUTERS | April 16, 2019, 07:40 IST Google will have to pay publishers for news snippets, and Facebook will have to filter out protected content, under new copyright rules aimed at ensuring fair compensation for the European Union's $1 trillion creative industries. EU governments on Monday backed the move launched by the European Commission two years ago to protect Europe's creative industries, which employ 11.7 million people in the bloc. "When it comes to completing Europe's digital single market, the copyright reform is the missing piece of the puzzle," the Commission's president Jean-Claude Juncker said in a statement. Under the new rules, Google and other online platforms will have to sign licensing agreements with musicians, performers, authors, news publishers and journalists to use their work. The European Parliament gave a green light last month to a proposal that has pitted Europe's creative industry against tech companies, internet activists and consumer groups. Wikipedia blacked out several European sites in protest last month, while the change was opposed by Finland, Italy, Luxembourg, the Netherlands, Poland and Sweden. But 19 countries, including France and Germany, endorsed the revamp, while Belgium, Estonia and Slovenia abstained. Under the new regime, Google-owned YouTube, Facebook's Instagram and other sharing platforms will have to install filters to prevent users from uploading copyrighted materials. Google said the new rules would hurt Europe's creative and digital economies, while critics said it would hit cash-strapped smaller companies rather than the tech giants. Poland said the overhaul was a step backwards, as the filter requirement may lay the foundation for censorship. 
EU lawmaker for the European Pirate Party Julia Reda, who had campaigned against the reforms, said critics could take their case to court but it would be slow and difficult and that the best thing would be to monitor fair implementation. The European Magazine Media Association, the European Newspaper Publishers' Association, the European Publishers Council, News Media Europe and independent music labels lobbying group Impala welcomed the move. EU countries have two years to transpose the copyright directive into national laws. ($1 = 0.8835 euros) Source
  23. Facebook ships Oculus controllers with the message: Big Brother is Watching Such messages were meant to be easter eggs, but the irony hasn’t been lost on social media. Facebook's virtual reality (VR) arm Oculus has admitted to an amusing -- and ironic -- mistake in which virtual reality headsets are going to be accidentally shipped to customers containing messages such as "Big Brother is Watching." Given Facebook's rather colorful practices around data sharing and privacy, the blunder, deemed an "easter egg" failure by Nate Mitchell, co-founder of Oculus and VR Product chief at Facebook, has been met with amusement across social networks. According to Mitchell, easter egg labels intended for prototype models have made it into the internal hardware for tens of thousands of Touch controllers for Quest and Rift S products, due to ship soon. The messages include "This Space For Rent," and "The Masons Were Here." In addition, some products intended for businesses have shipped with the messages "Big Brother is Watching," and "Hi iFixit! We See You!" -- the latter of which relates to iFixit, a global community dedicated to the "right to repair" principle and a provider of free guides to patching up consumer electronics. An image was shared across Twitter of one such device, emblazoned with "This Space For Rent" on what appears to be a flex cable inside a controller. "While I appreciate easter eggs, these were inappropriate and should have been removed," Mitchell said. "The integrity and functionality of the hardware were not compromised, and we've fixed our process so this won't happen again." The news was met by many with amusement and jokes relating to Facebook surveillance. A handful of users, too, joked that they would pay double for a device containing one of the messages as they have instantly become collector's items. 
"Guess I need to head over to iFixIt to see how to take my controllers apart and find out if I'm lucky," one Twitter user commented. "Honestly, I would love to have any of those messages on my controller." Source
  24. Under-18s face 'like' and 'streaks' bans on social media (Image caption: The ICO is concerned that Facebook likes encourage children to over-share personal information.) Facebook and Instagram face a ban on letting under-18s "like" posts on their platforms while Snapchat could be prevented from allowing the age group to build up "streaks", under new rules proposed by the UK's data watchdog. The Information Commissioner's Office (ICO) said these techniques exploit "human susceptibility to reward". This, it said, encouraged users to share more personal data and spend more time on apps than desired. The proposal is part of a 16-rule code. To ensure its success, the watchdog says that online services must also adopt "robust" age-verification systems. Location tracking: In addition to calling for an end to children being exposed to so-called "nudge techniques", the ICO advocates internet firms make the following changes, among others, for their younger members:
  - make privacy settings "high" by default
  - switch location-tracking off by default after each session and make it obvious when it has been activated
  - give children choices over which elements of the service they want to activate, and then collect and retain the minimum amount of personal data
  - provide "bite-sized" explanations in clear language about how users' personal data is used
  - make it clear if parental controls, such as activity-tracking, are being used
  The ICO suggests that firms that do not comply with the code could face fines of up to 20 million euros (£17.2m) or 4% of their worldwide turnover under the General Data Protection Regulation. "The internet and all its wonders are hardwired into their everyday lives," commented Information Commissioner Elizabeth Denham. "We shouldn't have to prevent our children from being able to use it, but we must demand that they are protected when they do. This code does that." 
  Her office is now seeking feedback as part of a consultation that will run until 31 May. It is envisaged that the rules would come into effect next year. Bad nudges: Restrictions on Facebook's like button - which registers a user's interest in another user or advertiser's post - and Snapchat streaks - which count the number of consecutive days two members have messaged each other - are not the only nudge behaviours being targeted. The ICO also says that apps should not:
  - show boxes where the Yes button is much bigger than that for No
  - use language that presents a data-sharing option in a much more positive light than the alternative
  - make it much more cumbersome to select the high-privacy option by, for example, requiring more clicks to turn it on
  (Image caption: The ICO says nudge techniques like these encourage children to make poor privacy decisions.) However, the regulator said it was appropriate in some cases to use nudges that encourage children to opt for privacy-enhancing settings, or to take a break after using an online service for some time. The ICO's rules follow a proposal from the Department for Digital, Culture, Media and Sport (DCMS) for the creation of an independent tech watchdog that would write its own "code of practice" for online companies. The suggestions have already been welcomed by the National Society for the Prevention of Cruelty to Children (NSPCC). "Social networks have continually failed to prioritise child safety in their design, which has resulted in tragic consequences," commented the charity's Andy Burrows. "This design code from the ICO is a really significant package of measures, but it must go hand in hand with the government following through on its commitment to enshrine in law a new duty of care on social networks and an independent regulator with powers to investigate and fine." The Internet Association UK - which represents Facebook, Snap and other tech firms - has yet to comment. 
But the code has drawn criticism from the Adam Smith Institute think tank. "The ICO is an unelected quango introducing draconian limitations on the internet with the threat of massive fines," said its head of research Matthew Lesh. "It is ridiculous to infantilise people and treat everyone as children." Source
  25. Facebook and its subsidiaries Instagram and WhatsApp experienced widespread outages on Sunday for the second time in the past month (and the third time this year), with issues reported starting at around 6:30 a.m. ET and extending until around 9:00 a.m. ET. Per Bloomberg, Facebook and Instagram domains ceased to be accessible by users during that time period, while Messenger and WhatsApp were also non-functional. In a statement to the news agency, the company offered few details. Users worldwide appeared to be impacted, with Bloomberg noting that Twitter users everywhere from the U.S. to Israel and Thailand were complaining about the outage. The last time this happened in mid-March, Facebook blamed a “server configuration change” that resulted in an unprecedented, cascading series of issues persisting for over 24 hours. As the New York Times noted, even the platform’s bug reporting system became inaccessible, a black eye for a service that (at least in theory) is never supposed to go down. This incident is nowhere near as bad: outage-monitoring service DownDetector listed reports peaking in the tens of thousands on Sunday, while it listed millions of reports during the mid-March outage. More at [Bloomberg] Source