
Search the Community

Showing results for tags 'facebook'.



Found 334 results

  1. Photographer Kristen Pierson Reilly has filed a lawsuit against Facebook for failing to respond properly to a DMCA notice. The social network refused to remove a copy of her photo, stating that it wasn't clear whether its use was infringing. In a complaint filed in a federal court in New York, Pierson now demands compensation for the damage she suffered.

Every day millions of people post photos online without approval from the rightsholder. This is particularly prevalent on social media platforms such as Facebook. Many photographers don't have the time or resources to go after these types of infringements, but some are clearly drawing a line in the sand.

This week, photographer Kristen Pierson filed a complaint against Facebook at a New York District Court. Pierson accuses the social media platform of hosting and displaying one of her works without permission. Normally these issues are resolved with a DMCA takedown notice, but in this case that didn't work.

Last year, Pierson noticed that the Facebook account "Trusted Tech Tips" had used one of her works, a photo of Rhode Island politician Robert Nardolillo, without permission. When she requested that Facebook remove it, the company chose to leave it up instead.

"Hi-, Thanks for your report. Based on the information you've provided, it is not clear that the content you've reported infringes your copyright," the Facebook representative wrote in reply. "It appears that the content you reported is being used for the purposes of commentary or criticism. For this reason, we are unable to act on your report at this time."

[Image: Facebook's reply]

The takedown notice was sent in March last year, and the post in question remains online at the time of writing, with the photo included. This prompted Pierson to file a complaint at a New York federal court this week accusing Facebook of copyright infringement. According to the Rhode Island-based photographer, Facebook failed to comply with the takedown request and can't rely on its safe harbor protection.

"Facebook did not comply with the DMCA procedure on taking the Photograph down. As a result, Facebook is not protected under the DMCA safe harbor as it failed to take down the Photograph from the Website," the complaint reads.

[Image: The 'infringing' post (exhibit D)]

The short five-page complaint accuses Facebook of copyright infringement, and Pierson requests compensation for the damages she suffered. "Facebook infringed Plaintiff's copyright in the Photograph by reproducing and publicly displaying the Photograph on the Website. Facebook is not, and has never been, licensed or otherwise authorized to reproduce, publically display, distribute and/or use the Photograph," it reads.

The photographer is not new to these types of lawsuits. She has filed similar cases against other outlets, such as Twitter. That case was eventually dismissed, likely after both parties reached an agreement. In the present case, Pierson requests a trial by jury, but it wouldn't be a surprise if this matter is settled behind closed doors, away from the public eye.

A copy of the complaint against Facebook is available here (pdf). Original Article.
  2. If you've been thinking about trying your hand at social media's 10 Year Challenge and are concerned about your privacy, you may want to take a moment to see why some are saying the trend may not be so harmless. Like many fads in the social realm, this one could come with some unintended consequences.

First, for those who are catching up on the 10-year craze: the challenge, otherwise known as #2009vs2019, the #HowHardDidAgingHitYouChallenge and the #GloUpChallenge, involves posting two photos of yourself, one from 2009 and one from 2019 (or 2008 and 2018, or some other substantial length of time). On Facebook, people shared their first profile picture alongside their current picture. In all cases, the idea is to show how you've changed (or stayed the same, like Reese Witherspoon) over that period. Celebrities ranging from Janet Jackson to Snooki, Kevin Smith, Fat Joe and Tyra Banks have taken up the challenge. Some, like Smith and Fat Joe, showed off a considerable slimdown, while others just had fun looking back 10 years. (Or 50, like Samuel L. Jackson.)

What could go wrong? "Y'all posting these #2009v2019 comparison photos and that's how you get your identity stolen," tweeted Desus Nice of the upcoming Showtime series "Desus vs. Mero" on Sunday.

Writer Kate O'Neill raised a more specific concern. "Imagine that you wanted to train a facial recognition algorithm on age-related characteristics, and, more specifically, on age progression (e.g. how people are likely to look as they get older)," she says. "Ideally, you'd want a broad and rigorous data set with lots of people's pictures. It would help if you knew they were taken a fixed number of years apart—say, 10 years." (A toy sketch of such a data set follows this article.)

It's not that Facebook or Twitter or Instagram didn't already have photos of you, she says; they just weren't clearly organized in a specific, labeled progression. The date you posted a profile picture doesn't necessarily mean that's when it was taken. With this trend, we are providing more detailed data by denoting when each photo was taken. "In other words, thanks to this meme, there's now a very large data set of carefully curated photos of people from roughly 10 years ago and now," O'Neill says.

If you're OK with that, by all means, proceed with showing off your glo-up. But know this: "Age progression could someday factor into insurance assessment and healthcare," O'Neill says, allowing the lighthearted trend a dystopian ending. "For example, if you seem to be aging faster than your cohorts, perhaps you're not a very good insurance risk. You may pay more or be denied coverage." And law enforcement could use facial recognition technology to track people; she notes that Amazon has sold its facial recognition services to police departments. But O'Neill also says the technology can be used to find missing children.

Ultimately, every digital footprint comes with a host of implications for how that information can be used. Of course, it's up to you to decide what photos and information you want to share, even if you're just doing it for the "likes." Source
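To make O'Neill's point concrete, here is a minimal Python sketch of the kind of training set she describes. The folder layout, file names and ten-year interval are invented for illustration; the point is only that the meme yields labeled pairs of the same face a known number of years apart.

    # One invented folder per person, each holding a 2009 and a 2019 photo.
    from pathlib import Path

    def build_pairs(root: Path) -> list[tuple[Path, Path, int]]:
        """Collect (photo_then, photo_now, years_elapsed) examples."""
        pairs = []
        for person_dir in root.iterdir():
            then, now = person_dir / "2009.jpg", person_dir / "2019.jpg"
            if then.exists() and now.exists():
                # Two faces of the same person, a known 10 years apart --
                # exactly the labeled progression an aging model wants.
                pairs.append((then, now, 10))
        return pairs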
  3. The news was first reported by the German newspaper Bild am Sonntag: German regulators are preparing to order changes to the way Facebook's platforms handle the privacy and personal data of their users. The watchdog wants the social network giant to change the way it collects and shares users' personal data so that it complies with privacy laws.

The Federal Cartel Office has been monitoring Facebook's conduct since at least 2015, focusing on the way the company gathers data and shares it with third-party apps, including WhatsApp and Instagram. "Germany's antitrust watchdog plans to order Facebook to stop gathering some user data, a newspaper reported on Sunday," Reuters reported. "The Federal Cartel Office, which has been investigating Facebook since 2015, has already found that the social media giant abused its market dominance to gather data on people without their knowledge or consent."

The Cambridge Analytica privacy scandal and the misinformation campaigns carried out by Russia-linked APT groups have fueled discussion about the importance of monitoring the social network's activity. It is not yet clear how Facebook will have to comply with the German request. Experts believe the watchdog will set a deadline for compliance rather than demand that the changes be applied immediately. "A Facebook spokeswoman said the company disputes the watchdog's findings and will continue to defend this position," Reuters concluded. Source
  4. Facebook staff discussed charging companies for access to user data before ultimately deciding against such a policy, according to reports. The internal discussions were revealed by improperly redacted court documents, released as part of Facebook's lawsuit against American software developer Six4Three last year.

According to Ars Technica and the Wall Street Journal, an 18-page court filing contains three pages that were supposed to be blacked out because they contain "sensitive discussion of Facebook's internal strategic analysis of third-party applications", Facebook said in other court filings. But while the sensitive discussions were masked with a black bar, the underlying text was not removed from digital versions of the documents, allowing it to be uncovered. (A short sketch below shows why overlay-only redaction fails.)

That text reportedly shows Facebook staff discussing how to use access to user data to extract higher advertising spend from major clients, in emails dating from 2012 and 2013. The conversations occurred at roughly the time that Facebook decided to change the way third-party developers could access user data, which had the effect of closing the hole through which Cambridge Analytica partner GSR had managed to extract the personal information of millions of Facebook users.

In the filings, the Wall Street Journal reports, one employee proposes blocking access "in one go to all apps that don't spend … at least $250k a year to maintain access to the data". Elsewhere, the filings suggest that Facebook offered to extend the length of time Tinder could continue using the old, more permissive terms of access in exchange for a licence for the dating company's trademark on the term "Moments".

There is no suggestion that Facebook acted on the proposals, and the company has always maintained that it tightened the restrictions on what could be done with its public access for privacy and security reasons.

The filings are drawn from a set of Facebook emails obtained by Six4Three in the discovery portion of its lawsuit against the social network and subsequently sealed by the Californian court. On Sunday, however, the UK parliament seized copies of the emails from Six4Three's chief executive, and quoted from them during its grilling of a Facebook executive on Tuesday. During the hearings, Labour's Clive Efford asked whether "apps were shut down on the basis that they could not pay a large sum of money for mobile advertising", and whether apps were "whitelisted" based on their advertising spend. Richard Allan, Facebook's head of public policy in Europe, said no to both.

In a statement, Facebook's director of developer platforms and programs, Konstantinos Papamiltiadis, told the Guardian: "As we've said many times, the documents Six4Three gathered for this baseless case are only part of the story and are presented in a way that is very misleading without additional context." Facebook declined a request to provide the Guardian with the emails in their additional context, saying: "Evidence has been sealed by a California court so we are not able to disprove every false accusation."

Papamiltiadis added: "That said, we stand by the platform changes we made in 2015 to stop a person from sharing their friends' data with developers. Any short-term extensions granted during this platform transition were to prevent the changes from breaking user experience. To be clear, Facebook has never sold anyone's data. Our APIs have always been free of charge and we have never required developers to pay for using them, either directly or by buying advertising." Source
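As an aside, the redaction failure described above is easy to demonstrate: drawing a black rectangle over text in a PDF hides it visually but leaves the text layer intact, so any standard extraction tool recovers it. A minimal sketch using the pdfminer.six library; "filing.pdf" is a placeholder file name, not the actual court document.

    # If a "redaction" is only a drawn black bar, the text layer underneath
    # survives and ordinary text extraction returns it.
    from pdfminer.high_level import extract_text  # pip install pdfminer.six

    text = extract_text("filing.pdf")  # placeholder name for a redacted PDF
    print(text)  # includes the text "hidden" beneath the black bars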
  5. Police in Palo Alto, California received a phone call on Tuesday night from a man who reportedly claimed to be a Facebook executive who had shot his wife and was holding hostages in his home. A rapid response team arrived at the scene and eventually determined the exec was the victim of a hoax.

According to a press release issued by the city of Palo Alto, a police dispatcher received the call around 9 PM on Tuesday night from a blocked number. The caller claimed he shot his wife, tied up his children, and had multiple pipe bombs in his possession. He warned that any police who attempted to intervene would be harmed. The Palo Alto Daily Post spoke with Police Agent Marianna Villaescusa, who explained that the caller used the name of a Facebook cybersecurity executive.

A team of police officers and crisis negotiators quickly arrived at the executive's home and used a loudspeaker to request that he step outside. The confused victim complied with the request and explained that he didn't understand what was happening. After police handcuffed the man and an unidentified woman who was present at the scene, they searched the residence and determined the call was a hoax.

Palo Alto Police told Gizmodo that they could not immediately confirm Villaescusa's claim that the victim was a Facebook exec. Facebook did not immediately respond to a request for comment; however, Ars Technica reported that the company offered the following statement: "We thank the city of Palo Alto for their swift and thoughtful response. They quickly identified this as a prank, and we are glad that our colleague and his family are safe."

According to Villaescusa's account, the hoaxer stayed on the line with police until 10:02 PM, and police weren't able to determine his identity. Villaescusa, who works as a negotiator for the department, said there have been a number of swatting incidents in the region over the last 18 months. Most recently, a hoaxer targeted a "high-profile person in the cryptocurrency world," according to the Daily Post.

Calling in fake emergencies with the goal of sending a massive police presence to a victim's home is an incredibly dangerous crime. Victims have been murdered by responding officers in the past. We've asked Facebook if it is aware of the caller's motive and will update this post when we receive a reply. Source
  6. Facebook has employed a UK fact-checking service to help it deal with the spread of fake news. Full Fact, a charity founded in 2010, will review stories, images and videos and rate them based on accuracy. It said that it will focus on misinformation that could damage people's health or safety or undermine democratic processes. Facebook said it was working "continuously" to reduce the spread of misinformation.

Sarah Brown, training and news literacy manager at Facebook, said: "People don't want to see false news on Facebook, and nor do we. We're delighted to be working with an organisation as reputable and respected as Full Fact to tackle this issue."

In a blogpost, Full Fact explained that users can flag up content they think may be false, and its team will rate the stories as true, false or a mixture of accurate and inaccurate content. It will only be checking images, videos and articles presented as fact-based reporting. Other content, such as satire and opinion, will be exempt. If something is found to be fake, it will appear lower in the news feed but will not be deleted.

It will be tackling "everything from dangerous cancer 'cures' to false stories spreading after terrorist attacks, to fake content about voting processes ahead of elections", the charity said. "This isn't a magic pill. Fact-checking is slow, careful, pretty unglamorous work and realistically we know we can't possibly review all the potentially false claims that appear on Facebook every day. But it is a step in the right direction."

Facebook has faced accusations from politicians in the US and the UK that it is helping spread misinformation that can have an effect on the way people vote. The Brexit referendum and the 2017 general election were both found to have been tarnished by fake news, and social media firms have been threatened with regulation if they fail to do something about the issue.

Four million views

Chief executive Mark Zuckerberg appeared before the US Congress in April to talk about how Facebook is tackling false reports, but has so far failed to respond to requests from a UK parliamentary inquiry to answer its questions face to face. The social network now works with fact-checkers in more than 20 countries.

To illustrate the problem that fact-checking services and social networks face, the BBC has learned that a video went live last weekend falsely suggesting that smart meters emit radiation levels that are harmful to health. The original video had more than four million views in four days before being taken down.

Sacha Deshmukh, chief executive at campaigners Smart Energy GB, said: "Smart Energy GB welcomes the news today from Facebook to take action on the phenomenon of misleading videos on its platform in the UK. But for this to work effectively, Facebook must guarantee that the speed of action will match the speed with which such misleading stories spread. Facebook provided an environment in which, over 72 hours, the video attracted more than four million views. This is clearly unacceptable and, moving forward, the challenge for Facebook is whether their new system will be nimble enough to swiftly fact-check misleading content and protect their users." Source
  7. from the this-is-stupid dept

It should be well understood at this point that attempts by internet platforms to automagically do away with sexualized content on their sites via algorithms are... imperfect, if we want to be kind. The more accurate description is to say that these filters are so laughably horrible at actually filtering out objectionable content that they seem farcical. When, for instance, Tumblr can't tell the difference between porn and pictures of Super Mario villains, and when Facebook can't do likewise between porn and bronze statues or educational breast cancer images consisting of stick figures... well, it's easy to see that there's a problem. (A toy sketch of the kind of rule that fails this way follows this post.)

Notably, some of the examples above, and many others, are years old. You might have thought that in the intervening years the most prominent sites would have gotten their shit together. You would be decidedly wrong, as evidenced by Facebook's refusal to allow Devolver Digital, the publisher of the forthcoming video game GRIS, to publish the launch trailer for the game, due to its sexual content.

Did you spot the sexual content? I know you probably think you did. Or, at least, you think you know what confused the filters, and you probably think it had something to do with the close-up on the female character's face. Well, ha ha, joke's on all of us, because it was the outline image of a crumbling sculpture that set off Facebook's puritanical alarms, for... reasons?

Devolver Digital appealed this with Facebook, but, amazingly, that appeal was rejected by Facebook, which argued for some reason that it "doesn't allow nudity." Except, of course, there is no damned nudity in the trailer. In fact, there isn't anything even remotely close to nudity. This is about as clean as it gets.

Let's go to the folks at Devolver Digital for a reaction to the failed appeal. A Devolver representative tells Kotaku "this is stupid". I could try to add something to that, but why bother? Facebook filters: this is stupid. Source
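For a sense of why such filters flag statues and line art, consider a deliberately crude classifier of the sort sometimes used as a baseline: score an image by its fraction of "skin-toned" pixels. This is a toy Python sketch using the Pillow library, not any platform's actual system; the color rule and threshold are invented.

    # Toy "nudity" filter: fraction of skin-toned pixels. A bronze statue or
    # a warm-toned illustration clears the threshold with no nudity in sight.
    from PIL import Image  # pip install Pillow

    def skin_ratio(path: str) -> float:
        pixels = list(Image.open(path).convert("RGB").getdata())
        skin = sum(1 for r, g, b in pixels
                   if r > 95 and g > 40 and b > 20 and r > g and r > b)
        return skin / len(pixels)

    def looks_nsfw(path: str, threshold: float = 0.4) -> bool:
        # The rule is pure color statistics; it knows nothing about what the
        # image depicts -- hence the absurd false positives.
        return skin_ratio(path) > threshold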
  8. Twitter will be rolling out beta updates including “conversational”-style speech bubbles and indented/color-coded replies based on whether you follow another user in the coming weeks, the Verge reported on Wednesday. Other features under consideration, but which will not be available in the forthcoming beta, include green-colored status availability icons that might indicate whether a user is online. Screenshots posted by Twitter’s director of product management, Sara Haider, to the site illustrate what these changes like. Pretty much anyone who has already gone down the Twitter rabbit hole—something that as a longtime user of the platform I cannot, in all honesty, recommend—will be familiar with the competitor this looks almost exactly like: Facebook. Just, you know, cropped to leave out the hordes of neo-Nazis, trolls, and white supremacists that continue to flood Twitter. Screenshots of a potential Twitter redesign. Once you see it, it’s hard to ignore. This looks like Facebook, except on Facebook you usually only have to put up with someone using the display name Reich_Daddy_420 if they’re your great aunt’s second cousin or venture into your private message requests or something. The intent of these changes would obviously seem to be amping up user engagement. That’s something that would be required for Twitter to maintain its streak in Q4 2018 of actually making a profit for once, despite apparently hitting a wall on U.S. user growth. Other features under development like “ice breaker” tweets, the Verge wrote, are also apparently intended to encourage more conversations on the site: Another feature that could roll out in the future is “ice breaker” tweets, which are supposed to help start a conversation about a specific topic. Users would be able to post their own ice breakers for others to respond to. An additional feature would let people attach tags to their tweets that explained what they were in reference to, like a TV show. While Haider wrote in her tweet that the changes were intended to “make it feel more conversational here,” they also take the site further away from its original conception as a strictly linear flow of posts—somewhat along the lines of its much-maligned decision to inject algorithmically curated tweets into feeds. And conversations are all well and good, except if the one of the people having the conversation is you and the others are the aforementioned hordes of Reich_Daddies. A BuzzFeed News report on Wednesday emphasized that Twitter’s infamously spineless “abuse report infrastructure remains opaque and sometimes confounding,” and reports of the site failing to take action on even the most cut-and-dry issues of harassment continue to mount up. As BuzzFeed noted, an editor at American Jewish magazine Tablet, Yair Rosenberg, recently reported a user with a bio reading “Account for my son Yair Jr controlled by @yair_rosenberg.” Rosenberg reported a tweet from the account with swastikas photoshopped over a photo of a baby for what seemed like cut-and-dry violations of policies on abusive behavior and impersonation, receiving a prompt reply that said: “We reviewed your report carefully and found that there was no violation of the Twitter rules against abusive behavior.” More at The [The Verge] Source
  9. steven36

    Hey, Zuck. You're no Oprah

Facebook CEO Mark Zuckerberg says his personal challenge for the year will be to host discussions about technology. That should be riveting. The Oprah of tech?

It's hard competing with yourself. Every year, Facebook CEO Mark Zuckerberg commits to a new personal challenge. Every year, the challenge is greater than last year's. Why, last year Zuckerberg promised he'd fix Facebook. What could be bigger than that? Alright, the picky will say that was a multi-year commitment, while the very picky will suggest that the only fixing Zuckerberg has done so far is to affix his company in the position of leading tech pariah.

Still, he's moving ever upward. He's just announced that his personal challenge for 2019 is, oh, this: "My challenge for 2019 is to host a series of public discussions about the future of technology in society -- the opportunities, the challenges, the hopes, and the anxieties."

Zuckerberg as Oprah? That ought to be riveting, as the Facebook CEO talks to "leaders, experts, and people in our community from different fields." Yes, leaders aren't experts, as he himself has so brilliantly proved over the last few years. Please imagine, though, that someone so uniquely ill-suited to warm discussion with other humans is now going to be hosting discussions with other humans about how tech has set the world on fire.

This is something Zuckerberg appears to realize -- or, at least, his PR handlers do. In his announcement, he concedes that "I'm an engineer, and I used to just build out my ideas and hope they'd mostly speak for themselves. But given the importance of what we do, that doesn't cut it anymore."

Dear Mark, it never cut it. The fact that humanity sat back and watched as you and your fellow tech leaders tried to get everyone to think and behave like engineers doesn't mean that, even back then, you were all wise. You were a bunch of kids who thought you were so, so clever. Equally, you thought the way humans went about things was largely quite stupid. Now that you've grown up (a little), you see that there are many more dimensions to the human conundrum than you realized.

Zuckerberg admits that he's (finally) leaving his comfort zone of aloof power and total control. He writes: "I'm going to put myself out there more than I've been comfortable with and engage more in some of these debates about the future, the tradeoffs we face, and where we want to go."

But, given that he's outside his comfort zone, how can he drive these discussions to bear significant fruit? Won't there be a lot of staring into space? Moreover, if you're a leader and/or an expert, why would you want to appear on a show that's designed to give Zuckerberg and Facebook renewed credibility?

Now if Zuckerberg could get Oprah to host these affairs, that might be interesting. She might start by looking benignly at the Facebook CEO and asking: "Why, Mark. Why?" Source
  10. PHNOM PENH (Reuters) - A Cambodian court jailed a man on Wednesday for three years for insulting the king in Facebook posts, the second known conviction under a new lese majeste law enacted last year, which rights groups fear could be used to stifle dissent.

"The court announced a verdict against Ieng Cholsa which sentenced him to 3 years in prison and ordered him to pay five million riels ($1,250)," Phnom Penh Municipal Court spokesman Y Rin said. The Facebook posts, which the court found had insulted King Norodom Sihamoni, were uploaded in June last year, Y Rin said. Facebook did not immediately respond to a request for comment. The defendant could not be reached for comment, and the court did not say whether he had a lawyer.

Cambodia's lese majeste law was unanimously adopted by parliament in February last year. Rights groups expressed concerns at the time that the law, which is similar to legislation in neighboring Thailand, could be used to silence government critics. Last October, a court in the northern province of Siem Reap jailed a member of the dissolved opposition Cambodia National Rescue Party (CNRP) under the law. The Supreme Court dissolved the CNRP in 2017 at the government's request after it was found guilty of plotting to take power with the help of the United States – an accusation the party and Washington have denied.

Prime Minister Hun Sen's ruling Cambodian People's Party (CPP) won a general election in July last year which critics said was flawed because of a lack of a credible opposition, among other factors. Source
  11. HANOI (Reuters) - Facebook has violated Vietnam's new cybersecurity law by allowing users to post anti-government comments on the platform, state media said on Wednesday, days after the controversial legislation took effect in the communist-ruled country. Despite economic reforms and increasing openness to social change, Vietnam's Communist Party retains tight media censorship and does not tolerate dissent.

"Facebook had reportedly not responded to a request to remove fanpages provoking activities against the state," the official Vietnam News Agency said, citing the Ministry of Information and Communication. In a statement, a Facebook spokeswoman said, "We have a clear process for governments to report illegal content to us, and we review all these requests against our terms of service and local law." She did not elaborate.

The ministry said Facebook had also allowed personal accounts to upload posts containing "slanderous" content, anti-government sentiment and defamation of individuals and organizations, the agency added. "This content had been found to seriously violate Vietnam's Law on cybersecurity" and government regulations on the management, provision and use of internet services, it quoted the ministry as saying.

Global technology companies and rights groups have previously said that the cybersecurity law, which took effect on Jan. 1 and includes requirements for technology firms to set up local offices and store data locally, could undermine development and stifle innovation in Vietnam. Company officials have privately expressed concerns that the new law could make it easier for the authorities to seize customer data and expose local employees to arrest.

Facebook had refused to provide information on "fraudulent accounts" to Vietnamese security agencies, the agency said in Wednesday's report. The information ministry is also considering taxing Facebook on advertising revenue from the platform. The report cited a market research company as saying $235 million was spent on advertising on Facebook in Vietnam in 2018, but that Facebook was ignoring its tax obligations there. In November, Vietnam said it wanted half of social media users on domestic social networks by 2020 and plans to prevent "toxic information" on Facebook and Google. Source
  12. More than a dozen former Facebook employees detailed how the company's leadership and its performance review system have created a culture where any dissent is discouraged.

  • Employees say Facebook's stack ranking performance review system drives employees to push out products and features that drive user engagement without fully considering potential long-term negative impacts on user experience or privacy.
  • Reliance on peer reviews creates an underlying pressure for Facebook employees to forge friendships with colleagues for the sake of career advancement.

At a company-wide town hall in early October, numerous Facebook employees got in line to speak about their experiences with sexual harassment. The company called the special town hall after head of policy Joel Kaplan caused an internal uproar by appearing at the congressional hearing for Judge Brett Kavanaugh.

A young female employee was among those who got up to speak, addressing her comments directly to COO Sheryl Sandberg. "I was reticent to speak, Sheryl, because the pressure for us to act as though everything is fine and that we love working here is so great that it hurts," she said, according to multiple former Facebook employees who witnessed the event. "There shouldn't be this pressure to pretend to love something when I don't feel this way," said the employee, setting off a wave of applause from her colleagues at the emotional town hall in Menlo Park, California.

The episode speaks to an atmosphere at Facebook in which employees feel pressure to place the company above all else in their lives, fall in line with their manager's orders and force cordiality with their colleagues so they can advance. Several former employees likened the culture to a "cult."

This culture has contributed to the company's well-publicized wave of scandals over the last two years, such as governments spreading misinformation to try to influence elections and the misuse of private user data, according to many people who worked there during this period. They say Facebook might have caught some of these problems sooner if employees were encouraged to deliver honest feedback.

Amid these scandals, Facebook's share price fell nearly 30 percent in 2018 and nearly 40 percent from its July peak, resulting in a loss of more than $252 billion in market capitalization. Meanwhile, Facebook's reputation as one of the best places in Silicon Valley to work is starting to show some cracks. According to Glassdoor, which lets employees anonymously review their workplaces, Facebook fell from being the best place to work in the U.S. to No. 7 in the last year.

But employees don't complain in the workplace. "There's a real culture of 'Even if you are f---ing miserable, you need to act like you love this place,'" said one ex-employee who left in October. "It is not OK to act like this is not the best place to work."

This account is based on conversations with more than a dozen former Facebook employees who left between late 2016 and the end of 2018. These people requested anonymity in describing Facebook's work culture, including its "stack ranking" employee performance evaluation system and their experiences with it, because none is authorized by Facebook to talk about their time there. This stack ranking system is similar to the one notoriously used by Microsoft before that company abandoned it in 2013, the former Facebook employees said. Facebook declined to comment on former employees' characterization of the workplace as "cult-like."
Inside the bubble

Former employees describe a top-down approach where major decisions are made by the company's leadership, and employees are discouraged from voicing dissent — in direct contradiction to one of Sandberg's mantras, "authentic self." For instance, at an all-hands meeting in early 2017, one employee asked Facebook Vice President David Fischer a tough question about a company program. Fischer took the question and answered, but within hours, the employee and his managers received angry calls from the team running that program, this person said.

"I never felt it was an environment that truly encouraged 'authentic self' and encouraged real dissent, because the times I personally did it, I always got calls," said the former manager, who left the company in early 2018.

The sentiment was echoed by another employee who left in 2017. "What comes with scale and larger operations is you can't afford to have too much individual voice," said this person. "If you have an army, the larger the army is, the less individuals have voice. They have to follow the leader." In this employee's two years at Facebook, his team grew from a few people to more than 50. He said "it was very much implied" to him and his teammates that they trust their leaders, follow orders and avoid having hard conversations.

The company's culture of no dissent prevented employees from speaking up about the impact that News Feed had on influencing the 2016 U.S. election, this person added. The message was clear in August 2016, when the company laid off the editorial staff of its trending news team shortly after some workers on that team leaked to the press that they were suppressing conservative-leaning stories. Employees were further discouraged from speaking up following the election, when CEO Mark Zuckerberg brushed off the accusation that Facebook could have impacted the election, calling that idea "crazy."

The former employee described "a bubble" at the company in which employees are dissuaded from giving managers critical feedback or challenging decisions. "I'm pretty disappointed in that because I have a lot of respect for Sheryl, and she preaches about giving hard feedback," the employee said. "All the things we were preaching, we weren't doing enough of them. We weren't having enough hard conversations. They need to realize that. They need to reflect and ask if they're having hard conversations or just being echo chambers of themselves."

Show no weakness

Many former employees blamed the cult-like atmosphere partly on Facebook's performance review system, which requires employees to get reviews from approximately five of their peers twice a year. This peer review system pressures employees to forge friendships with colleagues at every possible opportunity, whether it be going to lunch together each day or hanging out after work. "It's a little bit of a popularity contest," said one manager who left the company in 2017. "You can cherry-pick the people who like you — maybe throw in one bad apple to equalize it."

Peers can provide feedback directly to their colleagues, or they can send the reviews to the employee's manager. That feedback is typically treated as anonymous and cannot be challenged. "You have invisible charges against you, and that figures mightily into your review," said an employee who left in October. "Your negative feedback can haunt you for all your days at Facebook."
Several former employees said that peers and managers iced them out because they had personal commitments or problems that required significant attention outside of work. For instance, one employee who left in recent weeks said a manager was critical in a public team meeting because the employee didn't attend a team-building event outside work. At the time, this person was going through a divorce. "She definitely marked me down for not attending those team-building events, but I couldn't attend because I was going through my own issues and needed work-life balance," said the employee. Employees are not required to attend after-hours events, according to a Facebook spokeswoman, who added that collaboration is important at the company.

Another manager who also left the company in recent weeks said she once took multiple weeks of vacation instead of going on medical leave to treat a major illness. She says she did this based on advice from her supervisor. "I was afraid that if I told too many people or took too much time off, I would be seen as unable to do my job," the former manager said. "I was scared that if I let up in any way, shape or form they would crumble me, and they did."

Ironically, one of the best ways to see the desperation to be liked is to follow Facebook employees on Facebook itself. Employees parade the company's projects and post any report on the benefits of working at the company or the positive impact the company is making on the world. This is in part a show for peers and managers, former employees said. "People are very mindful about who they're connected with on Facebook who they also work with, and how what they're posting will put them in a favorable light to their managers," said an employee who left in 2016. As with many social media users, the online content does not always reflect the offline emotions. "There's so many people there who are unhappy, but their Facebook posts alone don't reflect the backdoor conversations you have with people where they're crying and really unhappy," she said.

How employees are graded

Twice a year, this peer feedback comes into play in so-called calibration meetings, where employees are given one of seven grades. Managers deliberate with their peers to grade employees at all levels below them. As the review process moves up the chain over the course of multiple weeks, lower-level managers gradually leave the room, until the company's vice presidents finish the calibration. At this point, Zuckerberg and Sandberg sign off that their vice presidents have done due diligence, and each employee's grade for the past six months is finalized.

But there's a companywide limit on the percentage of employees who can receive each grade. So during the review process, managers compete against their peer managers to secure strong grades for their direct reports. Managers are compelled to vouch fiercely for their favorite employees, but don't speak up for employees they don't like or who have previously received poor ratings. "There's a saying at Facebook that once you have one bad half, you're destined for bad halves the rest of your time there. That stigma will follow you," said a manager who left in September.
According to two former executives, the grade breakdown is approximately as follows:

  • "Redefine," the highest grade: fewer than 5 percent of employees
  • "Greatly exceeds expectations": 10 percent
  • "Exceeds": 35 percent
  • "Meets all": 35 to 40 percent
  • "Meets most," a low grade that puts future employment at risk: most of the remaining 10 to 15 percent
  • "Meets some": extremely rare, and seen as an indication that you're probably getting fired, according to multiple employees
  • "Does not meet": exceptionally rare, as most employees are fired before they get to that level

The distribution of these grades is not a hard limit but rather recommended guidance for managers to follow, according to a Facebook spokeswoman. (A toy sketch below translates these shares into headcounts.)

Facebook isn't the only tech company to use a performance evaluation system where a percentage of employees is pegged to each performance grade, meaning that there's always a fixed population at risk of being fired. Pioneered by Jack Welch at General Electric in the 1990s and sometimes known as "stack ranking," this method is fairly common in Silicon Valley and was most notoriously used by Microsoft until the company got rid of it in 2013 after widespread employee complaints.

Stack ranking systems work well at companies with competitive environments that compare employees on objectively measurable performance, according to Alexandra Michel, a professor at the University of Pennsylvania who studies work culture. However, the system tends to break down, cause distrust among employees and create a political atmosphere when applied by companies that measure performance subjectively, or that demand employee loyalty in exchange for benefits and the promise of career advancement, Michel said. "If you have an environment that is completely cutthroat like Wall Street, this system works pretty well," Michel said. "But if you have employees who come in and want to be taken care of, want to learn, want to be part of a warm group and people who care about them — that's a very jarring mismatch."

Since early 2017, Facebook has become more rigorous about distributing grades by specific percentages, according to multiple former employees. "I had a boss literally say to me, 'You don't have enough people in meets some, meets most and meets all,'" said a former director who left earlier this year. "I was finding myself making up things to be hypercritical of employees, to give them lower ratings than they really deserved."

These twice-yearly reviews encourage employees to be particularly productive around June and December, working nights and weekends as they race to impress bosses before reviews, which are typically completed in August and February. It's especially true in December, the half Facebook predominantly uses to determine which employees will receive promotions. This rush causes employees to focus on short-term goals and push out features that drive user engagement and improve their own metrics without fully considering potential long-term negative impacts on user experience or privacy, multiple former employees said. "If you're up for promotion, and it's based on whether you get a product out or not, you're almost certainly going to push that product out," a former engineer said. "Otherwise you're going to have to wait another year to get that promotion."
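The fixed distribution is what makes the system zero-sum: translate the approximate percentages above into headcounts and some share of every review pool lands in the at-risk grades by construction, regardless of absolute performance. A small illustrative Python sketch; the shares are rounded from the breakdown reported above, not official figures.

    # Illustrative quota math for a fixed-distribution ("stack ranking") review.
    # Shares approximate the breakdown reported by the two former executives.
    GRADE_SHARES = {
        "Redefine": 0.05,
        "Greatly exceeds expectations": 0.10,
        "Exceeds": 0.35,
        "Meets all": 0.375,            # midpoint of the reported 35-40%
        "Meets most or below": 0.125,  # the remaining, at-risk 10-15%
    }

    def grade_quotas(headcount: int) -> dict[str, int]:
        """Fixed shares imply a fixed number of low grades, whoever gets them."""
        return {g: round(headcount * s) for g, s in GRADE_SHARES.items()}

    print(grade_quotas(1000))
    # ~125 of every 1,000 employees land in the at-risk band by construction.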
As employees begin gathering peer reviews and buckling up for their next round of calibrations in February, the process will reveal how employees are thinking about the company after a bruising 2018, according to employees who left recently. There will be an extra level of anxiety around the process this time, one person said. Folks who have been wanting to leave will be hoping to notch a high rating so they can depart on good terms. Others who are committed to the company will be torn between speaking up about their concerns and staying in line for the sake of their careers.

Any changes to the company's grading process this time could reveal whether Facebook is taking special steps to keep valued employees around, or continuing along the same lines. "This review cycle will be particularly colorful for them," according to a director who left recently. Source
  13. Intel and Facebook are working together on a new, cheaper Artificial Intelligence (AI) chip that will help companies with high workload demands. At CES 2019 on Monday, Intel announced the "Nervana Neural Network Processor for Inference" (NNP-I). "This new class of chip is dedicated to accelerating inference for companies with high workload demands and is expected to go into production this year," Intel said in a statement. Facebook is one of Intel's development partners on the NNP-I.

Navin Shenoy, Intel Executive Vice President in the Data Centre Group, announced that the NNP-I will go into production this year. The new "inference" AI chip would help Facebook and others deploy machine learning more efficiently and cheaply. Intel began its AI chip development after acquiring Nervana Systems in 2016.

Intel also announced that, together with Alibaba, it is developing AI-powered athlete tracking technology intended for deployment at the Olympic Games 2020 and beyond. The technology uses existing and upcoming Intel hardware and Alibaba cloud computing technology to power a cutting-edge deep learning application that extracts 3D forms of athletes in training or competition. "This technology has incredible potential as an athlete training tool and is expected to be a game-changer for the way fans experience the Games, creating an entirely new way for broadcasters to analyse, dissect and re-examine highlights during instant replays," explained Shenoy.

Intel and Alibaba, together with partners, aim to deliver the first AI-powered 3D athlete tracking during the Olympic Games Tokyo 2020. "We are proud to partner with Intel on the first-ever AI-powered 3D athlete tracking technology, where Alibaba contributes its best-in-class cloud computing capability and algorithmic design," said Chris Tung, CMO, Alibaba Group. Source
  14. Prevent Facebook from tracking you around the web. The Facebook Container extension for Firefox helps you take control and isolate your web activity from Facebook.

What does it do?
Facebook Container works by isolating your Facebook identity into a separate container that makes it harder for Facebook to track your visits to other websites with third-party cookies. (A conceptual sketch of this cookie isolation appears at the end of this item.)

How does it work?
Installing this extension closes your Facebook tabs, deletes your Facebook cookies, and logs you out of Facebook. The next time you navigate to Facebook it will load in a new blue-colored browser tab (the "Container"). You can log in and use Facebook normally when in the Facebook Container. If you click on a non-Facebook link or navigate to a non-Facebook website in the URL bar, these pages will load outside of the container. Clicking Facebook Share buttons on other browser tabs will load them within the Facebook Container. You should know that using these buttons passes information to Facebook about the website that you shared from.

Which website features will not function?
Because you will be logged into Facebook only in the Container, embedded Facebook comments and Like buttons in tabs outside the Facebook Container will not work. This prevents Facebook from associating information about your activity on websites outside of Facebook with your Facebook identity. In addition, websites that allow you to create an account or log in using your Facebook credentials will generally not work properly. Because this extension is designed to separate Facebook use from use of other websites, this behavior is expected.

What does Facebook Container NOT protect against?
It is important to know that this extension doesn't prevent Facebook from mishandling the data that it already has, or permitted others to obtain, about you. Facebook will still have access to everything that you do while you are on facebook.com, including your Facebook comments, photo uploads, likes, any data you share with Facebook connected apps, etc. Rather than stop using a service you find valuable, we think you should have tools to limit what data others can obtain. This extension focuses on limiting Facebook tracking, but other ad networks may try to correlate your Facebook activities with your regular browsing. In addition to this extension, you can change your Facebook settings, use Private Browsing, enable Tracking Protection, block third-party cookies, and/or use the Firefox Multi-Account Containers extension to further limit tracking.

What data does Mozilla receive from this extension?
Mozilla does not collect data from your use of the Facebook Container extension. We do receive the number of times the extension is installed or removed. Learn more

Other Containers
Facebook Container leverages the Containers feature that is already built into Firefox. When you enable Facebook Container, you may also see Containers named Personal, Work, Shopping, and Banking while you browse. If you wish to use multiple Containers, you'll have the best user experience if you install the Firefox Multi-Account Containers extension. Learn more about Containers on our support site.

Known Issues
When Facebook is open and you navigate to another website using the same tab (by entering an address, doing a search, or clicking a bookmark), the new website will be loaded outside of the Container and you will not be able to navigate back to Facebook using the back button in the browser.

NOTE: If you are a Multi-Account Containers user who has already assigned Facebook to a Container, this extension will not work. In an effort to preserve your existing Container setup and logins, this add-on will not include the additional protection to keep other sites out of your Facebook Container. If you would like this additional protection, first unassign facebook.com in the Multi-Account Containers extension, and then install this extension.

What version of Firefox do I need for this?
This extension works with Firefox 57 and higher on Desktop. Note that it does not work on other browsers and it does not work on Firefox for mobile. If you believe you are using Firefox 57+, but the install page is telling you that you are not on a supported browser, you can try installing by selecting or copying and pasting this link. (This may be occurring because you have set a preference or installed an extension that causes your browser to obscure its user agent for privacy or other reasons.)

How does this compare to the Firefox Multi-Account Containers extension?
Facebook Container specifically isolates Facebook and works automatically. Firefox Multi-Account Containers is a more general extension that allows you to create containers and determine which sites open in each container. You can use Multi-Account Containers to create a container for Facebook and assign facebook.com to it. Multi-Account Containers will then make sure to only open facebook.com in the Facebook Container. However, unlike Facebook Container, Multi-Account Containers doesn't prevent you from opening non-Facebook sites in your Facebook Container. So users of Multi-Account Containers need to take a bit of extra care to make sure they leave the Facebook Container when navigating to other sites. In addition, Facebook Container assigns some Facebook-owned sites like Instagram and Messenger to the Facebook Container. With Multi-Account Containers, you will have to assign these in addition to facebook.com. Facebook Container also deletes Facebook cookies from your other containers on install and when you restart the browser, to clean up any potential Facebook trackers. Multi-Account Containers does not do that for you.

Report Issues
If you come across any issues with this extension, please let us know by filing an issue here. Thank you!

-----
Release Notes:
This release also asks for permission to clear recent browsing history, so we can improve its protection and its integration with Multi-Account Containers.
83ae8bf fix #183: Can't search Google/other sites with string "fbclid".

Add-on's Permissions. This add-on can:
  • Access your data for all websites
  • Clear recent browsing history, cookies, and related data
  • Monitor extension usage and manage themes
  • Access browser tabs

-----
Homepage/Download
https://addons.mozilla.org/en-US/firefox/addon/facebook-container/
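The core idea, cookie isolation, is simple to model outside the browser. Here is a hedged Python sketch using the requests library, which is emphatically not how Firefox implements Containers: each "container" is just a separate cookie jar, so an identity cookie set in one jar is invisible to requests made from the other. The cookie name and domain are invented.

    # Conceptual model of container isolation: one cookie jar per container.
    import requests  # pip install requests

    facebook_container = requests.Session()  # stand-in for the blue container
    default_container = requests.Session()   # stand-in for ordinary browsing

    # A tracker sets an identity cookie inside the Facebook container...
    facebook_container.cookies.set("c_user", "12345", domain="example.com")

    # ...but the default container's jar never sees it, so requests made
    # there carry no Facebook identity.
    print(facebook_container.cookies.get("c_user"))  # 12345
    print(default_container.cookies.get("c_user"))   # None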
  15. Facebook's data collecting practices have once again been called into question, after a new report revealed that it "routinely tracked" people who do not use the app.

Privacy International analysed 34 apps on the Android mobile operating system with user bases of between 10 million and 500 million. The charity began the study after the scandal surrounding the now-defunct London-based political consultancy Cambridge Analytica, which was accused of improperly obtaining personal information on behalf of political clients and using it to influence the outcome of the 2016 US presidential election and the UK Brexit referendum.

Privacy International researchers found that 23 apps sent data to Facebook the moment a user opened them. Their report, which was presented at the Chaos Communication Congress in Leipzig, Germany, stated: "Facebook routinely tracks users, non-users and logged-out users outside its platform through Facebook Business Tools. App developers share data with Facebook through the Facebook Software Development Kit (SDK), a set of software development tools that help developers build apps for a specific operating system."

The apps included language-learning tool Duolingo, job database Indeed and flight search engine Skyscanner – all of which were tested between August and December 2018. When contacted by Privacy International, Skyscanner said it had updated its data-collecting practices in the wake of the report. "Since receiving your letter, we released an update to our app as a priority which will stop the transmission of data via the Facebook SDK," the firm told Privacy International.

Facebook told Privacy International that sharing data is "common practice for many companies" and is useful for both users and the companies involved. "This information is important for helping developers understand how to improve their apps and for helping people receive relevant advertising in a privacy-protective way," Facebook said. "We do this in a transparent manner by explaining the practice through our Data Policy and Cookies Policy, and by using Google's advertising identifier, which can be controlled centrally by people using their device settings." Source
  16. Facebook has stepped up outsourcing to Indian IT services firms such as HCL Technologies, Wipro and Tech Mahindra for content moderation, anti-money laundering and data analytics, as it faces increased global pressure and scrutiny to curb rumours as well as fraud on the platform. The social network has increased the engagement in "the last few months" with Indian companies including Genpact and Accenture, two people familiar with the development said.

One executive of a tech firm said the combined contracts of Indian companies with Facebook are valued at over $400 million and would only increase as the social network seeks human help in resolving issues it had earlier thought technology would handle. Facebook had not responded to an email sent on Wednesday seeking comment as of press time on Sunday. Wipro and Genpact spokespersons said they would not discuss specific customer engagements. The Wipro spokesperson also cited the "silent period" ahead of the company's third-quarter results in January. HCL Technologies declined to comment. Tech Mahindra and Accenture did not respond to mails from ET.

Most of the work these companies do for Facebook out of India is for the US and Europe, the large, profitable markets where the social network has faced the most scrutiny from lawmakers, regulators and the public. Over the past year, Facebook has faced scrutiny over the data of its users being harvested by Cambridge Analytica to influence election outcomes in the US, and over its policies of sharing user data with private entities such as Netflix. After the scandals broke out, Facebook founder Mark Zuckerberg promised to have over 20,000 people working to improve security and content review on the platform.

In India, the government has already raised concerns with Facebook over the platform being used for spreading rumours. The social network has over 294 million users in India, its largest user base in the world as of October, according to data platform Statista.

On September 5, ET reported that Genpact was hiring Facebook content moderators in Indian languages as part of the social network's promise to hire over 20,000 moderators globally to keep the platform clean. The content moderators were to monitor and moderate user-generated content and video on the social website to ensure that the online community is maintained as a safe and fun environment. In advertisements to recruit people, Genpact had also explicitly asked that applicants be comfortable with any content (of) "Sexual Assault, Terrorism, Child Abuse, may be live suicidal videos and blood".

Zuckerberg, in a December 28 post on the platform, said the company has built artificial intelligence systems to automatically identify and remove 99% of content related to terrorism, hate speech and more before anyone even sees it. "We've tripled the size of our content review team to handle more complex cases that AI can't judge," Zuckerberg wrote in the post. Source
  17. WhatsApp does not allow users to search for groups in its own app, which led to the creation of other services that did.

Evidence that adverts for major brands were placed in "child abuse discovery apps" via Google and Facebook's ad networks has led to fresh calls for the tech giants to face tougher regulation. The apps involved used to be available on Google's Play Store for Android devices, and directed users to WhatsApp groups containing the illegal content. Facebook and Google said they have taken steps to address the problem. But the NSPCC charity wants a new regulator to monitor their efforts.

"WhatsApp is not doing anywhere near enough to stop the spread of child sexual abuse images on its app," said Tony Stower, head of internet safety at the child protection charity. "For too long tech companies have been left to their own devices and failed to keep children safe." The charity believes a watchdog with the power to impose large fines would give the technology firms the incentive needed to hire more staff and otherwise spend more to tackle the problem. WhatsApp is owned by Facebook.

Group searches

News site Techcrunch published details, before and after Christmas, of a two-part investigation by the Israeli child protection start-up AntiToxin Technologies and two NGOs from the country. It reported that Google and Facebook's automated advertising tech had placed adverts for household names in a total of six apps that let users search for WhatsApp groups to join - a function the chat service does not offer in its own app. Using the third-party software, it was possible to look for groups containing inoffensive material. But a search for the word "child" brought up links to join groups that clearly signalled their purpose was to share illegal pictures and videos. The BBC understands these groups were listed under different names in WhatsApp itself to make them harder to detect.

Brands whose ads were shown ahead of these search results included Amazon, Microsoft, Sprite, Dyson and Western Union.

"The link-sharing apps were mind-bogglingly easy to find and download off of Google Play," Roi Carthy, AntiToxin's chief marketing officer, told the BBC. "Interestingly, none of the apps were to be found on Apple's App Store, a point which should raise serious questions about Google's app review policies."

After the first article was published, Google removed the group-searching apps from its store. "Google has a zero-tolerance approach to child sexual abuse material and we thoroughly investigate any claims of this kind," a spokeswoman for the firm said. "As soon as we became aware of these WhatsApp group link apps using our services, we removed them from the Play store and stopped ads. These apps earned very little ad revenue and we're terminating these accounts and refunding advertisers in accordance with our policies."

Human moderators

WhatsApp messages are scrambled using end-to-end encryption, which means only the members of a group can see their contents. Group names and profile photos are, however, viewable. WhatsApp's own moderators began actively policing the service about 18 months ago, having previously relied on user reports. They use the names and profile pictures as a means to detect banned activity. Earlier this month, the firm revealed it had terminated 130,000 accounts over a 10-day period.
However, Techcrunch and the Financial Times both subsequently documented examples of groups with child abuse-related names and profile pictures that remained active. They are no longer available.

Google and Facebook say they both intend to reimburse affected advertisers.

"WhatsApp has a zero-tolerance policy around child sexual abuse," a spokesman for the service told the BBC. "We deploy our most advanced technology, including artificial intelligence, to scan profile photos and actively ban accounts suspected of sharing this vile content. Sadly, because both app stores and communications services are being misused to spread abusive content, technology companies must work together to stop it."

At present, WhatsApp has fewer than 100 human moderators, compared with more than 20,000 working on the main Facebook platform, but because of WhatsApp's nature they have less material to work with. (A sketch of that structural constraint follows after this article.)

'Vile images'

The BBC has asked several of the brands whose adverts were displayed for comment, but none has done so. Facebook noted that its Audience Network, which placed some of the promotions, checks whether an app is live in Google Play before pushing content. As a result, removal of the apps from the store meant its system would stop placing ads in copies of the apps already downloaded on people's devices. Furthermore, it said in future it would prevent ads being placed in any WhatsApp group search apps, even if Google allows them to return to its marketplace. Facebook is also refunding affected advertisers.

Even so, the NSPCC thinks the brands affected should hold the two tech firms to account. "It should be patently obvious that advertisers must ensure their money is not supporting the spread of these vile images," said a spokeswoman.

source
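The article's point about moderators having "less material to work with" is structural: with end-to-end encryption the service holds only ciphertext for message bodies, so automated detection can act only on unencrypted metadata such as group names and profile photos. The following Kotlin sketch is purely illustrative of that constraint; WhatsApp's real pipeline is not public, and every name here is invented.

    // Toy model only: shows what an E2E-encrypted service can and cannot inspect.
    data class EncryptedGroup(
        val name: String,             // visible to the service
        val profilePhotoHash: String, // visible to the service (hash-matching against known material)
        val messages: List<ByteArray> // opaque ciphertext: unreadable server-side
    )

    class MetadataScreen(
        private val bannedNameTerms: List<String>,
        private val knownAbusePhotoHashes: Set<String>
    ) {
        // Flags a group using only the metadata the service can actually see.
        fun shouldBan(group: EncryptedGroup): Boolean =
            bannedNameTerms.any { group.name.contains(it, ignoreCase = true) } ||
                group.profilePhotoHash in knownAbusePhotoHashes
        // Note what is absent: nothing here can read group.messages.
    }

This is why, per the BBC, groups "listed under different names in WhatsApp itself" evade detection: once the visible metadata is innocuous, the encrypted bodies give moderators nothing further to screen.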
  18. The shift to messaging and ephemeral content could pose challenges.

Facebook users are shifting toward ephemeral "Stories" and messaging. (Getty Images)

Facebook isn't the same social network it used to be. A decade ago, status updates had to include the word "is." Catching up with friends meant writing on their Facebook walls. Today's social media couldn't be more different. Users are sharing moments of their lives in vertical videos and photos that vanish in 24 hours through a feature called "Stories." We've become aware of the network's dark side, with data privacy violations and the spread of misinformation and hate speech. Messaging apps are all the rage. And Facebook CEO Mark Zuckerberg says he thinks people will share more information through Stories -- a feature it copied from its rival Snapchat -- than on news feeds.

"The concept [of social media] is part of the fabric of human life at this point," said Debra Aho Williamson, an analyst for eMarketer who's researched social media for nearly 15 years. "But how we share and what we share will definitely morph and change."

The shift toward ephemeral content and messaging could fundamentally alter how we use Facebook and other social media, while also making it harder to combat misinformation, election interference and hate speech, some experts say. After all, it's hard for companies to crack down when they can't see what's being shared in encrypted messages, or when photos and videos disappear after 24 hours. And while Facebook and others are investing in AI to spot and remove messages that violate their online rules, they still face a tough road ahead.

"The companies are getting smarter at using artificial intelligence to identify egregious hate speech or extreme content, but the technology is far from perfect, meaning users get frustrated when there are false positives," said Claire Wardle, the executive director of First Draft, a nonprofit aimed at tackling misinformation. "All in all, these shifts present a number of challenges for a number of different technology companies."

The rise of Stories and messaging

Facebook-owned Instagram introduced Stories two years ago, before Facebook added the Snapchat-like feature to WhatsApp, Messenger and its main social network. Facebook now expects ephemeral content and messaging to play much bigger roles in its future, Zuckerberg said in October during the company's third-quarter earnings call.

Instagram Stories hit 400 million daily active users in June, up 60 percent from the same month last year. WhatsApp's version of Stories, called Status, hit 450 million daily active users in May, up 80 percent from July 2017. Facebook and Messenger Stories have 300 million daily active users. (Facebook has shared user numbers for Stories at different times throughout the year.) Zuckerberg also said in October that he thought other social media sites, including Twitter, Pinterest and LinkedIn, would introduce their own versions of Stories.

There are various reasons why Stories and messaging are becoming more popular. "People feel more comfortable being themselves when they know their content will only be seen by a smaller group and when their content won't stick around forever," Zuckerberg told analysts in October. A November report by eMarketer called out two other reasons: people are sharing more videos and photos instead of text, and they want to broadcast more intimate moments to a smaller audience.
Social media's dark side may also be fueling the shift, said Wardle, citing "a chilling effect caused by increased levels of harassment online." Facebook and Instagram didn't respond to a request for comment.

Misinformation on messaging apps

The spread of false news on messaging apps, including WhatsApp, is already posing challenges for fact-checkers. WhatsApp was "flooded by falsehoods and conspiracy theories" during Brazil's runoff presidential election in October between Jair Bolsonaro and Fernando Haddad, Reuters reported. Some of the misinformation was spread in WhatsApp chat groups, which allow up to 256 people to join. "Such chat groups are much harder to monitor than the Facebook news feed or Google's search results," Cristina Tardáguila, director of the fact-checking platform Agência Lupa, co-wrote in an op-ed published in The New York Times.

Agência Lupa, the Federal University of Minas Gerais and the University of São Paulo studied more than 100,000 political images that circulated in 347 WhatsApp groups that were open to the public. They found that more than half of the most-shared images contained misleading information.

The consequences of failing to stop misinformation can be fatal. In India, false rumors of child abductions went viral on WhatsApp, leading mobs to murder at least two dozen innocent people, The New York Times reported in July. When asked how WhatsApp is combating misinformation, a spokesperson pointed to a blog post outlining WhatsApp's efforts. These included labeling and limiting the reach of forwarded messages; running full-page newspaper ads in English, Hindi and other languages with tips for spotting fake news; removing spam accounts; and working with governments.

Tracking ephemeral content

Some researchers fear they'll have a harder time studying the spread of misinformation as ephemeral content becomes more prevalent, since it won't be "online long enough to be flagged, deranked or removed," Wardle said. Filippo Menczer, a professor of informatics and computer science at Indiana University who's studied how automated Twitter accounts spread misinformation, said that because of the lack of available data, it's hard to tell whether fake news is being spread through ephemeral content. "Even the platforms themselves don't want to look inside that data because they're making promises to their customers that it's private," Menczer said. "By the time someone realizes that there's some terrible misinformation that's causing a genocide, it may be too late."

Snapchat, which started the whole ephemeral content craze, appears to have kept itself mostly free of fake news and election meddling. The company separates news into a public section called Discover. Snapchat's editors vet and curate what shows up in that section, making it difficult for misinformation to go viral on the platform. Instagram, on the other hand, is considered a "key battleground" in the fight against Russian troll propaganda. Russia's Internet Research Agency used Instagram to sow discord among Americans during the 2016 election and garnered more engagement there than it did on Facebook, researchers at the cybersecurity firm New Knowledge found.

As Facebook pushes more people to post ephemeral content in Stories, the uphill battle against false news could get even harder.

source
  19. It's rare that a new gadget these days serves a true need. Rather, it creates a want. You certainly don't need the Facebook Portal, whose primary purpose is to let you make the types of video calls you can already make on Facebook's Messenger app. And given the company's poor record on user privacy, do you even want it? Or a similar device from Amazon or Google?

The Portal is part of a new category of gadgets best described as screens for making video calls, listening to music and responding to voice commands for tasks you can also do on your phone. Unlike tablets, these microphone- and camera-equipped screens are meant to rest at a fixed location in your living room, kitchen or, gasp, your bedroom. If you are a tech trailblazer willing to try new things — and have no qualms about privacy — here are some things to consider.

Why have one

Facebook's $349 Portal Plus is a great device for making video calls using Messenger. It's also gigantic — 15.6 inches, measured diagonally, or roughly the size of the window on many microwave ovens. There's also a smaller sibling, simply called the Portal, at $199. Both models are designed to do one thing and do it well — let you chat with other people on their own Portal or through the regular Messenger app. Yes, the Portal can do a few things more, such as tap Amazon's Alexa voice assistant, but these features feel tacked on, much like trying to cook breakfast sausages with a toaster created just to cook hot dogs.

Unless you are in a long-distance relationship and want to spend hours each evening gazing into your sweetheart's eyes (while also getting dinner and laundry done), you can certainly live without one, just as you can live without a hot dog cooker. That's especially true if you are concerned about the number of screens in your home, especially screens that could be watching you.

Google's Home Hub ($129) and Amazon's Echo Show ($230) can do a lot more, but their video-calling capabilities aren't as good as the Portal's. With Home Hub, for instance, you can see the person calling you, but the device itself has no camera for two-way video. If video calling is your thing, you're better off with a Portal. The device's camera can recognize people in a room and follow them as they move around. So you can literally pace up and down while you argue with your mother. (Facebook says it doesn't use facial-recognition technology to identify individuals.) Portal also has a cute "story time" feature that adds face masks and other animation while you read to kids on the other side of the call. The Google and Amazon devices don't do either.

Siloed systems

All three devices allow you to add multiple users, so different people in your household can call their circle of friends. But you're locked into that company's messaging system. Try explaining to your 87-year-old grandfather why he can't FaceTime you on the Portal or Skype on the Home Hub. Hell will freeze over before you can get him to sign up for Facebook just to chat with his great-grandchildren. And you haven't even mentioned the Cambridge Analytica privacy scandal yet.

The good news is you can make calls from these devices to smartphones, though in the Portal's case, you need the device to tell the animated stories. Travelling parents likely won't be lugging one along to read to their kids at home. As for compatibility, Facebook's Messenger has more than 1 billion users, and many of your friends are likely already on it, at least in the United States.
But Portal doesn't work with Facebook's WhatsApp, which is popular overseas. Setting up the device is relatively simple.

The Home Hub works with Google's Duo messaging service, so friends and relatives will have to at least install the Duo app on their phones. It's also difficult to set up. After much cursing and online searches for the right settings, I still get error messages. Google press representatives didn't immediately respond to help requests.

On the Echo Show, the recipient of your call needs to have the Alexa app, if not an Echo device with a screen. You need to set it up on a phone first by giving Alexa access to your contacts list and making sure this person is on it. You can also call others on Skype after connecting your Skype account.

Privacy matters

Clearly a lot of thought went into making the Portal optimal for connecting with friends and family. It's just a shame that it comes in a year full of privacy scandals for the company. True, Google has had its share of privacy issues this year, including an Associated Press report that it tracks people's location even when they tell it not to. But with Facebook, it's something new every few weeks, culminating with revelations this week from The New York Times that Facebook shared user data with more than 150 other companies without people's explicit permission.

While it's possible to use Messenger on the phone without having a Facebook account, Portal still requires one. Facebook says it's to enable other features, such as displaying Facebook photos on your Portal. But these features aren't essential to video calling — just essential to fold the Portal experience into Facebook's massive advertising system. Facebook says it doesn't listen to, view, record or store the content of your calls, so if you believe Facebook — and that's a big if — it's not going to try to target ads based on whom you talk to or what's hanging on your walls in the background. But other information, such as the length and frequency of your calls, is fair game and may be used for advertising purposes — such as ads for video-calling services.

No doubt to address privacy concerns, Facebook has included a plastic cover for the Portal's camera. You can also turn it off using a button. But promises and plastic covers aren't enough when Facebook has shown carelessness with its users' data over and over.

source
  20. Deleting your Facebook account isn't a bad New Year resolution – the company has proven yet again it violated public trust

'Time and time again Facebook has made it abundantly clear that it is a morally bankrupt company that is never going to change unless it is forced to.' Photograph: Saul Loeb/AFP/Getty Images

Prepare yourself for an overwhelming sense of deja vu: another Facebook privacy "scandal" is upon us. A New York Times investigation has found that Facebook gave Netflix, Spotify and the Royal Bank of Canada (RBC) the ability to read, write and delete users' private messages. The Times investigation, based on hundreds of pages of internal Facebook documents, also found that Facebook gave 150 partners more access to user data than previously disclosed. Microsoft, Sony and Amazon, for example, could obtain the contact information of their users' friends.

Netflix, Spotify and RBC have all denied doing anything nefarious with your private messages. Netflix tweeted that it never asked for the ability to look at them; Spotify says it had no idea it had that sort of access; RBC disputes it even had the ability to see users' messages. Whether they accessed your information or not, however, is not the point. The point is that Facebook should never have given them this ability without getting your explicit permission to do so.

Explicit being the key word here. After all, technically speaking, you probably did give Facebook permission to do whatever it wanted with your personal information. Somewhere along the line you probably clicked "accept" to 25m undecipherable terms and conditions the company knew full well you weren't going to read, let alone understand. In a tone-deaf response to the Times investigation, the tech giant explained: "None of these partnerships or features gave companies access to information without people's permission, nor did they violate our 2012 settlement with the FTC." Perhaps not, but they did violate public trust.

The Times' new report caps off a very bad year for Facebook when it comes to public trust. Let's just recap a few of the bigger stories, shall we?

March: The Observer reveals that Cambridge Analytica harvested the data of millions of Facebook users without their consent for political purposes. It is also revealed that Facebook had been keeping records of Android users' phone calls and texts.

April: It is revealed that Facebook was in secret talks with hospitals to get them to share patients' private medical data.

September: Hackers gain access to around 30m Facebook accounts.

November: Facebook acknowledges it didn't do enough to stop its platform being used as a tool to incite genocidal violence in Myanmar. A New York Times report reveals the company hired a PR firm to try to discredit critics by claiming they were agents of George Soros.

December: Facebook admits it exposed the private photos of 6.8 million users to apps that weren't authorized to view them.

If you're still on Facebook after everything that has happened this year, you need to ask yourself why. Is the value you get from the platform really worth giving up all your data for?
More broadly, are you comfortable being part of the reason that Facebook is becoming so dangerously powerful? Are you comfortable being on a platform that has, among other things, helped incite genocide in Myanmar?

In March, following the Cambridge Analytica scandal, Facebook put out print ads stating: "We have a responsibility to protect your information. If we can't, we don't deserve it." I think they've proved by now that they don't deserve it. Time and time again Facebook has made it abundantly clear that it is a morally bankrupt company that is never going to change unless it is forced to.

What's more, Facebook has made it very clear that it thinks it can get away with anything because its users are idiots. Zuckerberg famously called the first Facebook users "dumb fucks" for handing their personal information over to him; his disdain for the people whose data he deals with doesn't appear to have lessened over time.

To be clear, I'm not urging everyone to delete Facebook. For some people Facebook really is a valuable tool. Further, unless all of its 2 billion users delete it en masse, Facebook's abuse of power isn't a problem that we can solve as individuals. Technology giants must be regulated. Having said that, if Facebook doesn't provide you with an invaluable service, I'd urge you to extricate yourself from the company as much as possible. If you're looking for a New Year resolution, deleting Facebook isn't a bad one. After all, if we all continue using Facebook after it betrays our trust time and time again, then maybe Zuck is right. We are dumb fucks.

source
  21. Facebook is reportedly working on its own digital currency, despite having banned ads for cryptocurrency earlier this year, according to Bloomberg.

Citing Facebook insiders, Bloomberg said the company is developing a type of 'stablecoin' – a digital currency tied to the US dollar, and in theory more stable than cryptocurrencies like Bitcoin as a result. According to the article, the new currency would allow users to transfer money via Facebook's mobile messaging service WhatsApp. (A toy sketch of how a dollar-pegged stablecoin works follows after this article.)

Lack of public trust

A company spokesperson told Bloomberg that, "like many other companies, Facebook is exploring ways to leverage the power of blockchain technology." They went on to say: "This new small team is exploring many different applications. We don't have anything further to share."

Although Facebook has so far been unwilling to shed much light on its plans to develop a digital currency, Bloomberg suggested it would likely launch in India first – after all, India is a world leader in remittances, with people sending home a huge $69 billion in 2017.

Whether people will accept a Facebook-led digital currency is uncertain, particularly as public trust in the company has dipped yet again following the new data privacy scandal that emerged earlier this week.

source
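Facebook has said nothing about how its coin would actually work, so the following Kotlin sketch illustrates only the general fiat-collateralised stablecoin model the article alludes to: tokens are minted one-for-one against dollars deposited with a custodian and burned on redemption, which is what holds the market price near $1. Every name and design choice below is hypothetical, not Facebook's.

    // Toy model of a fiat-collateralised stablecoin ledger; illustrative only.
    class StablecoinLedger {
        private var reserveUsdCents = 0L                     // dollars held by a custodian
        private val balances = mutableMapOf<String, Long>()  // token balances, in cents

        val totalSupply: Long
            get() = balances.values.sum()

        // Minting: a user deposits dollars; the ledger issues tokens 1:1.
        fun mint(account: String, usdCents: Long) {
            require(usdCents > 0) { "amount must be positive" }
            reserveUsdCents += usdCents
            balances[account] = (balances[account] ?: 0L) + usdCents
            check(totalSupply == reserveUsdCents) { "peg invariant broken" }
        }

        // Redemption: tokens are burned and the same dollar amount is paid out.
        fun redeem(account: String, usdCents: Long) {
            val balance = balances[account] ?: 0L
            require(usdCents in 1..balance) { "insufficient balance" }
            balances[account] = balance - usdCents
            reserveUsdCents -= usdCents
            check(totalSupply == reserveUsdCents) { "peg invariant broken" }
        }

        // Transfers (e.g. a WhatsApp remittance) move tokens without touching reserves.
        fun transfer(from: String, to: String, usdCents: Long) {
            val balance = balances[from] ?: 0L
            require(usdCents in 1..balance) { "insufficient balance" }
            balances[from] = balance - usdCents
            balances[to] = (balances[to] ?: 0L) + usdCents
        }
    }

The invariant checked after every mint and redeem (token supply equals dollar reserves) is the whole trick: as long as anyone can redeem a token for exactly one dollar, arbitrage keeps the trading price pinned to the peg.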
  22. Facebook CEO Mark Zuckerberg testifies before Congress earlier this year. (Screenshot via YouTube)

Facebook is in the news again, but not for the reasons it wants to be. The New York Times published an extensive report on the social giant's privacy practices, this time focusing on data-sharing agreements with more than 150 partners, including tech giants like Amazon, Microsoft, Spotify and Netflix. The Times reports that these agreements, some of which are still active today, gave companies from a variety of industries more access to user data than previously known and essentially exempted partners from its privacy rules.

Steve Satterfield, Facebook's director of privacy and public policy, told the Times that its data-sharing partnerships didn't violate users' privacy and required companies to follow Facebook policies. Satterfield also insisted that the agreements don't violate a consent agreement with the U.S. Federal Trade Commission that forbade Facebook from sharing user data without permission.

A few of the companies — Amazon, Microsoft and Yahoo — told the Times they used the data appropriately but declined to give further details.

Amazon issued the following statement in response to our inquiry about the story: "Amazon uses APIs provided by Facebook in order to enable Facebook experiences for our products. For example, giving customers the option to sync Facebook contacts on an Amazon Tablet. We use information only in accordance with our privacy policy."

Microsoft told GeekWire that it respected all user preferences during its partnership with Facebook. Officials told the Times the company used the data to build profiles of Facebook users on Microsoft servers; the data wasn't used for advertising and has since been deleted, the company told the Times.

Netflix issued the following statement on its relationship with Facebook: "Over the years we have tried various ways to make Netflix more social. One example of this was a feature we launched in 2014 that enabled members to recommend TV shows and movies to their Facebook friends via Messenger or Netflix. It was never that popular so we shut the feature down in 2015. At no time did we access people's private messages on Facebook, or ask for the ability to do so."

The report notes the importance of user data in this day and age, calling it the "oil of the 21st century, a resource worth billions to those who can most effectively extract and refine it." Companies are spending billions on data, and this appetite has been a boon for Facebook and Google in particular, given the vast amounts of information they hold on customers.

It's been a scandal-ridden year for Facebook. Reports surfaced about the personal data of millions of users being illegitimately shared with Republican-backed political consultancy Cambridge Analytica, and about the company's attempts to deflect blame for its role in Russia's attempts to influence the 2016 U.S. presidential election. Facebook stock was down slightly on Wednesday morning.

source
  23. It's a new week, which means it's time for another new privacy controversy for Facebook. This time around, the company has gotten itself into a data-sharing controversy, revealed by The New York Times. The publication claims Facebook made deals with some of the world's biggest tech giants – including Apple, Amazon, Microsoft, Netflix, Spotify, Yahoo, and Russian search engine Yandex – that gave these companies much more power than they needed. The deals were made over the years, dating back to as early as 2010. Facebook claims most of these deals have come to an end, though some – including the ones with Apple and Amazon – are still active.

The data-sharing deals are actually quite scary, though some of the companies involved were quick to claim that they were never aware of the excess access, or never misused the data of Facebook users in any way. Apple, for example, had access to Facebook users' contacts and calendar entries even if the user didn't agree to let Facebook share data with third parties. Apple claimed the company wasn't aware of the special access at all.

Microsoft, on the other hand, had access to the names and profile data of a Facebook user's friends for Bing. The software giant claimed it has already deleted the data it accessed and never used the data for advertising purposes, with Facebook claiming the search engine only had access to user data that was "public". Apps that allowed users to access their Facebook account also had special access on Windows Phone devices, at least according to Facebook itself. And then there's Amazon, which had access to the names and contact information of users, though that partnership is apparently in the process of shutting down.

But the scariest of them all: Facebook gave some of its partners – like Spotify, Netflix, and the Royal Bank of Canada – access to users' private messages on Facebook. That access would allow these services to read users' private messages, write them, and even delete them. Netflix was quick to respond to the report, stating that the company did not access people's private messages on Facebook, nor ever asked Facebook for the special access.

"Facebook's partners don't get to ignore people's privacy settings, and it's wrong to suggest that they do," the company's director of privacy told The New York Times. The company went on to emphasize that it did not violate any of the users' privacy settings, stating "none of these partnerships or features gave companies access to information without people's permission, nor did they violate our 2012 settlement with the FTC."

At this point, all of this is just one big mess for Facebook. Not only will the latest controversy affect user trust, which has already been dropping rapidly, but it will also affect the company's relationships with companies like Microsoft, Apple, and Amazon. This is not only about Facebook: it's also going to affect the reputation of all the companies involved, even though most have already denied being aware of, or abusing, the special access given by Facebook. It really makes you wonder whether Facebook even values your personal data. If selling your personal data was bad, giving away the same data is embarrassingly disastrous.

source
  24. WASHINGTON (Reuters) - The attorney general for Washington, D.C. said on Wednesday the nation's capital city had sued Facebook over the scandal that broke earlier this year involving Cambridge Analytica's use of data from the social-media giant.

"Facebook failed to protect the privacy of its users and deceived them about who had access to their data and how it was used," said Attorney General Karl Racine in a statement. "Facebook put users at risk of manipulation by allowing companies like Cambridge Analytica and other third-party applications to collect personal data without users' permission."

The lawsuit comes as Facebook faces new reports that it shared its users' data without their permission. Cambridge Analytica, which worked for President Donald Trump's political campaign at one point, gained access to personal data from tens of millions of Facebook's users. The D.C. attorney general says in the suit that this exposed nearly half of the district's residents' data to manipulation for political purposes during the 2016 presidential election, and alleges Facebook's "lax oversight and misleading privacy settings" had allowed the consulting firm to harvest the information.

Facebook did not immediately respond to a request for comment.

Source
  25. Facebook has defended its data-sharing practices with other technology firms while at the same time admitting that lax API control may have exacerbated what has already been a trying year for the social networking giant.

On Tuesday, Konstantinos Papamiltiadis, Director of Developer Platforms and Programs, said in a blog post on Facebook that the recently exposed data-sharing practices were all about "helping people," and that nothing was done without a measure of user consent. The note was published as the social media giant's response to a New York Times report this week which claimed that for years Facebook has maintained "special arrangements" with major technology companies that gave them access to intrusive data on users. According to the NYT, these businesses became exempt from standard privacy rules due to these inside deals.

Microsoft's Bing was able to see the names of almost all Facebook users' friends without consent; Netflix and Spotify were able to read private messages; Yahoo could view streams of friends' posts, and Amazon was able to obtain usernames and contact information through friend connections.

Papamiltiadis said in response that these features -- many of which are now defunct and no longer in use -- were used for purposes including receiving Facebook notifications while in an active browsing session; integration for song recommendations; creating search results based on the "public information" friends have shared; and uploading contacts from Facebook to email services.

"Take Spotify for example," the post reads. "After signing in to your Facebook account in Spotify's desktop app, you could then send and receive messages without ever leaving the app. Our API provided partners with access to the person's messages in order to power this type of feature." (A hypothetical sketch of this kind of scope-gated access follows at the end of this article.)

"Over the years we have tried various ways to make Netflix more social," a Netflix spokesperson told ZDNet. "One example of this was a feature we launched in 2014 that enabled members to recommend TV shows and movies to their Facebook friends via Messenger or Netflix. It was never that popular so we shut the feature down in 2015. At no time did we access people's private messages on Facebook or ask for the ability to do so."

"Amazon uses APIs provided by Facebook in order to enable Facebook experiences for our products," Amazon said in a statement. "For example, giving customers the option to sync Facebook contacts on an Amazon Tablet. We use information only in accordance with our privacy policy."

Microsoft added that "all user preferences" were respected in relation to its dealings with Facebook.

While these deals were designed to benefit all companies involved -- potentially as many as 150 firms in total -- and generate revenue, in light of the Cambridge Analytica scandal such deals are now coming back to haunt Facebook at the worst possible time. According to the publication, some of these deals date back as far as 2010, and some were still active this year. Questions have now also been raised as to whether the tech giant has broken a 2011 consent agreement with the US Federal Trade Commission (FTC). Facebook really doesn't need to raise the ire of regulators any further this year, and breaching the agreement is a suggestion Papamiltiadis vehemently denies, as users would need to sign in with their Facebook account to agree to the data-sharing.
The executive defended the deals, blurring the lines between the social network and so-called "integration partners" by saying the data-sharing practices were established for the purpose of creating more "social experiences." Papamiltiadis added that instant personalization was closed down in 2014, and many other partnerships were wound down over 2018.

"We recognize that we've needed tighter management over how partners and developers can access information using our APIs," the executive said. "We're already in the process of reviewing all our APIs and the partners who can access them."

The Amazon, Apple, Tobii, Alibaba, Mozilla, and Opera integrations are still in effect. Facebook says there is "no evidence" that the instant personalization data-sharing agreements and APIs were abused, but the APIs were left in place after the program was shut down.

"We've taken a number of steps this year to limit developers' access to people's Facebook information, and as part of that ongoing effort, we're in the midst of reviewing all our APIs and the partners who can access them," Papamiltiadis said. "This is important work that builds on our existing systems that track APIs and control who can access them."

That is all well and good, but considering how much criticism Facebook has faced in the past 12 months over its data-sharing practices, perhaps it is now time to drop the vagueness and soft approach to such reports, and simply be transparent about which deals involved whom, and when. Such transparency wouldn't necessarily be demanded of every company facing allegations of poor data protection and inappropriate data sharing, but in Facebook's case, trust has been undermined again and again over such a short period that more radical, transparent action may be needed.

As noted by Alex Stamos, Facebook's former Chief Security Officer (CSO), Facebook's response "blends all kinds of different integrations and models into a bunch of prose."

"There very well could be serious privacy problems in the Times' story, but it is hard to tell what is really problematic because they intentionally blur the lines between FB allowing 3rd party clients/OS integrations (like Apple) with data actually going to other companies," Stamos added. "Putting your response in a wall of PR-text aimed at end consumers just isn't effective."

Earlier this month, Facebook revealed the existence of a bug which may have permitted unauthorized access to the private photos of 6.8 million users. The leak was due to an API flaw present in backend code between September 13 and September 25, 2018. It is believed up to 1,500 apps built by 876 developers were involved in the security lapse.

It seems that time and time again this year, Facebook has taken a battering when it comes to user trust and its reputation as a social media platform. As a damage control measure, back in May, Facebook CEO Mark Zuckerberg announced a privacy tool called "clear history" which would give users the option to wipe out their browsing activities on the platform. The tool is expected to be released next year.

source
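The integrations Stamos distinguishes all turn on the same question: which scopes a partner's token was actually granted, and whether the API re-checks them on every call. Purely as a hypothetical illustration of that mechanism (none of these names or types are Facebook's real API; they are invented for this sketch), a scope gate in Kotlin might look like this:

    // Entirely hypothetical OAuth-style scope gating; invented names, not Facebook's API.
    enum class Scope { PUBLIC_PROFILE, FRIEND_LIST, READ_MESSAGES, WRITE_MESSAGES }

    data class PartnerToken(val partner: String, val userId: String, val granted: Set<Scope>)

    class MessageApi(private val store: Map<String, List<String>>) {
        // Every privileged call re-checks the scopes the user actually granted.
        fun readInbox(token: PartnerToken): List<String> {
            require(Scope.READ_MESSAGES in token.granted) {
                "${token.partner} was not granted READ_MESSAGES by user ${token.userId}"
            }
            return store[token.userId].orEmpty()
        }
    }

    fun main() {
        val api = MessageApi(mapOf("alice" to listOf("hi", "see you at 8")))
        val musicApp = PartnerToken("music-app", "alice", setOf(Scope.PUBLIC_PROFILE))
        // Fails: READ_MESSAGES was never granted to this partner.
        runCatching { api.readInbox(musicApp) }.onFailure { println(it.message) }
    }

In this toy model the partner holding only PUBLIC_PROFILE is refused; the NYT's allegation is, in effect, that some partners held tokens whose granted set was broader than anything users knowingly approved.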