nsane.forums


Showing results for tags 'facebook'.



Found 37 results

  1. Although Facebook and other tech companies have been able to censor so-called hate speech, they have been unable to prevent the spread of Islamic terrorist propaganda on their sites, according to a new report. “Despite promises by digital platforms to curb material supporting terrorism, Jihadi groups such as ISIS and other terrorist organizations continue to rely upon Google, Facebook, and Instagram to sow fear, spread hate and recruit members,” according to the Digital Citizens Alliance (DCA) report. DCA’s report illustrates how ineffective Facebook has been at preventing the spread of terrorist propaganda, especially from radical Islamic terrorist organizations like the Islamic State. The report shows a gallery of photos that include beheadings, prisoners being set on fire and much more. DCA worked with the Global Intellectual Property Enforcement Center (GIPEC) to compile the report. Among the many examples, the report shows a pro-ISIS post on Facebook from 2016 that reads “ALLAHU AKBAR,” paired with an image of ISIS terrorists waving their black flag, which remained online until at least May 1, 2018. Facebook censored 2.5 million pieces of hate speech, 38 percent of which was flagged automatically by its technology, in the first three months of 2018 alone. Terrorists use a slew of methods to get around tech companies’ blocking of their propaganda, including using web archives to preserve content before it’s deleted from the internet, as The Daily Caller News Foundation previously reported. (RELATED: ISIS Uses Internet Archives To Spread Propaganda, Report Finds) “I think what we see is that the platforms are stuck in a loop when it comes to offensive content,” Tom Galvin, executive director of DCA, said on Friday, The Hill reported. He added that although the platforms have promised to fix the problem, it’s really not going away. “Either their [Facebook, Google, etc.]
systems aren’t as good as they say collectively or it’s not the priority that they claim it is.” Galvin said the business model for the tech companies relies on not cracking down on extremist content. “Their business model is to collect as much information as possible,” Galvin said. “No matter what, they’ll always be in conflict with trying to correct bad content on their platform.” Facebook responded to the allegations, and stated it is taking measures to prevent the spread of radical Islamist propaganda. “There is no place for terrorists or content that promotes terrorism on Facebook or Instagram, and we remove it as soon as we become aware of it,” a Facebook spokesperson said in a statement, The Hill reported. “We know we can do more, and we’ve been making major investments to add more technology and human expertise, as well as deepen partnerships to combat this global issue.” < Here >
  2. I was curious how much Facebook usage would suffer after the Cambridge Analytica scandal broke in March, and here's a pretty good answer: According to SimilarWeb, total Facebook visits (both to Facebook.com and through its mobile apps) fell from 24 billion in March to 22.77 billion in April -- a drop of 1.3 billion visits, or 5.15% of the total. Obviously this drop isn't completely attributable to the CA scandal. Facebook also saw a significant drop in February, compared to the previous three months. But that February dip is likely due in great part to the changes the company made to its timeline feed in January. What does a drop of 1.3 billion monthly visits mean in terms of lost revenue, or users? We probably won't know until Facebook's next quarterly corporate report, but here's a very rough, casual estimate: Assuming the average Facebook user visits the social network once a day, that would suggest an average of about 43 million fewer users during that month. In recent months, Facebook has made about $5 per user, per month -- i.e. some $215 million in potential lost revenue. (Again, these are very primitive estimates -- what a VC friend likes to call "monkey math".) My main curiosity moving forward is whether this loss continues, or usage recovers. From past experience, social media scandals always involve millions vowing to Quit Facebook Forever (through announcements made on, well, Facebook) -- and then in successive months, most of those millions begrudgingly return. But this time, we seem to be in uncharted territory. < Here >
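The "monkey math" above can be reproduced in a few lines; the once-a-day visit assumption and the $5 monthly revenue-per-user figure are the article's own rough inputs, not official numbers:

```python
# Back-of-envelope estimate of Facebook's April usage drop,
# using the article's rough assumptions throughout.

visit_drop = 1_300_000_000   # ~1.3 billion fewer visits in April (SimilarWeb)
days_in_month = 30           # assumption: the average user visits once per day
arpu_monthly = 5             # assumption: ~$5 revenue per user per month

# A month's worth of lost daily visits, divided by ~30 days,
# approximates the number of users who went missing.
lost_users_millions = round(visit_drop / days_in_month / 1e6)    # ~43 million
lost_revenue_millions = lost_users_millions * arpu_monthly       # ~$215 million

print(f"~{lost_users_millions}M fewer users, "
      f"~${lost_revenue_millions}M in potential lost revenue")
```

As the article stresses, this is a sanity check rather than a forecast: it ignores regional revenue differences and users who visit more or less than once a day.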
  3. Facebook has helped introduce thousands of Islamic State of Iraq and the Levant (Isil) extremists to one another, via its 'suggested friends' feature, it can be revealed. The social media giant - which is already under fire for failing to remove terrorist material from its platform - is now accused of actively connecting jihadists around the world, allowing them to develop fresh terror networks and even recruit new members to their cause. Researchers, who analysed the Facebook activities of a thousand Isil supporters in 96 countries, discovered users with radical Islamist sympathies were routinely introduced to one another through the popular 'suggested friends' feature. Using sophisticated algorithms, Facebook is designed to connect people who share common interests. The site automatically collects a vast amount of personal information about its users, which is then used to target advertisements and also direct people towards others on the network they might wish to connect with. But without effective checks on what information is being shared, terrorists are able to exploit the site to contact and communicate with sympathisers and supporters. The extent to which the ‘suggested friends’ feature is helping Isil members on Facebook is highlighted in a new study, the findings of which will be published later this month in an extensive report by the Counter Extremism Project, a non-profit that has called on tech companies to do more to remove known extremist and terrorist material online. Gregory Waters, one of the authors of the report, described how he was bombarded by suggestions for pro-Isil friends after making contact with one active extremist on the site. Even more concerning was the response his fellow researcher, Robert Postings, got when he clicked on several non-extremist news pages about an Islamist uprising in the Philippines. Within hours he had been inundated with friend suggestions for dozens of extremists based in that region.
Mr Postings said: "Facebook, in their desire to connect as many people as possible have inadvertently created a system which helps connect extremists and terrorists.” Once initial introductions are made, the failure of Facebook to tackle extremist content on the site means Jihadists can quickly radicalise susceptible targets. In one example uncovered by the researchers, an Indonesian Isil supporter sent a friend request to a non-Muslim user in New York in March 2017. During the initial exchange the American user explained that he was not religious, but had an interest in Islam. Over the following weeks and months the Indonesian user began sending increasingly radical messages and links including pro-Isil propaganda, all of which were liked by his target. Mr Postings said: “Over a period of six months the [US based user] went from having no clear religion to becoming a radicalised Muslim supporting Isil.” The study also examined the extent to which Facebook was failing to tackle terrorist material on its site. Of the 1,000 Isil-supporting profiles examined by researchers, fewer than half of the accounts had been suspended by Facebook six months later. Mr Postings said: "Removing profiles that disseminate IS propaganda, calls for attacks and otherwise support the group is important...the fact that the majority of pro-IS profiles in this database have gone unremoved by Facebook is exceptionally concerning." Even when terrorist material was identified and the offending posts removed, the user was often allowed to remain on the site. There have also been numerous examples of pro-Isil accounts being reinstated after the user complained to moderators about their suspension. In one case a British terror suspect had his Facebook account reinstated nine times after complaining, despite being accused of having posted sick Isil propaganda videos.
Mr Waters said: "This project has laid bare Facebook's inability or unwillingness to efficiently address extremist content on their site. "The failure to effectively police its platform has allowed Facebook to become a place where extensive IS-supporting networks exist, propaganda is disseminated, people are radicalised and new supporters are recruited." Mr Postings added: “Even when profiles or content are removed, it is not always done fast enough, allowing Isil content to be widely shared and viewed before being removed.” Mr Waters said: “The fact that Facebook's own recommended friends algorithm is directly facilitating the spread of this terrorist group on its site is beyond unacceptable." Simon Hart, a Conservative MP who sits on the Culture, Media and Sport Select Committee, said: "The idea that Facebook is inadvertently providing an introduction service for terrorists is quite extraordinary. It is another terrifying example of the unintended consequences of this sort of technology. "If you design a system for one thing and it becomes another it is hard to police. "Nobody will have set out to provide a network for terrorists to connect, but the important thing is how Facebook responds now this matter has been raised with them." A spokesman for Facebook said: "There is no place for terrorists on Facebook. We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism. "Our approach is working – 99 per cent of ISIS and Al Qaeda-related content we remove is found by our automated systems. But there is no easy technical fix to fight online extremism. "We have and will continue to invest millions of pounds in both people and technology to identify and remove terrorist content." < Here >
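The mechanism the researchers describe, recommendations driven by shared interests and mutual connections, can be sketched in toy form. This is an illustration of the general idea only, not Facebook's actual algorithm; the scoring weights and profile fields are assumptions:

```python
# Toy "suggested friends" ranking: users who share more interests and
# mutual friends with you rank higher. This is why interacting with one
# account in a cluster can surface many more accounts from that cluster.

def suggest_friends(user, profiles, top_n=3):
    """Rank non-friends of `user` by shared interests plus mutual friends."""
    scores = {}
    for other, data in profiles.items():
        if other == user or other in profiles[user]["friends"]:
            continue
        shared = profiles[user]["interests"] & data["interests"]
        mutual = profiles[user]["friends"] & data["friends"]
        scores[other] = 2 * len(shared) + len(mutual)   # assumed weighting
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

profiles = {
    "alice": {"interests": {"news", "hiking"}, "friends": {"bob"}},
    "bob":   {"interests": {"news"},           "friends": {"alice", "carol"}},
    "carol": {"interests": {"news", "hiking"}, "friends": {"bob"}},
    "dave":  {"interests": {"chess"},          "friends": set()},
}
print(suggest_friends("alice", profiles))  # carol ranks first
```

The design trade-off the article highlights falls out directly: the same similarity signal that surfaces benign like-minded users will just as readily cluster extremist ones, unless the inputs are filtered first.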
  4. For many Sri Lankans, Facebook is the internet, but while the social media platform has belatedly taken action on hate speech, wider society must also take responsibility. “Kill all Muslims, do not spare even an infant, they are dogs,” a Facebook status, white Sinhalese text against a fuchsia background, screamed on 7 March 2018. Six days later – after hundreds of Muslim families had watched their homes ransacked and their businesses set on fire – the post was still online, despite being reported for violating the company’s community standards. Facebook’s role in the anti-Muslim violence in Sri Lanka this March cannot be overstated. Posts spreading blatant misinformation about the community, inciting hate and violence, remained online for days after they were reported. This was one reason why the Sri Lankan government banned Facebook in March. To some Sri Lankans, Facebook is the internet. Affordable data packages, from mobile service providers who have identified its popularity, allow people across the socio-economic spectrum to access the platform. As its user base has grown and continues to grow, so has the likelihood of it being misused by those seeking to divide and harm. Though it is difficult to determine when it started, hate speech and content inciting violence towards minorities had been thriving on Facebook in Sri Lanka long before the posts linked to the violence in March. The constant stream of divisive, abusive content is a result of the deeply entrenched ethno-nationalist biases informing mainstream media, law enforcement, party politics and political leadership in the country. In addition, postwar narratives endorsed by the state, which herald military victory over terrorism, have been sweepingly reinterpreted as the majority’s dominance over a minority; spurring on those who believe in “protecting” the country that is solely “theirs”.
These old divides and perceptions, sewn into the country’s social fabric, are now updated for the digital age on Facebook. Violence by sections of the majority community against minorities forms a large part of Sri Lanka’s history, in a pattern that repeats itself with startling precision. A common factor in all of the violence is successive governments who are afraid to call out and condemn extremist ethno-religious nationalism for the toxic influence that it is. Focusing on Facebook’s role in the violence should not be an excuse to sideline what spurs it on; the virus of extremist Sinhala-Buddhist ethno-nationalism that sees little to no condemnation from senior Buddhist monks or political leaders. The assertion from senior government officials, therefore, that the ban on Facebook put a stop to the violence is inaccurate. Data scientists and researchers have flagged that the block on social media did little to deter the mobs, who found their way around it with ease. While not entirely blameless, Facebook also became an easy scapegoat. In the immediate aftermath of the violence in March, Facebook representatives met with members of government and promised to better address the hate speech that the platform had been used to spread. In further communications with local civil society, they have promised to increase the number of Sinhala-language content reviewers. This is something that local activists, who have been documenting the generation and spread of and engagement with hate speech on the platform, have highlighted for years. While it is yet to be seen how these commitments will play out, they still signal long-term action from only one of the responsible parties. It’s easier to push Facebook to take down a violent post than it is to interrogate the deeply racist Sri Lankan “identity” that the extremists claim to represent – one that leaves out large parts of the population. 
It’s quicker than questioning the actions and inactions of the leaders of the religion the extremists claim to represent. It’s simpler than examining the constitutionally protected “foremost place” that Buddhism enjoys in Sri Lanka. Anti-minority violence in Sri Lanka cannot be curtailed solely with the faster removal of content inciting hate and violence from Facebook, as the government would like to believe. The violence in March carried on for as long as it did for reasons that have gone unaddressed for decades – the complacency, and complicity, of law enforcement when faced with certain perpetrators and the inability of the state to condemn the actions of these repeat offenders. Facebook has a responsibility to ensure that a user’s experience on its platforms is protected by the terms laid out in the company’s community standards. At the same time, the government of Sri Lanka has a greater responsibility to protect the rights of its citizens, especially those who are the most vulnerable. One of the two key actors allowing hate speech to seed and grow in Sri Lanka has done the bare minimum in acknowledging its failings; it is about time – indeed several decades overdue – that the other did the same. < Here >
  5. Cambridge Analytica, the Trump-affiliated data firm at the center of Facebook's worst privacy scandal in history, is declaring bankruptcy and shutting down. The London firm blamed "unfairly negative media coverage" and said it has been "vilified" for actions it says are both legal and widely accepted as part of online advertising. Cambridge Analytica said it has filed papers to begin insolvency proceedings in the U.K. and will seek bankruptcy protection in a federal court in New York. "The siege of media coverage has driven away virtually all of the company's customers and suppliers," Cambridge Analytica said in a statement. "As a result, it has been determined that it is no longer viable to continue operating the business." Facebook said it will keep looking into data misuse by Cambridge Analytica even though the firm is closing down. And Jeff Chester of the Center for Digital Democracy, a digital advocacy group in Washington, said criticisms of Facebook's privacy practices won't go away just because Cambridge Analytica has. "Cambridge Analytica's practices, although it crossed ethical boundaries, is really emblematic of how data-driven digital marketing occurs worldwide," Chester said. "Rather than rejoicing that a bad actor has met its just reward, we should recognize that many more Cambridge Analytica-like companies are operating in the conjoined commercial and political marketplace." Cambridge Analytica, whose clients included Donald Trump's 2016 presidential campaign, sought information on Facebook users to build psychological profiles on a large portion of the U.S. electorate. The company was able to amass the database quickly with the help of an app that purported to be a personality test.
The app collected data on tens of millions of people and their Facebook friends, even those who did not download the app themselves. Facebook has since tightened its privacy restrictions, and CEO Mark Zuckerberg testified before Congress for the first time in two days of hearings. Facebook also has suspended other companies for using similar tactics. One is Cubeyou, which makes personality quizzes. That company has said it did nothing wrong and is seeking reinstatement. Cambridge Analytica suspended CEO Alexander Nix in March pending an investigation after Nix boasted of various unsavory services to an undercover reporter for Britain's Channel 4 News. Channel 4 News broadcast clips that showed Nix saying his data-mining firm played a major role in securing Trump's victory in the 2016 presidential election. Acting CEO Alexander Tayler also stepped down in April and returned to his previous post as chief data officer. Cambridge Analytica has denied wrongdoing, and Trump's campaign has said it didn't use Cambridge's data. On Wednesday, Cambridge Analytica said an outside investigation it commissioned concluded the allegations were not "borne out by the facts." Facebook's audit of the firm has been suspended while U.K. regulators conduct their own probe. But Facebook says Cambridge Analytica's decision to close "doesn't change our commitment and determination to understand exactly what happened and make sure it doesn't happen again." Cambridge Analytica has said it is committed to helping the U.K. investigation. But the office of U.K. Information Commissioner Elizabeth Denham said in March that the firm failed to meet a deadline to produce the information requested.
Denham said the prime allegation against Cambridge Analytica is that it acquired personal data in an unauthorized way, adding that data protection law requires services like Facebook to have strong safeguards against misuse of data. Source
  6. Mark Zuckerberg, Facebook CEO, today announced the site would introduce a feature which will grant users a measure of control over their data — specifically, giving them control over how much information from their history is shared with third-party apps. The feature, appropriately named “Clear History,” will flush your browsing history on Facebook, including which websites you’ve visited from Facebook and what ads you’ve clicked on. First announced at Facebook’s F8 conference this afternoon, the feature is an obvious response to the backlash Facebook’s gotten from just about everyone following the Cambridge Analytica scandal. Zuckerberg himself said in a post today that he gave a rather foggy response to Congressional interrogation over just how much control Facebook’s users have over their own data. Zuckerberg said in the same post this feature is an attempt to redress that issue: Once we roll out this update, you’ll be able to see information about the apps and websites you’ve interacted with, and you’ll be able to clear this information from your account. You’ll even be able to turn off having this information stored with your account. Of course, Facebook won’t completely stop collecting data on you, even if you do use this option. According to Erin Egan, FB’s chief privacy officer: We’ll still provide apps and websites with aggregated analytics – for example, we can build reports when we’re sent this information so we can tell developers if their apps are more popular with men or women in a certain age group. We can do this without storing the information in a way that’s associated with your account, and as always, we don’t tell advertisers who you are. Zuckerberg also had the temerity to say Facebook will be a little less personal for you without the use of cookies. Under the circumstances, Mark, I think that’s a trade-off some users are willing to accept. < Here >
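The aggregated-analytics idea Egan describes, reporting which demographics use an app without keeping events tied to individual accounts, can be sketched in a few lines. The event fields and demographic buckets here are illustrative assumptions, not Facebook's actual schema:

```python
from collections import Counter

# Sketch of account-free aggregation: only per-bucket counts are retained,
# so the report can say "more popular with women aged 25-34" without
# storing which specific users generated the events.

def aggregate_events(events):
    """Tally app events by (gender, age_group), discarding user identity."""
    report = Counter()
    for event in events:
        bucket = (event["gender"], event["age_group"])
        report[bucket] += 1   # the user_id is never stored in the report
    return dict(report)

events = [
    {"user_id": "u1", "gender": "f", "age_group": "25-34"},
    {"user_id": "u2", "gender": "m", "age_group": "25-34"},
    {"user_id": "u3", "gender": "f", "age_group": "25-34"},
]
print(aggregate_events(events))  # {('f', '25-34'): 2, ('m', '25-34'): 1}
```

The point of the sketch is the one Egan makes: the developer-facing report only needs the bucket totals, so the identifying fields can be dropped at aggregation time.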
  7. The social security numbers, addresses, phone numbers, and alleged credit card numbers of dozens of people are being advertised and sold on Facebook. The internet giant deleted some of them after Motherboard flagged the posts. Cybercriminals have posted sensitive personal information, such as credit card and social security numbers, of dozens of people on Facebook and have advertised entire databases of private information on the social platform. Some of these posts have been left up on Facebook for years, and the internet giant only acted on these posts after we told it about them. As of Monday, there were several public posts on Facebook that advertised dozens of people’s Social Security Numbers and other personal data. These weren’t very hard to find. It was as easy as a simple Google search. Most of the posts appeared to be ads made by criminals who were trying to sell personal information. Some of the ads are several years old, and were posted as “public” on Facebook, meaning anyone can see them, not just the author’s friends. Independent security researcher Justin Shafer alerted Motherboard to these posts Monday. “I am surprised how old some of the posts are and that it seems Facebook doesn’t have a system in place for removing these posts on their own,” Shafer told Motherboard in an online chat. “Posts that would have words flagged automatically by their system.” On Monday, Motherboard reached out to Facebook asking for comment, and we included a sample Google search to illustrate the problem. A Facebook spokesperson answered saying they’d look into it. As of this writing, we haven’t received a comment, but some of the posts that appeared in the Google search sample we flagged have been removed. Matt Mitchell, a digital security trainer, said that it should be “easy” for Facebook to stop and prevent these posts. 
“On their end it's pure laziness to wait for an abuse report to stop post that are following a doxing template,” Mitchell told Motherboard in an online chat. At least some of the data in these posts appears real. Motherboard was able to confirm the first four digits of the social security numbers, names, addresses, and dates of birth for four people whose data appears in a post from July 2014. At least three social security numbers, names, addresses, and dates of birth that appear in a different post from February 2015 also appear to be real, based on records searches. Motherboard called six of these victims but was unable to reach any of them. In some cases, we reached voicemail inboxes and the recorded greetings corresponded to the names contained in the Facebook posts. Facebook has been sluggish at policing these kinds of posts. Last week, security journalist Brian Krebs found more than 100 Facebook groups—some with thousands of members—whose members exchanged hacked or stolen data. Facebook deleted the groups after Krebs alerted the company. < Here >
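Mitchell's point that these posts follow a "doxing template" is what makes automated flagging plausible: formatted SSNs and card numbers are easy to match. A minimal sketch of such pattern matching follows; the regexes, keyword list and threshold are illustrative assumptions, not Facebook's actual detection system:

```python
import re

# Naive "doxing template" detector: flag posts containing SSN- or
# card-number-shaped strings, or multiple pieces of fraud-market jargon.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # e.g. 123-45-6789
CARD_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")    # 16-digit card numbers
KEYWORDS = ("ssn", "dob", "fullz", "cvv")          # common dox/fraud jargon

def looks_like_dox(post: str) -> bool:
    """Flag a post matching an SSN/card pattern or >= 2 dox keywords."""
    hits = sum(kw in post.lower() for kw in KEYWORDS)
    return bool(SSN_RE.search(post) or CARD_RE.search(post) or hits >= 2)

print(looks_like_dox("Selling fullz: name, DOB, SSN 123-45-6789"))  # True
print(looks_like_dox("Happy birthday! Call me at 555-0123"))        # False
```

A production system would need far more than this (fuzzy formats, images of documents, multilingual jargon), but the sketch shows why years-old template-shaped posts surviving a simple Google search is hard to excuse.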
  8. Facebook has taken the lion's share of scrutiny from Congress and the media about data-handling practices that allow savvy marketers and political agents to target specific audiences, but it's far from alone. YouTube, Google and Twitter also have giant platforms awash in more videos, posts and pages than any set of human eyes could ever check. Their methods of serving ads against this sea of content may come under the microscope next. Advertising and privacy experts say a backlash is inevitable against a "Wild West" internet that has escaped scrutiny before. There continues to be a steady barrage of new examples where unsuspecting advertisers had their brands associated with extremist content on major platforms. In the latest discovery, CNN reported that it found more than 300 retail brands, government agencies and technology companies had their ads run on YouTube channels that promoted white nationalists, Nazis, conspiracy theories and North Korean propaganda. Child advocates have also raised alarms about the ease with which smartphone-equipped children are exposed to inappropriate videos and deceptive advertising. "I absolutely think that Google is next and long overdue," said Josh Golin, director of the Boston-based Campaign for a Commercial-Free Childhood, which asked the Federal Trade Commission to investigate Google-owned YouTube's advertising and data collection practices earlier this month. YouTube has repeatedly outlined the ways it attempts to flag and delete hateful, violent, sexually explicit or harmful videos, but its screening efforts have often missed the mark. It also allows advertisers to avoid running ads against sensitive content—like news or politics—that doesn't violate YouTube guidelines but doesn't fit a company's brand. Those methods appear to have failed.
"YouTube has once again failed to correctly filter channels out of our marketing buys," said a statement Friday from 20th Century Fox Film, which learned that its ads were running on videos posted by a self-described Nazi. YouTube has since deleted the offending channel, but the Hollywood firm says it has unanswered questions about how it happened in the first place. "All of our filters were in place in order to ensure that this did not happen," Fox said, adding it has asked for a refund of any money shared with the "abhorrent channel." YouTube said Friday that it has made "significant changes to how we approach monetization" with "stricter policies, better controls and greater transparency" and said it allows advertisers to exclude certain channels from ads. It also removes ads when it's notified of problems running beside content that doesn't comply with its policies. "We are committed to working with our advertisers and getting this right." So far, just one major advertiser—Baltimore-based retailer Under Armour—had said it had withdrawn its advertising in the wake of the CNN report, though the lull lasted only a few days last week when it was first notified of the problem. After its shoe commercial turned up on a channel known for espousing white nationalist beliefs, Under Armour worked with YouTube to expand its filters to exclude certain topics and keywords. On the other hand, Procter & Gamble, which had kept its ads off of YouTube since March 2017, said it had come back to the platform but drastically pared back the channels it would advertise on to under 10,000. It has worked on its own, with third parties, and with YouTube to create its restrictive list. That's just a fraction of the some 3 million YouTube channels in the U.S. that accept ads, and is even more stringent than YouTube's "Google Preferred" lineup that focuses on the top most popular 5 percent of videos. 
The CNN report was "an illustration of exactly why we needed to go above and beyond just what YouTube's plans were and why we needed to take more control of where our ads were showing up," said P&G spokeswoman Tressie Rose. The big problem, experts say, is that advertisers lured by the reach and targeting capability of online platforms can mistakenly expect that the same standards of decency that apply on network TV will apply online. In the same way, broadcast TV rules that require transparency about political ad buyers are absent on the web. "There have always been regulations regarding appropriate conduct in content," says Robert Passikoff, president of Brand Keys Inc., a New York customer research firm. Regulating content on the internet is one area "that has gotten away from everyone." Also absent from the internet are many of the rules that govern children's programming on television sets. TV networks, for instance, are allowed to air commercial breaks but cannot use kids' characters to advertise products. Such "host-selling" runs rampant on internet services such as YouTube. Action to remove ads from inappropriate content is mostly reactive because of the lack of upfront control over what gets uploaded, and it generally takes the mass threat of boycott to get advertisers to demand changes, according to BrandSimple consultant Allen Adamson. "The social media backlash is what you're worried about," he said. At the same time, politicians are having trouble keeping up with the changing landscape, as was evident in how ill-informed many senators and congresspeople appeared during questioning of Facebook CEO Mark Zuckerberg earlier this month. "We're in the early stages of trying to figure out what kind of regulation makes sense here," said Larry Chiagouris, professor of marketing at Pace University in New York. "It's going to take quite some time to sort that out." < Here >
  9. Coalitions representing more than 670 companies and 240,000 members from the entertainment sector have written to Congress urging a strong response to the Facebook privacy fiasco. The groups, which include all the major Hollywood studios and key players from the music industry, are calling for Silicon Valley as a whole to be held accountable for whatever appears on their platforms. It has been a tumultuous few weeks for Facebook, and some would say quite rightly so. The company is a notorious harvester of personal information but last month’s Cambridge Analytica scandal really brought things to a head. With Facebook co-founder and Chief Executive Officer Mark Zuckerberg in the midst of a PR nightmare, last Tuesday the entrepreneur appeared before the Senate. A day later he faced a grilling from lawmakers, answering questions concerning the social networking giant’s problems with user privacy and how it responds to breaches. What practical measures Zuckerberg and his team will take to calm the storm are yet to unfold but the opportunity to broaden the attack on both Facebook and others in the user-generated content field is now being seized upon. Yes, privacy is the number one controversy at the moment but Facebook and others of its ilk need to step up and take responsibility for everything posted on their platforms. That’s the argument presented by the American Federation of Musicians, the Content Creators Coalition, CreativeFuture, and the Independent Film & Television Alliance, who together represent more than 650 entertainment industry companies and 240,000 members. CreativeFuture alone represents more than 500 companies, including all the big Hollywood studios and major players in the music industry.
In letters sent to the Senate Committee on the Judiciary; the Senate Committee on Commerce, Science, and Transportation; and the House Energy and Commerce Committee, the coalitions urge Congress to not only ensure that Facebook gets its house in order, but that Google, Twitter, and similar platforms do so too. The letters begin with calls to protect user data and tackle the menace of fake news but given the nature of the coalitions and their entertainment industry members, it’s no surprise to see where this is heading. “In last week’s hearing, Mr. Zuckerberg stressed several times that Facebook must ‘take a broader view of our responsibility,’ acknowledging that it is ‘responsible for the content’ that appears on its service and must ‘take a more active view in policing the ecosystem’ it created,” the letter reads. “While most content on Facebook is not produced by Facebook, they are the publisher and distributor of immense amounts of content to billions around the world. It is worth noting that a lot of that content is posted without the consent of the people who created it, including those in the creative industries we represent.” The letter recalls Zuckerberg as characterizing Facebook’s failure to take a broader view of its responsibilities as a “big mistake” while noting he’s also promised change. However, the entertainment groups contend that the way the company has conducted itself – and the manner in which many Silicon Valley companies conduct themselves – is supported and encouraged by safe harbors and legal immunities that absolve internet platforms of accountability. “We agree that change needs to happen – but we must ask ourselves whether we can expect to see real change as long as these companies are allowed to continue to operate in a policy framework that prioritizes the growth of the internet over accountability and protects those that fail to act responsibly. 
We believe this question must be at the center of any action Congress takes in response to the recent failures,” the groups write. But while the Facebook fiasco has provided the opportunity for criticism, CreativeFuture and its colleagues see the problem from a much broader perspective. They pull in companies like Google, which is also criticized for shirking its responsibilities, largely because the law doesn’t compel it to act any differently. “Google, another major global platform that has long resisted meaningful accountability, also needs to step forward and endorse the broader view of responsibility expressed by Mr. Zuckerberg – as do many others,” they continue. “The real problem is not Facebook, or Mark Zuckerberg, regardless of how sincerely he seeks to own the ‘mistakes’ that led to the hearing last week. The problem is endemic in a system that applies a different set of rules to the internet and fails to impose ordinary norms of accountability on businesses that are built around monetizing other people’s personal information and content.” Noting that Congress has encouraged technology companies to prosper by using a “light hand” for the past several decades, the groups say their level of success now calls for a fresh approach and a heavier touch. “Facebook and Google are grown-ups – and it is time they behaved that way. If they will not act, then it is up to you and your colleagues in the House to take action and not let these platforms’ abuses continue to pile up,” they conclude. But with all that said, there is an interesting conflict that develops when presenting the solution to piracy in the context of a user privacy fiasco. In the EU, many of the companies involved in the coalitions above are calling for pre-emptive filters to prevent allegedly infringing content from being uploaded to Facebook and YouTube. That means that all user uploads to such platforms will have to be opened and scanned to see what they contain before they’re allowed online.
So, user privacy or pro-active anti-piracy filters? It might not be easy or even legal to achieve both. https://torrentfreak.com/facebook-privacy-fiasco-sees-congress-urged-on-anti-piracy-action-180420/
  10. This week, Facebook will begin asking users for permission to use their personal data, the first official move by the social network ahead of the GDPR compliance deadline set for May 25, Facebook said in a Tuesday night blog post. Starting first in the EU, Facebook will prompt users on the site and the app with "permission screens" that will ask users to approve whether they want Facebook to be able to use certain personal data for features like facial recognition or to target ads based on political, religious, and relationship details. That permissions prompt will roll out in coming months to users in the US and the rest of the world. Users won't be able to prevent ad targeting altogether. The prompt will only give users the option to "accept and continue" (meaning they must accept Facebook's policies before proceeding), or "manage data settings" (through which users will be able to manually limit the kinds of data that advertisers can use to target them). For now, Facebook will continue to operate under an advertising model powered by targeted ads, so there will be no all-encompassing option for users to opt out, said Facebook Deputy Chief Privacy Officer Rob Sherman, per Reuters. To drive that point home, Sherman said that if a user is uncomfortable with their data being used to target them with ads, they "can choose not to be on Facebook." Accepting targeted ads will be a condition of using the service, so even if a user opts out of sharing certain pieces of data, they will still be shown ads, and any ad on Facebook will be somewhat targeted based on whatever data users allow. For the time being, the move is unlikely to impact Facebook's overall business. Facebook CEO Mark Zuckerberg testified last week that there will always be a version of Facebook that's free, but speculation is growing about Facebook offering a paid service to users who would prefer to opt out of sharing their data.
Facebook's ad revenue per user varies, but is highest in the US & Canada, where each user delivers the social network $82.44 a year, so the average US or Canadian user would need to pay roughly $7 a month to use the platform, per TechCrunch analysis. Such an option doesn't seem likely to take off just yet, as 77% of Americans said they wouldn't pay for an ad-free Facebook, per recent research by Toluna reported by Recode. Meanwhile, of the remainder who said they'd be willing to pay for ad-free Facebook, 41.6% said they'd only pay between $1 and $5, while 25% said they'd pay between $6 and $10. < Here >
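The arithmetic behind that estimate is easy to check; a quick sketch (the $82.44 annual figure comes from the TechCrunch analysis above, the rounding is ours):

```python
# Back-of-the-envelope check: convert annual ad revenue per US/Canada user
# into the monthly subscription fee that would replace it.
annual_revenue_per_user = 82.44  # USD per year (TechCrunch figure)

monthly_fee = annual_revenue_per_user / 12
print(f"Break-even subscription: ${monthly_fee:.2f}/month")  # about $6.87, i.e. roughly $7
```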
  11. Bill Gates said Facebook CEO Mark Zuckerberg did a good job in front of Congress last week, but must now play a leading role in helping the world figure out data privacy. He said Facebook will be "glad" to take a business hit to help people "feel good" about their data. Regulation is coming, Gates said, but should be done in a "thoughtful" way. Bill Gates has offered some reassuring words for embattled Facebook CEO Mark Zuckerberg — but warned that the company must play a leading role in helping the world figure out data privacy. Speaking on BBC Radio 4's "Today" programme, the Microsoft co-founder said Zuckerberg is responding "as best he can" to the scandal, which saw data from as many as 87 million Facebook accounts harvested, beginning in 2014. He described the CEO's congressional hearings last week as "a pretty strong mea culpa in terms of things they could have done better," adding that he did "quite a good job" under questioning from US lawmakers. But now, Gates said, Facebook must show it is taking a leading role in helping solve the issue of data protection online. "They are in the hot seat of some very state-of-the-art issues," he said. "As a leader, Facebook has got to help the world figure this out." Asked what advice he would give Zuckerberg, Gates said: "This is his total focus now. He is somebody who takes a long-term view. He'll be glad to reduce the business' prospects, to make sure that the privacy promises make consumers feel good."
Gates: Regulation is coming
Gates was also philosophical about the prospect of greater government intervention in the tech sector. "We will end up with more regulation," he told the BBC, highlighting areas where governments are already taking action, including on hate speech. Regulation should be done in a "thoughtful way," he added, with "alignment between countries."
The European Union is introducing General Data Protection Regulation privacy rules next month and Facebook is already rolling out changes globally in order to comply with the rules. For example, Facebook is about to ask whether users really want to share highly sensitive details about their life, such as religion, political leanings, or sexual orientation. Under the regulations, it is illegal to collect this kind of sensitive information unless people give explicit permission. < Here >
  12. Income, pregnancies, personal activities, all up for grabs Facebook’s advertising platform is riddled with loopholes that can help miscreants obtain private information on individual users, according to a recent study. Personally identifiable details – such as someone's email address, full name, date of birth, and home address – are used with their likes and dislikes to slot them into categories for targeted adverts. That means advertisers can zero in on their products' ideal buyers, and, say, sling expensive pet food ads at rich dog owners. However, these systems can also be exploited by scumbags to potentially slurp sensitive records. Researchers at the University of Southern California, in the US, studied Facebook’s targeted advertising capabilities in detail, and published their findings in a paper late last month. “We focus on three downsides: privacy violations, microtargeting (i.e., the ability to reach a specific individual or individuals without their explicit knowledge that they are the only ones an ad reaches) and ease of reaching marginalized groups,” the pair, Irfan Faizullabhoy and Aleksandra Korolova, stated in their paper's abstract. How it works Anyone with a Facebook profile can set up what's called a custom audience that defines a particular demographic for ad targeting. The trick here is to provide just enough information, and game the system, to narrow down the audience search results not to a select bunch of people, but down to just one unlucky person on the social network. Although Facebook treats that as an invalid demographic, its other tool, audience insights, which lets advertisers learn more about groups of netizens reached by adverts, can be used with that tiny custom audience to reveal that one person's private information. 
It means a miscreant can go on a fishing expedition, looking for a particular person or type of person, and extract private information, such as that person's age, income, how many people they live with, their personal activities, and so on, by combining the custom audience search function and the audience insights analytics. It seemingly gives anyone the power to learn more about strangers' lives – and there are more than 2,000 types of information that can be discerned per individual user. The researchers experimented with the analytics functions with the consent of their Facebook friends, who they asked to temporarily unfriend them. The duo found the audience insights results to be “highly accurate” when pulling up data on their pals. Information that can be gleaned from that tiny audience of one can range from hobbies to their family's details. “Questions such as, 'is this person [or] their wife pregnant?' 'how old are their children?' 'do they like to gamble?' 'are they living at home, or with roommates?' 'do they hunt?' can all be answered, efficiently and at no cost, by anyone,” Faizullabhoy and Korolova warned. The minimum number of people in a custom audience is, right now, 20. It’s a low number compared to 1,000 for Google, 300 for LinkedIn, and 500 for Twitter. By peppering in 19 fake or complicit accounts, for example, advertisers, and anyone else curious, can narrowly target and snoop on just a single person, or a group of people by going through them one at a time. Another potential flaw relates to Facebook allowing advertisers to refine their audience by location to within a one-mile radius. Small areas or even single houses can be targeted, as long as there are at least 20 users that match the advert’s criteria. It’s particularly concerning if those areas include vulnerable people who frequent planned parenthood clinics, rehab centers, or medical facilities, as they might be more easily picked out by ad campaigns. 
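The core of the snooping technique the researchers describe is a classic differencing attack on aggregate statistics: with 19 of the 20 audience members under the attacker's control, any per-audience aggregate leaks the remaining member's value. A minimal sketch (all numbers are invented for illustration; this models the attack class, not Facebook's actual tools):

```python
# Differencing attack on an aggregate statistic over a minimum-size audience.
# The attacker's custom audience contains one target plus 19 accounts whose
# attribute values (here, income) the attacker already knows.

known_incomes = [50_000] * 19   # values for the 19 fake or complicit accounts
target_income = 87_000          # the private value the attacker wants to learn

# The analytics tool only ever reports an aggregate over the full audience:
reported_mean = (sum(known_incomes) + target_income) / 20

# Subtracting the known contributions recovers the target's value exactly.
inferred = reported_mean * 20 - sum(known_incomes)
print(inferred)  # 87000.0
```

Raising the minimum audience size only helps if the attacker cannot pad the audience with accounts they control, which is why the researchers asked Facebook for a much larger minimum.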
“It's difficult to predict how such a powerful tool can be abused by a clever and resourceful adversary, especially because neither researchers nor users have full transparency into what is feasible using Facebook's advertising platform and what data about them is being used when ad matching and reporting is performed,” Korolova, an assistant professor of computer science, told The Register.
Facebook’s response: Do as little as possible
When the researchers alerted Facebook to these vulnerabilities, they were stonewalled.
At one point, though, Facebook agreed to increase the number of people able to be targeted using the custom audience tool and audience insights analytics from one to 20. The researchers were required to submit video proof of the network's shortcomings, and were awarded $2,000 from the website's bug bounty program. When the duo asked Facebook to increase the custom audience size to somewhere between 500 and 1,000, the Silicon Valley giant ignored the request, and still hasn't addressed it. For the geolocation targeting issue, the researchers were asked to “clarify how this bug is able to compromise the integrity of Facebook user data, circumvent the privacy protections of Facebook user data, or enable access to a system within Facebook’s infrastructure.” After the pair replied, Facebook did not respond, and even closed the bug bounty report, so that they could no longer engage in any sort of dialog. In the paper, Faizullabhoy and Korolova said they believed the reason why it was so easy to snoop on people via Facebook's ad platform was down to the website's careless approach to privacy. Facebook simply doesn’t care, they stated, to put it bluntly. “Facebook’s response to our white hat reports of 'single person targeting' that 'this is working as designed' shows an apathy toward micro-targeting and circumventions of the rudimentary micro-targeting protections Facebook has put in place,” the duo stated in their paper. Facebook declined to comment. Even when appearing before US Congress this week, CEO Mark Zuckerberg continued to dodge questions about the true nature of Facebook’s abilities to silently and secretly track millions of people’s online and offline activities, and how that information may be passed to third parties with dodgy intentions. Source
  13. March media reports revealed that a researcher sold Facebook user data to an outside firm, Cambridge Analytica. But the negative press hasn't deterred users, CEO Mark Zuckerberg said. When asked, Zuckerberg said there has not been a dramatic falloff in the number of people that use Facebook. Although recent scrutiny has "clearly hurt" Facebook's mission, CEO Mark Zuckerberg said there has not been a dramatic decline in usage. Zuckerberg spoke on Tuesday at a joint hearing of the Senate Judiciary and Commerce committees, where he was questioned for more than four hours about Facebook's treatment of user data. The company has been under pressure after March media reports revealed that a researcher sold Facebook user data to an outside firm, Cambridge Analytica, which has been associated with President Donald Trump's campaign. But the negative press hasn't deterred users, Zuckerberg said, in response to a question from Senator Ron Johnson, R-Wis. Johnson: With all this publicity, have you documented any kind of backlash from Facebook users? I mean, has there been a dramatic falloff of the number of people that utilize Facebook because of these concerns? Prior to the Cambridge Analytica scandal, Facebook was already struggling to address the fallout around the 2016 presidential election. Critics blamed Facebook for the spread of misinformation during the election, and Russian agents also used fake social media accounts to share divisive content in the U.S. In January, Zuckerberg said that Facebook took action to reduce deceptive content, and that those changes had already reduced time spent on the site by 50 million hours per day, or 5 percent. Source
  14. Last night, there were reports that Facebook Messages from Zuckerberg and other Facebook executives sent to users had mysteriously gone missing. Users don’t have the ability to delete sent messages, but those at the top of the platform did so anyway, leaving the received messages intact. The timing coincides with all the criticism that Facebook has gotten over the Cambridge Analytica scandal. This kind of erratic action was seen as a breach of trust between Facebook executives and its users. In fact, Facebook confirmed to TechCrunch that it did indeed delete some messages after TechCrunch reported that it had email receipts as proof that messages were indeed deleted. In response, Facebook revealed to TechCrunch its plans to launch a feature to delete sent messages for Facebook users. According to Facebook, after the 2014 hacking of Sony’s emails, the company made “a number of changes to protect our executives’ communications. These included limiting the retention period for Mark’s messages in Messenger. We did so in full compliance with our legal obligations to preserve messages”. Facebook didn’t elaborate on how the feature would work and said the feature is still being planned. Facebook explains that it already offers a timer function that users can use to set messages to delete themselves after a specified amount of time. This feature, however, is only available if a conversation is started as a “Secret conversation”. The thing is, Zuckerberg and other Facebook executives deleted messages from regular, private conversations with users, so some are uneasy that they were deleted without notice or discretion. Perhaps announcing the unsend feature is a way to soften the severity of deleting messages from users’ conversations without consent or notice by making it look like they were testing the feature. It was only after TechCrunch told Facebook it had proof that messages were deleted that the company actually came forward with its plans to launch such a feature.
Are you looking forward to a feature for unsending Facebook messages? Do you think Facebook Messenger will be the same if unsending messages becomes a common feature? Gsmarena.com
  15. SAN FRANCISCO (Reuters) - Facebook Inc (FB.O) Chief Executive Mark Zuckerberg said on Tuesday the social network had no immediate plans to apply a strict new European Union law on data privacy in its entirety to the rest of the world, as the company reels from a scandal over its handling of personal information of millions of its users. Zuckerberg told Reuters in a phone interview that Facebook already complies with many parts of the law ahead of its implementation in May. He said the company wanted to extend privacy guarantees worldwide in spirit, but would make exceptions, which he declined to describe. “We’re still nailing down details on this, but it should directionally be, in spirit, the whole thing,” said Zuckerberg. He did not elaborate. His comments signal that U.S. Facebook users, many of them still angry over the company’s admission that political consultancy Cambridge Analytica got hold of Facebook data on 50 million members, may soon find themselves in a worse position than Europeans. The European law, called the General Data Protection Regulation (GDPR), is the biggest overhaul of online privacy since the birth of the internet, giving Europeans the right to know what data is stored on them and the right to have it deleted. Apple Inc (AAPL.O) and some other tech firms have said they do plan to give people in the United States and elsewhere the same protections and rights that Europeans will gain. Shares of Facebook closed up 0.5 percent on Tuesday at $156.11. They are down more than 15 percent since March 16, when the scandal broke over Cambridge Analytica. PUSH FOR DATA PRIVACY Privacy advocacy groups have been urging Facebook and its Silicon Valley competitors such as Alphabet Inc’s (GOOGL.O) Google to apply EU data laws worldwide, largely without success. 
“We want Facebook and Google and all the other companies to immediately adopt in the United States and worldwide any new protections that they implement in Europe,” said Jeff Chester, executive director of the Center for Digital Democracy, an advocacy group in Washington. Google and Facebook are the global leaders in internet ad revenue. Both based in California, they possess enormous amounts of data on billions of people. Google has declined to comment on its plans. Zuckerberg said many of the tools that are part of the law, such as the ability of users to delete all their data, are already available for people on Facebook. “We think that this is a good opportunity to take that moment across the rest of the world,” he said. “The vast majority of what is required here are things that we’ve already had for years across the world for everyone.” When GDPR takes effect on May 25, people in EU countries will gain the right to transfer their data to other social networks, for example. Facebook and its competitors will also need to be much more specific about how they plan to use people’s data, and they will need to get explicit consent. GDPR is likely to hurt profit at Facebook, both because it could reduce the value of ads if the company cannot use personal information as freely, and because of the added expense of hiring lawyers to ensure compliance with the new law. Failure to comply with the law carries a maximum penalty of up to 4 percent of annual global revenue. It should not be difficult for companies to extend EU practices and policies elsewhere because they already have systems in place, said Nicole Ozer, director of technology and civil liberties at the American Civil Liberties Union of California. Companies’ promises are less reassuring than laws, she said: “If user privacy is going to be properly protected, the law has to require it.” Source
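To give a sense of scale for that 4 percent ceiling, a rough illustration (the ~$40.65 billion revenue figure is Facebook's reported 2017 revenue, an approximation added here, not a number from the article):

```python
# Rough upper bound on a GDPR fine: up to 4% of annual global revenue.
annual_revenue = 40.65e9         # approx. Facebook 2017 revenue in USD (assumed figure)
max_fine = 0.04 * annual_revenue
print(f"Maximum GDPR penalty: ${max_fine / 1e9:.2f} billion")  # about $1.63 billion
```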
  16. Emmanuel Macron, the French president, has warned that Google and Facebook are becoming too big to be governed and could face being dismantled. Internet giants could be forced to pay for the disruption they cause in society and submit to French or European privacy regulations, he suggested. In an interview with the magazine Wired, the president warned that artificial intelligence (AI) would challenge democracy and open a Pandora’s box of privacy issues. He was speaking after announcing a €1.5bn (£1.32bn) investment in artificial intelligence research to accelerate innovation and catch up with China and the US. Mr Macron said companies such as Google and Facebook were welcome in France, brought jobs and were “part of our ecosystem”. But he warned: “They have a very classical issue in a monopoly situation; they are huge players. At a point of time – but I think it will be a US problem, not a European problem – at a point of time, your government, your people, may say, ‘Wake up. They are too big.’” Facebook chief executive Mark Zuckerberg said last month he was open to governments regulating tech companies. “[The] question isn't ‘Should there be regulation or shouldn’t there be?’ – it’s ‘How do you do it?’” Mr Zuckerberg said. At the start of this year, billionaire investor and philanthropist George Soros added his voice to a chorus calling for major technology firms to be reined in, calling Google and Facebook monopolies in need of regulation. Apple's chief executive Tim Cook has called for more regulation of Facebook in the wake of the Cambridge Analytica data scandal. Mr Macron also hinted in the interview that the online giants might be forced to put more money towards compensation for disrupting traditional economic sectors. “We have to retrain our people,” he said. “These companies will not pay for that – the government will. “Today the GAFA [Google, Apple, Facebook, and Amazon] don’t pay all the taxes they should in Europe.
So they don’t contribute to dealing with negative externalities they create. And they ask the sectors they disrupt to pay, because these guys, the old sectors pay VAT, corporate taxes and so on. That’s not sustainable.” He said people should remain sovereign on privacy rules. “I want to protect privacy in this way or in that way. You don’t have the same rule in the US. And speaking about US players, how can I guarantee French people that US players will respect our regulation? So at a point of time, they will have to create actual legal bodies and incorporate it in Europe, being submitted to these rules.” “Not just too big to fail, but too big to be governed. Which is brand new. “So at this point, you may choose to dismantle. That’s what happened at the very beginning of the oil sector when you had these big giants. That’s a competition issue.” Accountability and democracy happen at national or regional level but not at a global scale, he added. “If I don’t walk down this path, I cannot protect French citizens and guarantee their rights. If I don’t do that, I cannot guarantee French companies they are fairly treated. Because today, when I speak about GAFA, they are very much welcome – I want them to be part of my ecosystem, but they don’t play on the same level playing field as the other players in the digital or traditional economy.” He added: “All I know is that if I don’t, at a point of time, have this discussion and regulate them, I put myself in a situation not to be sovereign any more.” The president envisaged a European sovereignty in AI. “Artificial intelligence is a global innovation scheme in which you have private big players and one government with a lot of data – China. My goal is to recreate a European sovereignty in AI, especially on regulation. You will have sovereignty battles to regulate, with countries trying to defend their collective choices,” he said. 
“AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences. For instance, if you take healthcare, you can totally transform medical care, making it much more predictive and personalised if you get access to a lot of data. We will open our data in France. “But the day you start dealing with privacy issues, the day you open this data and unveil personal information, you open a Pandora’s box, with potential use cases that will not be increasing the common good and improving the way to treat you. “In particular, it’s creating a potential for all the players to select you. This can be a very profitable business model: this data can be used to better treat people, it can be used to monitor patients, but it can also be sold to an insurer that will have intelligence on you and your medical risks, and could get a lot of money out of this information. The day we start to make such business out of this data is when a huge opportunity becomes a huge risk. It could totally dismantle our national cohesion and the way we live together. “This leads me to the conclusion that this huge technological revolution is in fact a political revolution.” Mr Zuckerberg, while accepting a need for regulation, warned last month against governments micromanaging tech companies, and how they handle privacy breaches, hate speech and offensive content. He attacked Germany’s new Network Enforcement Act, under which technology companies must immediately investigate hate speech complaints, delete hateful content within 24 hours or face fines of up to €50m. Source
  17. Judge dismisses lawsuit accusing Facebook of tracking users’ activity, saying responsibility was on plaintiffs to keep browsing history private A judge has dismissed a lawsuit accusing Facebook of tracking users’ web browsing activity even after they logged out of the social networking site. The plaintiffs alleged that Facebook used the “like” buttons found on other websites to track which sites they visited, meaning that the Menlo Park, California-headquartered company could build up detailed records of their browsing history. The plaintiffs argued that this violated federal and state privacy and wiretapping laws. US district judge Edward Davila in San Jose, California, dismissed the case because he said that the plaintiffs failed to show that they had a reasonable expectation of privacy or suffered any realistic economic harm or loss. Davila said that plaintiffs could have taken steps to keep their browsing histories private, for example by using the Digital Advertising Alliance’s opt-out tool or using “incognito mode”, and failed to show that Facebook illegally “intercepted” or eavesdropped on their communications. “Facebook’s intrusion could have easily been blocked, but plaintiffs chose not to do so,” said Davila, who dismissed an earlier version of the five-year-old case in October 2015. Clicking on the Facebook “like” button on a third party website – for example, theguardian.com – allows people to share pieces of content to Facebook without having to copy and paste the link into a status update on the social network. When a user visits a page with an embedded “like” button, the web browser sends information to both Facebook and the server where the page is located. “The fact that a user’s web browser automatically sends the same information to both parties does not establish that one party intercepted the user’s communication with the other,” said Davila. 
The plaintiffs cannot bring privacy and wiretapping claims again, Davila said, but can pursue a breach of contract claim again. In 2011, Australian internet security blogger Nik Cubrilovic first discovered that Facebook was apparently tracking users’ web browsing after they logged off. Responding to Cubrilovic, Facebook engineer Gregg Stefancik confirmed that Facebook has cookies that persist after log-out as a safety measure (to prevent others from trying to access the account) but that the company does not use the cookies to track users or sell personal information to third parties. However, in 2014 Facebook started using web browsing data for delivering targeted “interest-based” advertising – which explains why ads for products you have already been looking at online appear in your Facebook feed. To address privacy concerns, Facebook introduced a way for users to opt out of this type of advertising targeting from within user settings. “We are pleased with the court’s ruling,” said a Facebook spokeswoman. Source
  18. Facebook has been collecting call records and SMS data from Android devices for years. Several Twitter users have reported finding months or years of call history data in their downloadable Facebook data file. A number of Facebook users have been spooked by the recent Cambridge Analytica privacy scandal, prompting them to download all the data that Facebook stores on their account. The results have been alarming for some. “Oh wow my deleted Facebook Zip file contains info on every single cellphone call and text I made for about a year,” says Twitter user Mat Johnson. Another, Dylan McKay, says “somehow it has my entire call history with my partner’s mum.” Others have found a similar pattern where it appears close contacts, like family members, are the only ones tracked in Facebook’s call records. Ars Technica reports that Facebook has been requesting access to contacts, SMS data, and call history on Android devices to improve its friend recommendation algorithm and distinguish between business contacts and your true personal friendships. Facebook appears to be gathering this data through its Messenger application, which often prompts Android users to let it take over as the default SMS client. Facebook has, at least recently, been offering an opt-in prompt that prods users with a big blue button to “continuously upload” contact data, including call and text history. It’s not clear when this prompt started appearing in relation to the historical data gathering, and whether it has simply been opt-in the whole time. Either way, it has clearly alarmed some who have found call history data stored on Facebook’s servers. While the recent prompts make the collection clear, Ars Technica points out the troubling aspect that Facebook has been doing this for years, during a time when Android permissions were a lot less strict. 
Google changed Android permissions to make them clearer and more granular, but developers could bypass this and continue accessing call and SMS data until Google deprecated the old Android API in October. Facebook has responded to the findings, but the company appears to suggest it’s normal for apps to access your phone call history when you upload contacts to social apps. “The most important part of apps and services that help you make connections is to make it easy to find the people you want to connect with,” says a Facebook spokesperson, in response to a query from Ars Technica. “So, the first time you sign in on your phone to a messaging or social app, it’s a widely used practice to begin by uploading your phone contacts.” The same call record and SMS data collection has not yet been discovered on iOS devices. While Apple does allow some specialist apps to access this data in limited ways, such as blocking spam calls or texts, these apps have to be specifically enabled through a process that’s similar to enabling third-party keyboards. The majority of iOS apps cannot access call history or SMS messages, and Facebook’s iOS app is not able to capture this data on an iPhone. Facebook may need to answer some additional questions on this data collection, especially around when it started and whether Android users truly understood what data they were allowing Facebook to collect when they agreed to enable phone and SMS access in an Android permissions dialogue box or Facebook’s own prompt. The data collection revelations come in the same week Facebook has been dealing with the fallout from Cambridge Analytica obtaining personal information from up to 50 million Facebook users. Facebook has altered its privacy controls in recent years to prevent such an event from occurring again, but the company is facing a backlash of criticism over the inadequate privacy controls that allowed this to happen. 
CEO Mark Zuckerberg has also been summoned to appear before a UK Parliamentary committee to explain how data was taken without users’ consent. Source
  19. Updated at 6:30 p.m. ET Facebook CEO Mark Zuckerberg issued a lengthy post Wednesday on his personal Facebook page promising to protect the data of platform users. He said Facebook will provide users with tools to show who has access to their data and how it is shared. Facebook will also "restrict developers' data access even further to prevent other kinds of abuse." "We have a responsibility to protect your data, and if we can't, then we don't deserve to serve you," he wrote. The post marks the first public comments made by Zuckerberg about the controversy involving Cambridge Analytica's use of personal data posted by Facebook users. Zuckerberg said there was a "breach of trust" involving Cambridge Analytica, Facebook, and a Cambridge University researcher named Aleksandr Kogan, who created an app to collect data that was later shared with Cambridge Analytica. "But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that," he wrote. Zuckerberg said Facebook will restrict third-party developers from accessing data beyond names, profile photos and email addresses. The company will also require developers to sign a contract before asking Facebook users for access to their posts or other private data. Users will also see a tool at the top of their news feeds showing what apps they have used, along with an easy way to revoke those apps' access to their data. Sen. Richard Blumenthal, D-Conn., a member of the Senate Judiciary and Commerce Committees, said he was not satisfied with Zuckerberg's statement, calling it "damage control." Blumenthal discussed his concerns further with NPR's Ailsa Chang on All Things Considered. Source
  20. With the recent report of Facebook users' data being harvested and used for information warfare, many people are looking to delete their accounts, or at least their Facebook posts, in order to have a clean slate. Deleting posts, though, can be a very time consuming task as you normally would have to go into each and every post and manually remove them. Manually Deleting a Facebook Post Thankfully there is a Chrome extension called Social Book Post Manager that makes it much easier to bulk delete or unlike your Facebook posts by automating the process. Social Book Post Manager Extension Using the Social Book Post Manager Extension To Delete Facebook Posts Before you delete your posts, you may want to first create a backup of your Facebook data. This data includes your posts, photos and videos, your messages and chat conversations, and info in the About section of your profile. To create a backup, you can go into Settings and then the General Account Settings (click this link to go to the page) screen. On that screen will be a link titled "Download a copy of your Facebook data", as shown below. When you click on the Download a copy link, follow the prompts and Facebook will start creating a backup of your data that you can download. When this backup is ready, Facebook will email you at your registered email address. When you have downloaded your data backup, install the Social Book Post Manager extension, open Facebook, and go to your Activity Log. To access the Activity Log, click on the down arrow next to the question mark in the Facebook navigation header as shown below. Open Facebook Menu Once you open the Facebook menu, you should see a list of options, including the Activity Log. Open Activity Log Click on the Activity Log link and you will now be at a page that displays all of the activity you have had on your Facebook profile including the friends you added, the posts you have made, and the posts you have liked. 
While on this page, click on the filter for the type of activity you wish to delete on the left-hand side of the page. For example, we want to delete only our posts right now, so as shown in the image below we are going to click on the Posts filter to only show posts we made to our Facebook profile. Click on the Facebook Posts Filter Now click on Social Book Post Manager's icon to open the extension. Open the Social Book Post Manager Extension Once you click on the extension's icon, the Social Book Post Manager extension will open and present a list of filters that you can use to delete posts on Facebook. Open the Social Book Post Manager Filters These filters determine what posts will be removed: you can filter for posts made in certain years or months, or posts that contain certain strings. There is also a Prescan on Page option that, when checked, will cause the extension to show what posts will be removed before actually removing them. Confirm to Delete Once the prescan finishes, click the OK button to close the alert and review the activity log. If you are happy with the posts that were selected for deletion, click on the Confirm to delete button at the top of the page to actually delete them. On the other hand, if you are not happy with the selection, you can simply refresh the page to see the activity log as it was before. There have been some reports that utilizing the prescan on heavily populated activity logs can cause problems. Therefore, if your goal is to delete all of your Facebook posts before you delete your Facebook account, or you just want to have a completely clean slate, you can uncheck the Prescan on Page option and just let the extension delete everything. Bleepingcomputer.com
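The extension's year/month/string filtering and "prescan" step amount to a simple selection pass over your posts before anything is deleted. The sketch below models that logic in Python; the data structure and function are invented for illustration and are not the extension's actual code.

```python
from datetime import date

# Hypothetical post records, standing in for entries in the activity log.
posts = [
    {"text": "Happy New Year!", "posted": date(2016, 1, 1)},
    {"text": "Vacation photos", "posted": date(2017, 7, 4)},
    {"text": "Check out this article", "posted": date(2016, 6, 15)},
]

def select_posts(posts, year=None, month=None, contains=None):
    """Return the posts matching every supplied filter (None means 'any')."""
    selected = []
    for post in posts:
        if year is not None and post["posted"].year != year:
            continue
        if month is not None and post["posted"].month != month:
            continue
        if contains is not None and contains.lower() not in post["text"].lower():
            continue
        selected.append(post)
    return selected

# A "prescan" only lists what would be deleted, without touching anything yet.
pending = select_posts(posts, year=2016)
print([p["text"] for p in pending])  # ['Happy New Year!', 'Check out this article']
```

Separating selection from deletion in this way is what makes the prescan safe: refreshing the page throws away the pending selection, just as the article describes.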
  21. It may be time to leave the world’s biggest social network If you’ve finally given up on the world’s most popular social media network and want to get rid of Facebook, it’s not too complicated to remove yourself from the service. But before you delete all of those pictures, posts, and Likes, you should download your personal information from Facebook first. Your Facebook archives contain just about all of the pertinent information related to your account, including your photos, active sessions, chat history, IP addresses, facial recognition data, and which ads you clicked, just to name a few. That’s a ton of personal information that you should probably maintain access to. To download your archive, go to “Settings” and click “Download a copy of your Facebook data” at the bottom of General Account Settings, and then click “Start My Archive.” After you’ve finished downloading your archive, you can now delete your account. Beware: once you delete your account, it cannot be recovered. If you are ready to delete your account, you can click this link, which will take you to the account deletion page. (Facebook doesn’t have the delete account option in its settings, for some reason.) Once you click “Delete My Account,” your account will be marked for termination, and inaccessible to others using Facebook. The company notes that it delays termination for a few days after it’s requested. If you log back in during that period, your deletion request will be canceled. So don’t sign on, or you’ll be forced to start the process over again. Certain things, like comments you’ve made on a friend’s post, may still appear even after you delete your account. Facebook also says that copies of certain items like log records will remain in its database, but notes that those are disassociated with personal identifiers. 
The company says it can take up to 90 days to fully delete your account and the information associated with it, but it notes that your account will be inaccessible to other people using Facebook during that time. If you’re really serious about quitting Facebook, remember that the company owns several other popular services as well, like Instagram and WhatsApp, so you should delete your accounts there as well. Source
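The delayed-deletion behaviour described above (a request is only finalised after a grace period, and logging in during that window cancels it) can be sketched as a small state machine. This is a hypothetical model for illustration only: Facebook says merely "a few days" before termination and up to 90 days for a full purge, so the 14-day constant here is an assumed placeholder.

```python
from datetime import datetime, timedelta

# Assumed grace period; Facebook does not publish the exact length.
GRACE_PERIOD = timedelta(days=14)

class Account:
    def __init__(self):
        self.deletion_requested_at = None  # pending request timestamp, if any
        self.deleted = False

    def request_deletion(self, now):
        # Marks the account for termination after the grace period.
        self.deletion_requested_at = now

    def log_in(self, now):
        # Logging in during the grace period cancels the pending request,
        # forcing the user to start the process over.
        if self.deletion_requested_at is not None and not self.deleted:
            self.deletion_requested_at = None

    def purge_if_due(self, now):
        # Finalise the deletion once the grace period has elapsed.
        if (self.deletion_requested_at is not None
                and now - self.deletion_requested_at >= GRACE_PERIOD):
            self.deleted = True

account = Account()
start = datetime(2018, 3, 1)
account.request_deletion(start)
account.log_in(start + timedelta(days=2))   # signing back in cancels the request
account.purge_if_due(start + GRACE_PERIOD)
print(account.deleted)  # False: the login reset the pending deletion
```

If the user never signs back in, the same `purge_if_due` call after the grace period would finalise the deletion.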
  22. The probes follow a weekend of turmoil for Facebook after reports that Cambridge Analytica gained access to the data of more than 50 million users. Shares of Facebook fell nearly 5 percent Tuesday, after falling as much as 8 percent on Monday. UK officials are also investigating the alleged mishandling of data. The Federal Trade Commission is investigating whether the use of personal data from 50 million Facebook users by Cambridge Analytica violated a consent decree the tech company signed with the agency in 2011, Bloomberg reported Monday. The probe follows a weekend of turmoil for the social media giant. Reports this weekend said the research firm improperly gained access to the data of more than 50 million Facebook users. "We are aware of the issues that have been raised but cannot comment on whether we are investigating. We take any allegations of violations of our consent decrees very seriously as we did in 2012 in a privacy case involving Google," a spokesman for the FTC said Tuesday. A violation of the consent decree could carry a penalty of $40,000 per violation, which could mean a fine conservatively estimated to be "many millions of dollars in fines" for Facebook, The Washington Post reported over the weekend, citing a former FTC official. Facebook will brief members of the House Energy and Commerce Committee Wednesday, according to a committee spokesman. Facebook officials have also tentatively agreed to brief lawmakers on the House Judiciary Committee on the matter as early as Wednesday, Bloomberg reported. Facebook has maintained the mishandling of data was the result of abuse on the part of Cambridge Analytica and app developer Aleksandr Kogan. "We reject any suggestion of violation of the consent decree. We respected the privacy settings that people had in place. Privacy and data protections are fundamental to every decision we make," Facebook said in a statement to the Post on Saturday. 
The consent decree requires that Facebook notify users and receive explicit permission before sharing personal data beyond their specified privacy settings. Weekend reports by The Observer newspaper in the U.K. and The New York Times allege Facebook users willingly provided their data to a psychology quiz app developed by Kogan, who then passed the data along to Cambridge Analytica without the users' knowledge — constituting a potential violation. "We remain strongly committed to protecting people's information. We appreciate the opportunity to answer questions the FTC may have," Rob Sherman, deputy chief privacy officer at Facebook, said in a statement. Shares of Facebook fell nearly 5 percent Tuesday, after skidding as much as 8 percent on Monday. UK officials are also investigating, ordering auditors hired by Facebook to stand down and summoning CEO Mark Zuckerberg to provide evidence for review. Source
  23. (Reuters) - Facebook Inc's shares fell more than 4 percent in premarket trading after media reports that a political consultancy that worked on President Donald Trump's campaign gained inappropriate access to data on 50 million Facebook users. The move would knock $23.8 billion off the social network's market value of $538 billion as of Friday's close, and shares in other social media companies including Twitter Inc and Snap Inc also dipped in early deals in New York. One Wall Street analyst said the reports pointed to 'systemic problems' with Facebook's business model, and a number said it could spur far deeper regulatory scrutiny of the platform. The head of the European Parliament said on Monday that EU lawmakers will investigate whether the data misuse has taken place, adding that the alleged misuse would be an unacceptable violation of citizens' privacy rights. Facebook was already facing new calls for regulation from U.S. Congress and questions about personal data safeguards after the reports from the New York Times and London's Observer over the weekend. The papers reported on Saturday that private information from more than 50 million Facebook users improperly ended up in the hands of data analytics firm Cambridge Analytica, and that the information had not been deleted despite Facebook demands dating back to 2015. "We think this episode is another indication of systemic problems at Facebook," said Brian Wieser, analyst at New York-based brokerage Pivotal Research Group, which already has a "sell" rating on a stock that rose 60 percent last year. Wieser argued that regulatory risks for the company would intensify and that enhanced use of data in advertising would be at greater risk than before. He added, however, that it was unlikely to have a meaningful impact on the company's business for now, with advertisers unlikely to "suddenly change the trajectory of their spending growth on the platform". 
"This episode appears likely to create another and potentially more serious public relations 'black eye' for the company and could lead to additional regulatory scrutiny," said Peter Stabler, analyst at Wells Fargo. The losses would be Facebook's biggest daily fall since a broader market pullback in February. In January, when Facebook announced changes to its newsfeed which it said would hit user engagement in the near term, shares fell 4.5 percent in one day. "It's clear with more 'heat in the kitchen from the Beltway' that further modest changes to their business model around advertising and news feeds/content could be in store over the next 12 to 18 months," said Daniel Ives, research analyst at GBH Insights. He also argued that the issue was "background noise" on which Facebook could calm any regulatory nerves through further investments in security, ad content AI, improved content algorithms and screening mechanisms. No analysts had so far changed their price targets or recommendations on Facebook in response to the reports. Wall Street is largely bullish on the stock, with 40 of 44 analysts recommending the stock "buy" or higher. Shares of the company were down 4.4 percent at $177.90 by 9:13 a.m. ET. Source: https://ca.news.yahoo.com/facebook-shares-slide-reports-data-misuse-120513980--finance.html
  24. According to an unnamed employee who worked at Facebook, the company deploys “secret police” in order to crack down on would-be leakers. The employee found out about Facebook’s anti-leaking efforts after becoming the target of Facebook’s investigative team, which had compiled proof that he had been leaking company information to the press. In an interview, the anonymous employee described how he was lured into a meeting by the investigators after receiving a message from his manager at Facebook telling him he was in line for a promotion. When he arrived at the meeting room the following day, however, he came face to face with the company’s head of investigations, Sonya Ahuja. Source
  25. ICO probe: No legal basis for Facebook slurps WhatsApp has agreed not to share users' data with parent biz Facebook after failing to demonstrate a legal basis for the ad-fuelling data slurp in the EU. The move comes after a years-long battle between the biz and European data protection agencies, which argued that changes to WhatsApp's small print hadn't been properly communicated and didn't comply with EU law. An investigation by the UK's Information Commissioner's Office, which reported today, confirmed the biz had failed to identify a legal basis for sharing personal data in a way that would benefit Facebook's business. Moreover, any such sharing would have been in breach of the Data Protection Act. In response, WhatsApp has agreed to sign an undertaking (PDF) in which it commits not to share any EU user data with any other Facebook-owned company until it can comply with the incoming General Data Protection Regulation. The ICO celebrated the deal as a "win for the data protection of UK customers" – a statement that Paul Bernal, IP and internet law expert at the University of East Anglia, said he agreed with only up to a point. "This is indeed a 'win', but a limited one," he told The Register. "It's only a commitment until they believe they've worked out how to comply with the GDPR – and I suspect they'll be working hard to find a way to do that to the letter rather than to the spirit of the GDPR." Using consent as the lawful basis? No dice At the heart of the issue is consent. In summer 2016, a privacy policy update said that, although it would continue to operate as a separate service, WhatsApp planned to share some account information, including phone numbers, with Facebook for targeted advertising, business analysis and system security. Although users could withhold consent for targeted advertising, they could not for the other two purposes – any users that didn't like the terms would have to stop using WhatsApp. 
The EU data protection bodies have previously said that this "like it or lump it" approach to service use doesn't constitute freely given consent – as required by EU rules. Similarly, they felt that WhatsApp's use of pre-ticked boxes was not "unambiguous" and that the information provided to users was "insufficiently specific". The ICO has also noted that matching account data might lead to "privacy policy creep", with further uses of data slipping into the Ts&Cs unnoticed by users. The investigation – which looked only at situations where WhatsApp wanted to share information with Facebook for business interests, not service support – confirmed concerns that the policy wasn't up to scratch. Information commissioner Elizabeth Denham said WhatsApp had not identified a lawful basis for processing, or given users "adequate fair processing information" about any such sharing. "In relation to existing users, such sharing would involve the processing of personal data for a purpose that is incompatible with the purpose for which such data was obtained," she said. She added that if the data had been shared, the firm "would have been in contravention of the first and second data protection principles" of the UK's Data Protection Act. WhatsApp has maintained that it hasn't shared any personal data with Facebook in the EU, but in a letter to the biz's general counsel Anne Hoge, Denham indicated that this had not been made clear at the outset. Denham wrote that the initial letter from WhatsApp had only stated data sharing was paused for targeted ads. It was, she said, "a fair assumption for me to make" that WhatsApp may have shared data for the other two purposes, "but have at some point since that letter decided to pause" this too. However, she said that since WhatsApp has "assured" the ICO that "no UK user data has ever been shared with Facebook", she could not issue the biz with a civil monetary penalty and had to ask WhatsApp to sign the undertaking instead. 
Next up: Legitimate interests Denham's letter makes it clear that the companies will be working to make sure that data sharing can go ahead in a lawful way, particularly for system security purposes, for which it may consider using the "legitimate interests" processing condition. She noted that there would be "a range" of legitimate interests – such as fighting spam or for business analytics – but that in all cases it would need to show that processing was necessary to achieve it, and balance it against individuals' rights. Bernal said that if the biz had any plans to use the consent condition for processing, it "will need huge scrutiny". "It's almost impossible for most users to understand what they're really consenting to," he said. "And if ordinary users can't understand, how can they consent?" Jon Baines, data protection adviser at Mishcon de Reya, also noted that the fact WhatsApp had held its ground on what he described as a "key point" could put the ICO in a difficult position down the line. "It's very interesting that the ICO is classing this as a 'win', because – although on the surface it seems like a success – it's notable that WhatsApp have reserved their position on a key point, which is whether the processing in question falls under the UK's remit by virtue of the fact that it takes place in the UK on users' devices," he said. "Normally the effect of an informal undertaking will be to encourage a data controller voluntarily to take or cease action, to avoid the need for legal enforcement which would otherwise be available. "Here, should WhatsApp subsequently fail to perform the undertaking, the ICO might be compromised if there is no clear basis on which it can follow up with enforcement action." In a statement sent to The Register, WhatsApp emphasised the pause it had put on data sharing. 
"As we've repeatedly made clear for the last year we are not sharing data in the ways that the UK Information Commissioner has said she is concerned about anywhere in Europe." It added that it "cares deeply" about users' privacy and that "every message is end-to-end encrypted". Source