Showing results for tags 'facebook'.




Found 351 results

  1. Facebook to introduce auto expiring posts and more

Facebook, one of the most popular social networks, has slowly become noisier. The original idea, a place for conversation and for connecting people, seems to be getting lost. The service has been all but taken over by videos, which people watch but soon tire of. I had to uninstall Facebook because all it showed was video after video. It looks like Facebook can no longer ignore this. Mark Zuckerberg recently shared how Facebook could change in the coming years.

Facebook to introduce auto expiring posts

In his post, he mostly emphasizes privacy, something Facebook is not exactly known for, thanks to thousands of connected apps and services. Facebook wasn't really at fault when it started. People did like to share everything, from a morning selfie to a childbirth to announcing a date. Things went too far, and it took time for everyone to realize they had shared everything about themselves with the world. According to Facebook's analysis, users are now also looking for privacy. Facebook is planning to build something along the lines of WhatsApp. The first step is to make it secure, and then to develop more ways for people to interact, including calls, video chats, businesses, and payments.

Messaging System Evolution

It sounds like a messaging system that can integrate almost everything while staying completely private. Here is an outline of what looks to be in the works.

1] Private interactions: making sure no one else can see what you share.

2] Encryption: end-to-end encryption prevents anyone, including Facebook, from reading your messages.

3] Expiring posts: whatever you post on a social network normally stays there. Facebook plans for expiring posts or messages, which users can choose so that old content disappears and no longer haunts them. Here are the key points: Content automatically expires or is archived over time. Messages could be deleted after a month or a year by default. Users would have the ability to change the timeframe or turn off auto-deletion for their threads. Facebook would also limit the amount of time it stores messaging metadata.

4] Communicate across networks: Facebook now owns WhatsApp and Instagram, with Messenger at its core. With phone numbers and email IDs connected, users will be able to communicate across networks easily and securely. Say, for example, you sell on Facebook Marketplace, but your primary way of communicating is WhatsApp. This feature would let you send an encrypted message from Messenger to someone's phone number on WhatsApp, with no need to display your phone number in public. A second use case is sharing stories or statuses instantly across Facebook, WhatsApp, and Instagram. And lastly, with WhatsApp payments in place, one could make payments across platforms. All this could make things seamless, but of course it will still be walled inside Facebook's own castle.

A Platform for private sharing

Facebook has invested heavily in messaging. It bought WhatsApp and Instagram, and now it is all making sense. Billions of users are hooked into the system, with more to join. With secure chat and cross-platform payments, we could be looking at the evolution of messaging into a platform for private sharing. Facebook plans this over the next year, with a lot of it already in the works. Facebook claims to be taking advice from experts, advocates, industry partners, and governments — including law enforcement and regulators — around the world to get these decisions right. In my opinion, what will matter is how well Facebook turns this into a business. Throughout his post, Zuckerberg has talked about companies and payments. If Facebook goes all out to take over the payment system, it will not be surprising.

Source
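The retention rules outlined in the key points above (a default expiry of a month or a year, a per-thread timeframe the user can change, and an opt-out) can be modeled in a few lines. This is an illustrative sketch with assumed names and defaults, not Facebook's implementation:

```python
from datetime import datetime, timedelta, timezone

# Assumed default: the post mentions "a month or a year" as possible
# defaults; 30 days is used here purely for illustration.
DEFAULT_EXPIRY = timedelta(days=30)

def is_expired(sent_at, now, expiry=DEFAULT_EXPIRY, auto_delete=True):
    """Decide whether a message should be auto-deleted.

    `expiry` models the per-thread timeframe the user can change;
    `auto_delete=False` models turning the feature off for a thread.
    """
    if not auto_delete:
        return False
    return now - sent_at > expiry
```

A background sweep job would then periodically delete (or archive) every message for which `is_expired` returns True.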
  2. Facebook blames server tweak for blackout issues

Facebook has only just offered an explanation for the problems it has experienced over the past 24 hours.

Facebook has said that a "server configuration change" was to blame for the worst outage in its history. It said it had "triggered a cascading series of issues" for its platforms, including WhatsApp and Instagram. The disruption, which lasted for more than 14 hours, left most of its products inaccessible around the globe. It took the social network giant a full day from when the problems began to offer any explanation. It added that everything was now back to normal.

'Very sorry'

"Yesterday, we made a server configuration change that triggered a cascading series of issues," Facebook said. "As a result, many people had difficulty accessing our apps and services. "We have resolved the issues and our systems have been recovering over the last few hours. "We are very sorry for the inconvenience and we appreciate everyone's patience."

'Screw-ups'

Commentators have questioned the length of time it took the social network to issue an explanation for the disruption, which affected advertisers who have marketing campaigns on the platform as well as consumers. Independent security analyst Graham Cluley told the BBC: "Facebook's motto always used to be 'move fast and break things'. That's fine when you're an innovative start-up, but when billions of people are using your site every month it's not a good way to run the business." Some early reports suggested that the social network could be under cyber-attack, something that Facebook was quick to deny on rival platform Twitter. "When popular sites like these go dark many people often think there must be a sinister explanation - such as a hacker attack," said Mr Cluley. "However, anyone who has worked in IT for any length of time knows that screw-ups are all too common. It doesn't always have to be cyber-criminals who are to blame." Source
  3. Facebook reportedly under criminal probe for data sharing practices with ‘partners’

Federal prosecutors reportedly are probing Facebook’s data sharing partnerships with electronics companies, including smartphone makers, and a grand jury has subpoenaed information from at least two firms. The partnerships – with more than 150 companies, according to a report by the New York Times – made it possible for the partner companies to gain access to Facebook user data. Last year, in a blog post, Facebook defended sharing user data with at least 60 mobile device manufacturers, in an effort to make its services and experiences available to device owners via integrated APIs.

“Given that these APIs enabled other companies to recreate the Facebook experience, we controlled them tightly from the get-go,” Ime Archibong, Facebook’s VP of product partnerships, wrote, explaining that mobile device manufacturers are considered trusted partners who essentially act as extensions of Facebook. “These partners signed agreements that prevented people’s Facebook information from being used for any other purpose than to recreate Facebook-like experiences. Partners could not integrate the user’s Facebook features with their devices without the user’s permission. And our partnership and engineering teams approved the Facebook experiences these companies built.”

But some believe Facebook’s legal comeuppance is long overdue. “Anyone who has illegally trafficked in personal data – or failed to report the loss of it – should face criminal charges,” said Shane Green, the CEO of digi.me and co-founder and chair of UBDI, who claims that Facebook has done both. “Their business model makes it hard for them to know the difference.” Green said it should come as no surprise that the social media giant is under investigation. “It’s high time government authorities held them accountable,” he said. Source
  4. Please, Facebook, give these chatbots a subtext!

Facebook's latest research on intelligent agents shows the field continues to be divorced from the underlying motivations and impulses that make dialogue and interaction rewarding.

Imagine yourself in a restaurant, waiting for a table. A stranger approaches and asks, "Do you come here often?" Do you reply, "Yes, and how come I've never seen your type here before?" because you choose to pick up on the hint and flirt? Glower and say nothing because you want to be left alone? Say, "Excuse me," because the maître d' is signaling your table is ready and you want to get out of this exchange and get to your table? People do and say things because of a subtext, what they're really, ultimately after. The subtext underlies all human interaction. Machines, at least in the form of current machine learning, lack a subtext. Or, rather, their subtext is as dull as dishwater. That much is clear from the latest work by Facebook's AI Research scientists. They trained a neural network model to make utterances, select actions, and choose ways to emote, based on the structure of a text-based role-playing game. The results suggest interactions with chatbots and other artificial agents won't be compelling anytime soon. The underlying problem, as seen in other recent work by the team, last year's Conversational Intelligence Challenge, is that these bots don't have much of a subtext. What subtext they do have amounts to forming context-appropriate outputs. As such, there's no driving force, no real reason for speaking or acting, and the results aren't pretty.

The research, "Learning to Speak and Act in a Fantasy Text Adventure Game," is posted on the arXiv pre-print server. It is authored by several of the Facebook researchers who helped organize last year's Challenge, including Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, Samuel Humeau, and Jason Weston.
Snippets of text generated from Facebook's "LIGHT" artificial agent system. The machine plays the role of "Traveler" in response to the human-generated "Friend" exclamations. (Image: Facebook AI Research)

The authors offer a new setting for language tasks, called "LIGHT," which stands for "Learning in Interactive Games with Humans and Text." LIGHT is constructed via a crowd-sourcing approach: people were asked to invent descriptions of made-up locations, including countryside, bazaar, desert, and castle, totaling 663 in all. The human volunteers then populated those settings with characters, from humans to animals to orcs, and with objects that could be found in them, including weapons, clothing, food, and drink. They also had the humans create almost 11,000 example dialogues as they interacted in pairs in the given environments. The challenge, then, was to train the machine to pick out things to do, say, and "emote" by learning from the human examples. As the authors put it, to "learn and evaluate agents that can act and speak within" the created environments.

For that task, the authors trained four different kinds of machine learning models. Each one attempts to learn various "embeddings," representations of the places, things, utterances, actions, and emotions that are appropriate in combination. The first model is a "baseline" model based on StarSpace, a 2017 model, also crafted by Facebook, that can perform a wide variety of tasks such as applying labels to things and conducting information retrieval. The second model is an adaptation by Dinan and colleagues of the "transformer" neural network developed in 2017 at Google, whose use has exploded in the last two years, especially for language tasks. The third is "BERT," an adaptation by Google of the transformer that makes associations between elements in a "bi-directional" fashion (think both left-to-right and right-to-left, in the case of strings of words).
The fourth neural network approach tried is known as a "generative" network, using the transformer not only to pay attention to information but to output utterances and actions. The test for all of this is how the different approaches perform at producing dialogue, actions, and emoting, once given a human prompt. The short answer is that the transformer and the BERT models did better than baseline results, while the generative approach didn't do so well. Most important, "Human performance is still above all these models, leaving space for future improvements in these tasks," the authors write. Indeed, even though only a few examples of the machine's output are provided, it seems the same overall problem crops up again as appeared in last year's Challenge. In that chatbot competition, the over-arching goal, for human and machine, could be described as "make friends." The subtext was to show interest in one's interlocutor, to learn about them, and to get the other side to know a little bit about oneself. On that task, the neural networks in the Challenge failed hard. Chatbots repeatedly spewed out streams of information that were repetitive and that seemed poorly attuned to the cues and clues coming from the human interlocutor. Similarly, in the snippets of computer-generated dialogue shown in the LIGHT system, the machine generates utterances and actions appropriate to the setting and to the utterance of the human interlocutor, but it is strictly a prediction of linguistic structure, absent much purpose. When the authors performed an ablation, meaning they tried removing different pieces of information, such as actions and emotions, they found that the most significant thing the machine could be supplied with was the history of dialogue for a given scene. Their work remains essentially a task of word prediction, in other words. Artificial agents of this sort may have an expanded vocabulary, but it's unlikely they have any greater sense of why they're interacting.
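The retrieval-style models compared above (StarSpace, the transformer ranker, BERT) share one mechanism: embed the dialogue context and each candidate utterance, then rank candidates by similarity. A toy sketch of that ranking step, using a stand-in bag-of-words embedding where the real systems use a trained encoder:

```python
import math

def embed(text, dim=8):
    """Stand-in embedding: hashed bag-of-words, normalized to unit length.
    (A trained encoder such as StarSpace or a transformer goes here.)"""
    v = [0.0] * dim
    for word in text.lower().split():
        # Deterministic bucket per word (toy substitute for learned weights).
        v[sum(ord(ch) for ch in word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def rank_candidates(context, candidates):
    """Rank candidate utterances by cosine similarity to the context,
    best first -- the core of retrieval-based response selection."""
    c = embed(context)
    return sorted(
        candidates,
        key=lambda cand: sum(a * b for a, b in zip(c, embed(cand))),
        reverse=True,
    )
```

The learned models differ in how `embed` is computed, not in this overall select-the-best-candidate structure, which is why all of them remain prediction of fit rather than purposeful dialogue.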
They still lack a subtext like their human counterparts. All is not lost. The Facebook authors have shown how they can take word and sentence prediction models and experiment with their text abilities by combining more factors, such as a sense of place and a sense of action. Until, however, their bots have a sense of why they're interacting, it's quite likely artificial-human interactions will remain a fairly dull affair. Source
  5. Facebook sues Ukrainian browser extension makers for scraping user data

Facebook said the malicious extensions were installed by more than 63,000 users.

Funnytest.pro - one of the sites cited in the Facebook civil complaint. Image: ZDNet

Facebook has filed a suit against two Ukrainian developers for creating Facebook apps and browser extensions that harvested user data and injected ads into users' timelines. The two developers cited in the lawsuit Facebook filed late Friday, March 8, are Gleb Sluchevsky and Andrey Gorbachov, both based in Kiev and working for a company called the Web Sun Group. According to court documents, Sluchevsky and Gorbachov ran at least four web apps that provided quizzes on various topics. The web apps were advertised and shared on Facebook, but they were hosted on a multitude of third-party websites such as megatest.online, supertest.name, testsuper.su, testsuper.net, fquiz.com, and funnytest.pro. Named "Supertest," "FQuiz," "Megatest," and "Pechenka," the web apps were mainly aimed at Russian- and Ukrainian-speaking audiences, and enticed users with themes such as "Do you have royal blood?", "You are yin. Who is your yang?", and "What kind of dog are you according to your zodiac sign?", among many others. Sluchevsky and Gorbachov ran their scheme between 2016 and 2018, Facebook said. Once users landed on these sites, they'd be prompted to enable push notifications in their browsers, which at later points would prompt the user to install various browser extensions. These extensions contained malicious code that would scrape the user's profile for public and non-public data and insert authentic-looking ads into victims' timelines. Other social networking sites were also targeted, but Facebook didn't name the other victimized sites in its civil complaint. The extensions were promoted on at least three official browser stores and sent user data back to servers in the Netherlands under the two suspects' control.
In total, Facebook said, the malicious extensions were installed more than 63,000 times. "Defendants used the compromised app users as a proxy to access Facebook computers without authorization," Facebook said. The company is seeking an injunction and restraining order against the two developers to prohibit them from creating any more apps targeting Facebook users. It is also requesting financial relief for its efforts investigating the defendants' operation, and restitution of any funds the two made through the scheme. The Daily Beast and Law360 first reported the lawsuit on Friday. This is Facebook's second lawsuit of this kind. A week earlier, on March 1, Facebook sued four companies and three people in China for operating a network that sold fake accounts, likes, and followers on Facebook and Instagram. Source
  6. Facebook finds UK-based 'fake news' network

Facebook has removed more than 130 accounts, pages and groups it says were part of a UK-based misinformation network. The company said it was the first time it had taken down a UK-based group targeting messages at British citizens. The same group set up pages posing both as far-right outlets and as anti-fascist activists. Facebook said it had shared its discovery with law enforcement and the government. The group was able to gain followers by setting up innocent-looking pages and groups. It later renamed them, and started posting politically-motivated content. MP Damian Collins, who chairs a committee investigating fake news, said it was the "tip of the iceberg".

Hate speech

Facebook said about 175,000 people followed at least one of the fake pages, which included 35 profiles on Instagram. The company said the pages "engaged in hate speech and spread divisive comments on both sides of the political debate in the UK". "They frequently posted about local and political news including topics like immigration, free speech, racism, LGBT issues, far-right politics, issues between India and Pakistan, and religious beliefs including Islam and Christianity. "We're taking down these pages and accounts based on their behaviour, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action." The BBC understands Facebook discovered the network of inauthentic accounts while investigating hate speech about the UK Home Secretary Sajid Javid.

Inauthentic activity

One of the posts tried to create tension between Christians and LGBT people. Another Facebook post tried to insult "leftists". Facebook said the pages had spent about $1,500 (£1,140) on advertising between them. The earliest advert was placed in December 2013, and the most recent in October 2018.
Facebook said it had not completed its "review of the organic content coming from these accounts". Separately, the company has also removed 31 pages, groups and accounts for engaging in "co-ordinated inauthentic behaviour" in Romania. These accounts, not linked to the UK network, posted biased news in support of the Social Democratic Party. Source
  7. Facebook to refocus messaging around encryption and privacy CEO Mark Zuckerberg said Facebook will retool its messaging services to be more interoperable, ephemeral, and with end-to-end encryption. Facebook chief executive Mark Zuckerberg said on Wednesday that the company will rebuild many of its services around encryption and privacy. The changes will take years, Zuckerberg said, with the ultimate end-goal being the creation of a privacy-focused social platform that blends the community of a public social network with the intimacy and security of private messaging on WhatsApp. The initial phase of Facebook's security overhaul will tackle messaging. Zuckerberg said he expects that Facebook users will eventually shift to Messenger and WhatsApp for social communications, rather than posting publicly on their profile or News Feed. To that end, Zuckerberg said Facebook will retool its messaging services to be more interoperable, ephemeral, and with end-to-end encryption. The idea is to give people more control over their private messages and how long they're stored, and to reduce the permanence of the content that people share. Interoperability will also enable Facebook to integrate messaging between the Messenger, Instagram and WhatsApp platforms. "We plan to build this the way we've developed WhatsApp: focus on the most fundamental and private use case -- messaging -- make it as secure as possible, and then build more ways for people to interact on top of that, including calls, video chats, groups, stories, businesses, payments, commerce, and ultimately a platform for many other kinds of private services," Zuckerberg wrote in a lengthy blog post. Zuckerberg also goes on to detail how Facebook will approach secure data storage, including new considerations about where it builds data centers and whether it will abide by foreign data storage laws when governments seek access to people's information. 
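The end-to-end property described above rests on the two endpoints agreeing on a key that a relaying server never learns. A tiny textbook Diffie-Hellman sketch illustrates the principle; the group here (p=23, g=5) is deliberately toy-sized, and real systems such as the Signal protocol underlying WhatsApp use large elliptic-curve groups and far more machinery:

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters -- illustrative only, far too small to be
# secure. Real deployments use large standardized groups or elliptic curves.
P, G = 23, 5

def keypair():
    """Make a (private, public) pair; only the public value leaves the device."""
    private = secrets.randbelow(P - 3) + 2   # private exponent in 2 .. P-2
    public = pow(G, private, P)
    return private, public

def shared_key(my_private, their_public):
    """Both endpoints derive the same key from the other side's public value.
    A server that only relays the public values cannot compute it."""
    secret = pow(their_public, my_private, P)
    return hashlib.sha256(str(secret).encode()).digest()
```

Two clients exchange only public values through the messaging server, yet both derive the same 32-byte key, which is what lets the service carry messages it cannot read.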
Zuckerberg also acknowledged that his company will likely face challenges of confidence as it undertakes such a significant security turnaround. "I understand that many people don't think Facebook can or would even want to build this kind of privacy-focused platform -- because frankly we don't currently have a strong reputation for building privacy protective services, and we've historically focused on tools for more open sharing," Zuckerberg wrote in the post. "But we've repeatedly shown that we can evolve to build the services that people really want, including in private messaging and stories." Source
  8. Facebook is now rolling out a dark mode to Messenger users

You have to moon someone

Facebook has been overhauling its Messenger app recently, slimming it down to focus more on chat, and promised that it would eventually roll out a dark mode for users. That mode is now rolling out, but there's a tongue-in-cheek trick you need to do in order to activate it: you have to moon someone. The trick, spotted by Android Police, 9to5Mac, and others, is simple: send someone (or yourself) a crescent moon emoji. Once you do so, a shower of moons appears in the chat window, and you'll get a prompt to activate the mode in settings. Go to your profile page in the app, and it'll present you with an option to turn the mode on.

Image: Andrew Liptak

Android Police notes that the mode has apparently not rolled out to everyone, and that some have gotten a message saying that it's still a work in progress. Source
  9. Facebook Held Liable for Copyright Infringing ‘Links’ in Italy

Social networking giant Facebook must pay damages for failing to remove a link to a copyright-infringing YouTube video, an Italian court has ruled. The case, which was filed by the Italian media conglomerate Mediaset, centered on a Facebook group where infringing and defamatory clips of the cartoon “Kirarin Revolution” were posted.

Like other sites that rely on user-generated content, Facebook has to battle a constant stream of unauthorized copyrighted material. When it comes to tackling infringement, Facebook has rolled out a few anti-piracy initiatives in recent years. The company has a “Rights Manager” tool that detects infringing material automatically. In addition, it also processes some takedown requests manually. Not all of these notices result in a takedown. For a variety of reasons, Facebook may choose not to intervene. In Italy, this resulted in a drawn-out legal battle which has now come to a conclusion at the Court of Rome. The case in question was filed by Mediaset, the media conglomerate founded by former Italian prime minister Silvio Berlusconi. Mediaset noticed that links to copyrighted clips of the cartoon “Kirarin Revolution” were posted on Facebook. The actual content was hosted on YouTube, but Mediaset’s legal team went after Facebook instead. It asked the social network to remove the postings of a particular Facebook group, which were shared with derogatory comments about the people involved in the show. This week the Court of Rome ruled that Facebook is indeed liable for failing to remove the copyright-infringing hyperlinks. The company was ordered to pay €9,000 in damages as a result. As Facebook was also held liable for defamation, the total damages add up to €35,000. While the damages are not groundbreaking, for Mediaset this was a matter of principle.
TorrentFreak spoke to Mediaset lawyer Alessandro La Rosa, who handled the case together with colleagues at the Previti law firm. The Court of Rome’s order shows that intermediaries such as Facebook can be held liable when they fail to respond to copyright infringement allegations. “Despite Facebook’s role as a passive hosting provider, in this case, it’s obliged to take down and prevent access to illicit information uploaded on its website. The provider is expected to carry out its economic activity with the due diligence that’s reasonably expected to identify and prevent the reported illegal activities,” Mr. La Rosa tells us. Mediaset sent the first notice regarding the infringing activity in 2010, but Facebook decided not to take any action at the time. Although the offending group was identified, the takedown requests didn’t include a link to the infringing content, Facebook argued. The Court considered this defense but concluded that a link to the infringing content isn’t necessary. Facebook was alerted to the group and the alleged activities and could have taken action based on this information. “According to the Court, the identification of the URL is only technical data which doesn’t coincide with the individual harmful content present on the platform, but only indicates the ‘place’ where this content is found and, therefore, isn’t an indispensable prerequisite for its identification,” Mr. La Rosa says. It’s worth noting that Mediaset never attempted to remove the actual infringing clip from YouTube. Mediaset mostly wanted Facebook to deal with the Facebook group which posted the infringing material in a defamatory context. Mediaset is happy with the outcome, according to its legal team. Since the verdict is partly based on EU law, Mr. La Rosa believes that it will help other rightsholders make a case against Facebook and similar platforms going forward. Facebook is currently considering whether to file an appeal.
The company can easily pay the damages, but it may be worried about the broader implications of the ruling. “We are examining the decision of the Court of Rome,” a Facebook spokesperson told Adnkronos in response, adding that the company takes the protection of copyright holders “very seriously”. Source
  10. Facebook Awarded $25,000 Bounty For Reporting a CSRF Vulnerability

Facebook has been going through tough times since the Cambridge Analytica scandal. Nonetheless, its vigilance towards the security of its platform comes as good news for bug bounty hunters. Particularly after this report, many bug bounty hunters will be eager to find vulnerabilities in the Facebook platform. A hacker discovered a critical CSRF vulnerability that left Facebook accounts open to attack, and Facebook acknowledged his effort with a $25,000 bounty.

Critical CSRF Vulnerability Discovered In Facebook

Recently, bug bounty hunter Youssef Sammouda found a critical cross-site request forgery bug in the Facebook platform. This CSRF vulnerability could allow an attacker to take over accounts effortlessly. Sammouda elaborated on his findings in a blog post. Explaining the flaw, he wrote, “This is possible because of a vulnerable endpoint which takes another given Facebook endpoint selected by the attacker along with the parameters and make a POST request to that endpoint after adding the fb_dtsg parameter.” The vulnerable endpoint, as highlighted, was https://www.facebook.com/comet/dialog_DONOTUSE/?url=XXXX. Here, XXXX denotes the endpoint targeted by the POST request. According to Sammouda, the vulnerability existed in an endpoint under the domain “www.facebook.com”, which made it easier for a potential attacker to exploit the flaw. An attacker could hijack Facebook accounts simply by tricking victims into clicking a malicious link. The hacker demonstrated a range of actions he could perform by exploiting such a link, including making a timeline post, deleting the profile picture, or even tricking the user into deleting the account. Demonstrating the exploit further, he explained that the same link could be used to take over accounts.
All it required was adding a new phone number and email address to the target account.

Facebook Awarded $25,000 Bounty

Winning a hefty $25,000 bounty for reporting a single bug is no small feat, and it seems Facebook realized the critical nature of the vulnerability Sammouda reported. As he wrote in his blog, the vulnerability could let an attacker take over any account: “The attack seems long but it’s done in a blink of an eye and it’s dangerous because it doesn’t target a specific user but anyone who visits the link” In simple words, this CSRF vulnerability had put almost all Facebook accounts at risk of takeover. Thankfully, Sammouda promptly reported the flaw to Facebook, and Facebook acted quickly, fixing it within five days of the initial report. The recent report highlights a critical security flaw that Facebook patched in time, but it isn't the first instance where Facebook has had to deal with a critical vulnerability. Last year, we heard of numerous instances where Facebook bugs exposed users' photos and triggered account hacks in the millions. Source
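The attack above worked because a Facebook endpoint attached the anti-CSRF token (the fb_dtsg parameter) to a request the attacker chose. The defence that normally stops CSRF is simple: bind a secret token to the session, embed it in legitimately served forms, and reject any state-changing request whose token doesn't match. A minimal sketch with assumed names, not Facebook's actual code:

```python
import hashlib
import hmac
import secrets

# Per-deployment secret; in practice stored server-side and rotated.
SERVER_SECRET = secrets.token_bytes(32)

def issue_token(session_id):
    """Derive the anti-CSRF token embedded in forms served to this session
    (playing the role fb_dtsg plays on Facebook)."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def accept_post(session_id, submitted_token):
    """Allow a state-changing POST only if the token matches the session,
    using a constant-time comparison."""
    return hmac.compare_digest(issue_token(session_id), submitted_token)
```

A forged cross-site request cannot read the victim's token, so `accept_post` rejects it; the bug Sammouda found was dangerous precisely because a trusted endpoint added the valid token on the attacker's behalf.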
  11. Vaccine misinformation and Infowars: Researchers wary of Facebook's embrace of 'Groups'

Two years ago, Facebook CEO Mark Zuckerberg announced a renewed focus on communities and its Groups feature, boasting that it would help create a more “meaningful” social infrastructure. It worked. Last month, Zuckerberg told investors that “hundreds of millions” of users reported belonging to “meaningful groups,” up from one million in 2017, when the company began focusing on Groups. “Connecting with communities of people that you're interested in is going to be as central to the experience as connecting with friends and family,” he told investors last month. But that shift has also moved Facebook activity out of public view, leaving researchers to warn that they now know less about what is happening on the social network and how the company’s algorithm-driven recommendations are funneling people to fringe communities and misinformation.

“As a researcher, we can get content but content isn’t what you need to research Facebook and see how these groups work,” said Jonathan Albright, director of research at Columbia University's Tow Center for Digital Journalism. “We need to know more about the networks and members of groups that spread false information and target individuals.” “It would take a lot of work and more of a partnership than Facebook is willing to establish, especially for this kind of work,” Albright said.

Groups gained public attention this week when The Guardian uncovered a vast network of groups spreading false information about vaccines. “STOP Mandatory Vaccination,” one of the largest anti-vaccine private groups at more than 126,000 members, is led by Larry Cook, a self-described “social media activist” with no children or medical training. Cook and his members promote the disproven theory that vaccines cause autism and spread conspiracies that outbreaks of preventable diseases are “hoaxes” perpetrated by the government.
In response to the negative press, Facebook told Bloomberg News it was considering “reducing or removing this type of content” from the recommendations it offers users. Those recommendations have also come under increased scrutiny. Part of Facebook’s plan for growth has relied on its recommendation engine — the algorithm the company relies on to get users to join more groups. The algorithm that powers the right rail of “suggested groups” isn’t public, and a Facebook spokesperson declined to give details other than saying they are tailored to individual users. But researchers like Renée DiResta, who studies online disinformation as director of research at cybersecurity company New Knowledge, have found Facebook “actively pushes” users down a rabbit hole of increasingly misinformed, conspiratorial and radical communities. Zuckerberg has acknowledged the significance of the recommendation engine, writing in 2017, “Most don't seek out groups on their own — friends send invites or Facebook suggests them.” Facebook is not the only tech company to face criticism for developing automated recommendations that push users toward misinformation. YouTube recently announced it would be changing its own recommendation engine to stop suggesting conspiracy videos, after years of similar outcry from researchers and former employees critical of what they claimed was the company’s bargain in which it suggested extreme content to keep people watching. Source
  12. Facebook broke law, must be regulated to protect democracy, politicians say A report on fake news from Parliament condemns Mark Zuckerberg's "contempt" towards the UK and other countries around the world. British politicians accused Facebook of knowingly and intentionally violating data privacy and antitrust law in a damning report into fake news published late Sunday. The report says tech and social media companies should be forced to comply with a compulsory code of ethics overseen by an independent regulator, which should have powers to take legal action against companies breaching the code. The report comes as a result of an inquiry conducted last year by Parliament's Digital, Culture, Media and Sport (DCMS) Committee into fake news and the spread of disinformation. When the revelations about the data consultancy firm Cambridge Analytica gaining access to millions of Facebook users' data came to light in March 2018, the committee looked closely at the social network's role in the scandal. In the course of its investigations, the committee examined the ways in which Facebook might have impacted the outcome of elections, including possible Russian interference, ad targeting and access to user data that violated the privacy rights of users. The report concluded that current electoral law is not fit for purpose in the digital age, leaving democracy at risk from online threats, and that regulating social media will help curb these risks. The document also condemned some of Facebook's policies and practices -- in particular the way in which it prevented some smaller companies from accessing data, effectively killing their business. "Companies like Facebook exercise massive market power, which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers," said Damian Collins, chair of the DCMS Committee in a statement. 
"The guiding principle of the 'move fast and break things' culture often seems to be that it is better to apologise than ask permission." Facebook rejected the committee's claims that it breached antitrust and data privacy laws and said it had found no evidence of coordinated foreign interference on Facebook during the Brexit referendum. Karim Palant, UK public policy manager at Facebook, pointed to changes that Facebook has made over the past 12 months, including new rules about how it authorizes political ads and the tripling in size of the team working to detect and protect users from bad content. "No other channel for political advertising is as transparent and offers the tools that we do," he said. "While we still have more to do, we are not the same company we were a year ago." Zuck's 'contempt' for Parliament On multiple occasions throughout 2018, the committee invited company CEO Mark Zuckerberg to give evidence in person or via video link. But representatives for Zuckerberg rebuffed every invitation, including one from the International Grand Committee investigating fake news, made up of representatives from nine countries across the world, which met in Parliament last November. Instead, other Facebook executives, including CTO Mike Schroepfer and VP of Public Policy for Europe Richard Allan (who also sits in Parliament's House of Lords), appeared in his place. (Pictured: a protester wearing a model head of Facebook CEO Mark Zuckerberg poses for media outside Parliament on Nov. 27, 2018 in London.) "We share the committee's concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence," said Palant. But the politicians responsible for questioning the Facebook executives had a different view of their performances as witnesses in the inquiry. 
During questioning, they consistently expressed their dismay at Facebook's inability to provide full, clear answers. "We believe that in its evidence to the committee, Facebook has often deliberately sought to frustrate our work, by giving incomplete, disingenuous and at times misleading answers to our questions," said Collins in a statement. The committee members speculated in the report that the Facebook executives who appeared before Parliament may have deliberately not been briefed on certain issues. The report also described Zuckerberg as showing "contempt" towards the UK and the other legislators for his decision not to attend or respond to the invitations personally. Collins, who has led the calls for Zuckerberg to appear, was particularly outspoken on the CEO's decision to duck questioning: "Even if Mark Zuckerberg doesn't believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my committee shows he still has questions to answer yet he's continued to duck them, refusing to respond to our invitations directly or sending representatives who don't have the right information. Mark Zuckerberg continually fails to show the levels of leadership and personal responsibility that should be expected from someone who sits at the top of one of the world's biggest companies." The committee has now officially concluded its inquiry, but Collins has made it clear on several occasions that he would still like to hear from Zuckerberg personally. If the CEO enters the UK, it's possible Parliament could issue a formal summons that would compel him to appear for questioning. Facebook didn't respond specifically to comments about Zuckerberg's non-attendance at evidence sessions. Source
  13. Facebook, FTC reportedly negotiating massive fine to settle privacy issues The multibillion-dollar fine would be the largest ever imposed by the FTC, according to The Washington Post. Facebook CEO Mark Zuckerberg. Facebook and the Federal Trade Commission are negotiating a multibillion-dollar fine to settle an investigation into the social network's privacy practices, The Washington Post reported Thursday. It'd be the largest fine ever imposed by the agency, according to the Post, though the exact amount hasn't yet been determined. Facebook was initially concerned with the FTC's demands, a person familiar with the matter told the publication. If the two parties don't come to an agreement, the FTC could reportedly take legal action. A Facebook representative said the company isn't commenting on the Post report, but added: "We are cooperating with officials in the US, UK, and beyond. We've provided public testimony, answered questions, and pledged to continue our assistance as their work continues." The FTC didn't immediately respond to a request for comment. The FTC began investigating Facebook last year after it was revealed that Cambridge Analytica, a digital consultancy linked to the Trump presidential campaign, improperly accessed data from as many as 87 million Facebook users. The agency is looking into whether Facebook's actions violated a 2011 agreement with the government in which it pledged to improve its privacy practices. Facebook has said it didn't violate the consent decree. Under the agreement, Facebook agreed to get permission from users before sharing their data with third parties. In addition, the tech giant is required to have a third party conduct audits every two years for the next 20 years to ensure the program is effective. Facebook reportedly could reach a deal with the government by agreeing to pay a fine and altering some of its business practices. A judge would have to approve the settlement, according to the Post. 
The FTC could impose new rules forcing Facebook to go through more stringent, regular checkups to ensure it's in compliance with the settlement, people familiar with the matter told the Post. Alternatively, the tech giant could reportedly opt to challenge the FTC over its findings and suggested penalties. The FTC's last record-setting fine against a tech company for breaking a privacy agreement was reportedly against Google in 2012, for $22.5 million. Source
  14. How to Delete Accidentally Sent Messages, Photos on Facebook Messenger Ever sent a message on Facebook Messenger and immediately regretted it, fired off an embarrassing text to your boss late at night, or accidentally posted messages or photos to the wrong group chat? Of course you have. We have all sent drunk texts and embarrassing photos that we later regretted but were forced to live with. Good news: Facebook is now giving us a way to erase our little embarrassments. After offering a similar feature to WhatsApp users two years ago, Facebook is now rolling out a long-promised option to delete text messages, photos, or videos inside its Messenger application, starting Tuesday, February 5. You Have 10 Minutes to Delete Sent Facebook Messages The unsend feature allows users to delete a message within 10 minutes of sending it, in both individual and group chats. Previously, Messenger offered a "delete" option that removed messages only for the sender; the recipient could still see them. Now, the option includes two choices, "remove for everyone" and "remove for you," giving users more control over messages they have already sent. The social network promised the unsend feature in Messenger after it was revealed last year that Facebook CEO Mark Zuckerberg had an option to "delete" messages he had sent on the messaging app. As promised, the company has now made the unsend option available to all users. Obviously, unsend does not mean unseen. If the recipient reads your message before you delete it, the unsend feature won't help you. But if you act quickly, you can remove the message before it is seen on the other side of the conversation. Here's How to Unsend Messages on Facebook Messenger It is quite simple and straightforward. Long press on the message you want to remove. 
You will get a standard emoji response window at the top of that message, as well as three options at the bottom of the screen: Copy, Remove, and Forward. Selecting the Remove option will then display two choices: "Remove for Everyone" and "Remove for You." You know what you have to do now. Tapping the "Remove for Everyone" option will remove the message from the chat so that nobody can see it afterwards. It should be noted that the unsend feature also works for removing photos and videos. Just like WhatsApp, Messenger will replace the removed chat bubble with a notice telling everyone in the conversation that the message has been removed. But remember, you have up to 10 minutes to remove a message after it is sent. The "Remove for You" option functions the same way the previous Delete option did. Facebook is not the first to offer an "unsend" feature in its chat services; secure messaging app Telegram has allowed its users to remove messages for years. Source
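As a rough illustration of how a time-limited unsend could be enforced, here is a minimal server-side sketch. It is purely hypothetical (the function name and logic are illustrative assumptions, not Messenger's actual implementation): the service compares the removal request's time against the message's send timestamp and only honours "remove for everyone" inside the 10-minute window.

```python
# Hypothetical sketch of a 10-minute "remove for everyone" window.
# Names and logic are illustrative, not Facebook's actual code.
from datetime import datetime, timedelta

UNSEND_WINDOW = timedelta(minutes=10)

def can_unsend_for_everyone(sent_at: datetime, now: datetime) -> bool:
    """Allow global removal only while the request falls inside the
    window measured from the message's original send time."""
    return now - sent_at <= UNSEND_WINDOW

sent = datetime(2019, 2, 5, 12, 0, 0)
print(can_unsend_for_everyone(sent, sent + timedelta(minutes=9)))   # True
print(can_unsend_for_everyone(sent, sent + timedelta(minutes=11)))  # False
```

After the window closes, a client would fall back to offering only the local "Remove for You" behaviour, which never needed a server-side deadline.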
  15. Photographer Kristen Pierson Reilly has filed a lawsuit against Facebook for failing to respond properly to a DMCA notice. The social network refused to remove a copy of her photo, stating that it wasn't clear whether its use was infringing. In a complaint filed in a federal court in New York, Pierson now demands compensation for the damage she suffered. Every day millions of people post photos online, without approval from the rightsholder. This is particularly prevalent on social media platforms such as Facebook. Many photographers don’t have the time or resources to go after these types of infringements, but some are clearly drawing a line in the sand. This week, photographer Kristen Pierson filed a complaint against Facebook at a New York District Court. Pierson accuses the social media platform of hosting and displaying one of her works without permission. Normally these issues are resolved with a DMCA takedown notice, but in this case that didn’t work. Last year, Pierson noticed that the Facebook account “Trusted Tech Tips” had used one of her works, a photo of Rhode Island politician Robert Nardolillo, without permission. When she asked Facebook to remove it, the company chose to leave it up instead. “Hi-, Thanks for your report. Based on the information you’ve provided, it is not clear that the content you’ve reported infringes your copyright,” the Facebook representative wrote in reply. “It appears that the content you reported is being used for the purposes of commentary or criticism. For this reason, we are unable to act on your report at this time.” The takedown notice was sent in March last year and the post in question remains online at the time of writing, with the photo included. This prompted Pierson to file a complaint at a New York Federal Court this week accusing Facebook of copyright infringement. 
According to the Rhode Island-based photographer, Facebook failed to comply with the takedown request and can’t rely on its safe harbor protection. “Facebook did not comply with the DMCA procedure on taking the Photograph down. As a result, Facebook is not protected under the DMCA safe harbor as it failed to take down the Photograph from the Website,” the complaint reads.

(Exhibit D of the complaint shows the ‘infringing’ post.)

The short five-page complaint accuses Facebook of copyright infringement and Pierson requests compensation for the damages she suffered. “Facebook infringed Plaintiff’s copyright in the Photograph by reproducing and publicly displaying the Photograph on the Website. Facebook is not, and has never been, licensed or otherwise authorized to reproduce, publically display, distribute and/or use the Photograph,” it reads. The photographer is not new to these types of lawsuits. She has filed similar cases against other outlets such as Twitter. The latter case was eventually dismissed, likely after both parties reached an agreement. In the present case, Pierson requests a trial by jury but it wouldn’t be a surprise if this matter is settled behind closed doors, away from the public eye. A copy of the complaint against Facebook is available here (pdf). Original Article.
  16. Facebook paid teenagers to mine device data Facebook denies it specifically targeted teens with its research programme Facebook is halting a scheme that gathered highly personal data from paid volunteers, after it was exposed. TechCrunch said participants - including those aged 13-17 - had been paid up to $20 (£15.30) a month to open up their phones to deep analysis. Apple has said Facebook misused its privileges to distribute the app involved. The iPhone-maker has now restricted Facebook's ability to issue iOS apps that are not listed on its App Store. This will disrupt the social network's ability to distribute test versions of its software among staff and for the employees to run apps designed for their exclusive use, which are used to do things such as book transportation. This could add to tensions between the two companies. A spokeswoman for the social network was unable to say whether it ran the programme in the UK or other countries outside the US. TechCrunch reported that Facebook used social media ads to target teenagers for the scheme. Facebook denies this. Personal data The app had the potential to provide Facebook with "nearly limitless access" to a user's device, including:

- the contents of private messages in chat apps, including photos and videos
- emails
- web browsing activity
- logs of what apps were installed, and when they were used
- a location history of where the owner had physically been
- data usage

In addition, TechCrunch reported that users were asked to provide screenshots of their Amazon orders. When the BBC visited one of the sign-up pages, it stated that Facebook would use the information to improve its services. It added that "there are some instances when we will collect this information even where the app uses encryption, or from within secure browser sessions". It added that participants had to agree not to disclose "any information about this project to third parties". 
The social network said everyone involved in the programme had consented, and that market research was standard practice. However, in the hours after TechCrunch's report was published, Facebook said it would end the programme on Apple devices. It has not, however, suspended a parallel effort on Android. The research focused on users aged 13-35, and those under 18 were asked to get signed parental consent, Facebook said. (Facebook remains the most popular social media app among UK 12-to-15-year-olds, according to a recent Ofcom report.) However, when the BBC identified itself as a 14-year-old boy during its test, it was able to download the app without any parental consent being sought. A page did state, however, that users should be over the age of 18. A reporter from BuzzFeed News tried signing up via an alternative registration page, where obtaining parental consent involved sharing an email address and clicking a tick box. He said this form did not mention Facebook by name. 'Not spying' In a statement, Facebook took issue with TechCrunch's characterisation of the programme. "Key facts about this market research programme are being ignored," a spokeswoman said via email. "Despite early reports, there was nothing 'secret' about this; it was literally called the Facebook Research App. It wasn't 'spying' as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate. "Finally, less than 5% of the people who chose to participate in this market research program were teens. All of them with signed parental consent forms." When asked by the BBC how exactly the parental consent was obtained, Facebook said it was handled by a third party and did not elaborate. Apple has accused Facebook of abusing a system designed to distribute software to staff to carry out the scheme. 
"We designed our Enterprise Developer Programme solely for the internal distribution of apps within an organisation," it said in a statement. "Facebook has been using their membership to distribute a data-collecting app to consumers, which is a clear breach of their agreement with Apple. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data." Facebook has yet to respond to the punishment. There is also an app for devices running on Android, distributed outside of Google's Play Store. TechCrunch’s detailed report explained Facebook had previously conducted market research using a virtual private network (VPN) app called Onavo, which it acquired in 2013. Internal documents, published online in December, revealed Facebook had used the data gathered to decide to take over WhatsApp and to track usage of rivals including Snapchat and Twitter's former video service Vine. But in August last year, Facebook removed the app from the App Store after Apple complained that it violated its data-collection rules. Not unusual However, Facebook had another research app that it had been running since 2016. It circumvented Apple’s App Store by using testing tools typically used to install software that is still in development. The app installed a “root certificate”, which enabled deeper access to a phone’s software including functions not reachable by typical apps. Apple allows the installation of root certificates in narrow cases, such as for companies that provide employees with iPhones but want to install internal apps, monitoring capabilities and extra security. But Apple’s Developer Enterprise Program License Agreement makes it clear that these certificates must only be used for “specific business purposes” and “only for use by your employees”. 
There are scenarios that allow exceptions to the rule, the policy goes on to say. But market research is not one of them. Facebook had earlier insisted its market research policies were not unusual. "Like many companies, we invite people to participate in research that helps us identify things we can be doing better,” it said. "Since this research is aimed at helping Facebook understand how people use their mobile devices, we've provided extensive information about the type of data we collect and how they can participate. "We don't share this information with others and people can stop participating at any time." Source
  17. Facebook Inc has removed hundreds of Indonesian accounts, pages and groups from its social network after discovering they were linked to an online group accused of spreading hate speech and fake news. Indonesian police uncovered the existence of the group, called Saracen, in 2016 and arrested three of its members on suspicion of being part of a syndicate being paid to spread incendiary material online through social media. “These accounts and pages were actively working to conceal what they were doing and were linked to the Saracen Group, an online syndicate in Indonesia,” Nathaniel Gleicher, Facebook’s head of Cybersecurity Policy, said on Friday. “They have been using deceptive messaging and... networks of concealed pages and accounts to drive often divisive narratives over key issues of public debates in Indonesia,” Gleicher told Reuters in an interview. The world’s largest social network has been under pressure from regulators around the globe to fight the spread of misinformation on its platform. In January, it announced two new regional operations centers focused on monitoring election-related content in its Dublin and Singapore offices. Indonesia is currently in the run-up to a presidential election set to take place in April, with internet watchdogs flagging the impact of fake news as a concern. Indonesia is estimated to be Facebook’s third largest market, with over 100 million users. Indonesia’s police cyber crime unit has previously told Reuters that Saracen was posting material involving religious and ethnic issues, as well as fake news and posts that defamed government officials. The country has an ethnically diverse population of 260 million people, with a big majority of Muslims but with significant religious minorities, and ensuring unity across the archipelago has been a priority of governments. 
Gleicher said Facebook’s investigation found Saracen agents would target and compromise accounts, but stressed the removal of the accounts was due to “coordinated deceptive behavior (by Saracen)... not due to the content they had shared”. The pages and accounts deleted had 170,000 followers on Facebook and more than 65,000 on Instagram, but the reach of the people exposed to the content is believed to be higher. Police alleged there were financial links between Saracen and a handful of organizers of 2016 protests against the former governor of Jakarta, who was condemned for blasphemy after a doctored video of supposed anti-Islam comments went viral. However, the Indonesian supreme court ruled in April 2018 that Saracen had not been guilty of spreading hate speech and that the police’s case could not be proven. A national police spokesman said they were continuing to monitor Saracen’s social media activity and would ask Facebook for the data from their investigation. A lawyer for Jasriadi, who prosecutors allege was one of the masterminds of the Indonesian syndicate, said “that based on the facts of the case and our hearing, there remains no evidence that Saracen exists”. Source
  18. Facebook removes nearly 800 fake pages and accounts traced to Iran KEY POINTS

- Facebook on Thursday announced it removed 783 pages, groups and accounts with ties to Iran.
- The pages and accounts were used to promote Iranian propaganda.
- The accounts and pages had activity in various regions, including the U.S., Germany and France.

(Pictured: Facebook founder and CEO Mark Zuckerberg arrives to testify following a break during a Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee joint hearing about Facebook on Capitol Hill in Washington, DC.) Facebook on Thursday announced it removed 783 pages, groups and accounts with ties to Iran as part of the company’s continued effort to rid its services of misinformation. The Iranian accounts and pages were used to push propaganda “on topics like Israel-Palestine relations and the conflicts in Syria and Yemen, including the role of the US, Saudi Arabia, and Russia,” Facebook said in a blog post. At least one of the pages had about 2 million followers. Altogether, the accounts spent less than $30,000 on Facebook and Instagram ads, the company said. The accounts and pages also hosted eight events, dating back to May 2014. As many as 210 people expressed interest in attending at least one of the events, Facebook said. Source
  19. (Reuters) - Facebook Inc could be subjected to at least two more state probes in the United States on the alleged mishandling of user data, Bloomberg reported on Thursday. The report, which cited people familiar with the matter, said Pennsylvania Attorney General Josh Shapiro and his Illinois counterpart Kwame Raoul have joined forces with Connecticut to focus on investigating existing allegations. The state probes are coalescing into two main groups, the report said. The states of New York, New Jersey and Massachusetts are also probing the social media giant and are seeking to uncover any potential unknown violations. The Illinois attorney general’s office declined to comment on Bloomberg’s report when contacted by Reuters, while the office of the attorney general of Pennsylvania did not immediately respond to a request for comment. Facebook told Reuters in an emailed statement it was having “productive conversations” with attorneys general from a number of states. “Many officials have approached us in a constructive manner, focused on solutions that ensure all companies are protecting people’s information, and we look forward to continuing to work with them,” Facebook’s vice president of state and local public policy, Will Castleberry, said. Facebook and other tech giants have been under pressure for over a year after it was revealed that British consultancy Cambridge Analytica acquired data on millions of U.S. users to target election advertising. That led to heads of several tech companies testifying before Congress last year. Source
  20. Facebook has been no stranger to controversy and scandal over the years, but things have been particularly bad over the last twelve months. The latest troubles find Mozilla complaining to the European Commission about the social network's lack of transparency, particularly when it comes to political advertising. Mozilla's Chief Operating Officer, Denelle Dixon, has penned a missive to Mariya Gabriel, the European Commissioner for Digital Economy and Society. She bemoans the fact that Facebook makes it impossible to conduct analysis of ads, and this in turn prevents Mozilla from offering full transparency to European citizens -- something it sees as important in light of the impending EU elections. Dixon calls on the Commission to raise its concerns with Facebook, and to put pressure on the social network to make its Ad Archive API publicly available. Mozilla believes that the inability to conduct analysis of ads "prevents any developer, researcher, or organization to develop tools, critical insights, and research designed to educate and empower users to understand and therefore resist targeted disinformation campaigns". The letter comes as both Mozilla and the European Commission try to battle fake news and misinformation online. In calling for the API to be made public, Dixon says that "transparency cannot just be on the terms with which the world’s largest, most powerful tech companies are most comfortable". While Mozilla has been in talks with Facebook about the matter, Dixon makes it clear that it has been "unable to identify a path towards meaningful public disclosure of the data needed", hence calling on the Commission for help. Source
  21. This time, Facebook is not apologizing for the latest scandal This time, Facebook didn't say sorry. After it was revealed that the social network had been paying people to install a "Facebook Research" app that revealed their every move on their mobile phones, Facebook was defiant. "There was nothing 'secret' about this," the company said in a statement. "It wasn't 'spying' as all of the people who signed up to participate went through a clear on-boarding process asking for their permission and were paid to participate." But after TechCrunch broke the story, Facebook said it would end the program on Apple devices. Apple, in turn, said it had banned Facebook from having the app in its iOS App Store. The app is nowhere to be found in the Google Play Store either. However, an earlier data collection app by Facebook called Onavo is still available in the Google store. Apple said Facebook was distributing a data-collecting app to consumers that represented "a clear breach of their agreement with Apple." Google's policy says developers "must be transparent in how you handle user data (e.g., information collected from or about a user, including device information). That means disclosing the collection, use, and sharing of the data, and limiting the use of the data to the purposes disclosed, and the consent provided by the user." On Twitter, security expert Will Strafach had a different take from Facebook. He said the social network's moves to get the app to consumers was "the most defiant behavior I have ever seen. It's mind blowing." Facebook's Onavo Protect app analyzes personal info So what's the big deal? Let's explain. Facebook has a history of harvesting information from its users, which it then uses to sell targeted ads. That's how the system works, explained Facebook co-founder Mark Zuckerberg in a recent Wall Street Journal op-ed. 
"Here you get our services for free—and we work separately with advertisers to show you relevant ads. This model can feel opaque, and we’re all distrustful of systems we don’t understand." In 2018, Facebook found itself in hot water when it was revealed that a rogue app developer had passed on personal information from some 87 million Facebook users to the political consulting firm Cambridge Analytica, which had ties to the Donald Trump presidential campaign. The information was gleaned from users who had clicked on a survey app on Facebook. The social network apologized, said Cambridge violated their policies, and changed rules to ensure that this wouldn't happen again. Another scandal hit by the end of the year, when Facebook said accounts of nearly 50 million users had been breached. It apologized again. Data Harvesting The latest Facebook scandal is all about data harvesting, which is what Facebook looked to do with the Facebook Research app.It was billed as a way to learn more about how people use their data. The app gave Facebook "unlimited power to eavesdrop on everything that happened on the targeted users phones," says Bennett Cyphers, a staff technologist with the advocacy group the Electronic Frontier Foundation. " The Facebook Research app was a virtual private network, a way to have private browsing. A VPN can intercept the data on your phone and go direct to Apple and Google servers. Apple had removed the earlier app, Facebook Protect Onavo, from the iOS App Store due to privacy concerns. The app is still available for Android phones on the Google Play Store. Apple's policy says that app developers are not allowed to offer apps "for the purposes of analytics or advertising/marketing." The workaround for Facebook was installing what's called an “enterprise developer certificate," and that's used by developers to make apps for internal use, without publishing them to the App Store. 
"We designed our Enterprise Developer Program solely for the internal distribution of apps within an organization," Apple said in a statement. "Any developer using their enterprise certificates to distribute apps to consumers will have their certificates revoked, which is what we did in this case to protect our users and their data." Jeremiah Owyang, an analyst with Kaleido Insights, says the latest scandal is small potatoes compared to past Facebook transgressions. The information collected is the "same information any marketer wants to know," he says. "How people use their apps, where else they go on their phones. That's not a bad thing. They want to understand how we interact beyond Facebook and how we interact with other chat programs." However, Cyphers says it's a "really big deal," as witnessed by Apple booting Facebook out of the app developer enterprise program. "Apple has taken a drastic step quickly by kicking Facebook out," he says. "It will make it more difficult for Facebook to build and test all their apps in the App store. Hopefully this will put more fuel to the fires about regulating these tech giants." Jim Steyer, the CEO of advocacy group Common Sense Media, put out a statement attacking Facebook. “Once again, Facebook has been exposed for putting profits before people. The company’s manipulative tactics and desire to gather every waking thought about its users at any cost is unacceptable." Meanwhile, the back and forth between Facebook and Apple is all about corporate politics, says Owyang. Apple itself fell into hot privacy water this week when it was revealed that a bug in the FaceTime video calling app could eavesdrop on your conversations even if you don't answer the call. Apple has since disabled the app's group chat function. Source
  22. Facebook confirmed it has hired three of its fiercest critics after two years of growing privacy scandals.

The company announced Tuesday that it hired Robyn Greene, an attorney for the Open Technology Institute; Nathan White of Access Now; and Nate Cardozo, formerly a senior staff attorney at the Electronic Frontier Foundation. Greene and White will work out of Facebook’s Washington, DC office, while Cardozo will be based at the company's headquarters in Menlo Park, California, as privacy policy manager for WhatsApp.

“We think it’s important to bring in new perspectives to the privacy team at Facebook, including people who can look at our products, policies and processes with a critical eye,” said Facebook’s Rob Sherman, deputy chief privacy officer, in a statement to Business Insider. “We know that we have a lot of work to do not only to restore people’s trust in Facebook, but also to improve their privacy experiences,” he said.

Cardozo wrote in a 2015 op-ed in The Mercury News that Facebook’s business model “depends on deception and apathy” and that the company’s responses to its privacy scandals are “disingenuous and fundamentally unfair.” 
Greene responded to the Cambridge Analytica scandal by saying “Facebook basically pimped out its users,” arguing that the company covered up “corporate malfeasance.” She joins Facebook’s privacy policy team as a manager on law enforcement access and data protection issues.

“We hope that the new hires we are making will challenge us to build better approaches to privacy in the future and we’re excited to have them onboard,” said Sherman.

The move received applause from some privacy activists on social media, who expressed optimism for Facebook’s future and its willingness to listen to critics. Source
  23. Senators to Facebook: We want answers on data-collecting app

The social media giant is under fire for getting teens to download a tracking app.

After a year of asking Facebook to explain its approach to privacy, at least two US senators still have questions. Sen. Mark Warner, a Democrat from Virginia, and Sen. Edward Markey, a Democrat from Massachusetts, each took the company to task Wednesday in light of reports that it paid teens as young as 13 to download an app that could track their every action on their phones. In separate press releases, the senators pointed to growing frustration with the social media giant for not being clear about its data collection practices.

For his part, Warner said he is drafting a bill to require major companies to get informed consent from users whose data becomes the target of market research. "I have concerns that users were not appropriately informed about the extent of Facebook's data-gathering and the commercial purposes of this data collection," Warner wrote in a letter to Facebook CEO Mark Zuckerberg.

Facebook didn't immediately respond to a request for comment. But a spokesperson told CNET earlier Wednesday that Facebook didn't share the data it collected and that users knew what they were signing up for. Less than 5 percent of the users were teens, and all of them signed parental consent forms, the spokesperson said.

Warner also asked Facebook for a fuller accounting of how it came to approve the apps, which had deep access to track activity on mobile phones, including apps that Facebook was competing with. Markey took issue with the ethics of targeting teens with an offer they might not have the maturity to assess. 
"It is inherently manipulative to offer teens money in exchange for their personal information when younger users don't have a clear understanding how much data they're handing over and how sensitive it is," he said. Source
  24. Facebook users flock to the social network even as scandals grow

CEO Mark Zuckerberg also oversaw a jump in sales and profit.

Facebook said Wednesday it continued to gain users, a bit of good news for the scandal-plagued social network, which started the day with a fight with iPhone maker Apple. Monthly active users rose 9 percent year over year to 2.32 billion in the fourth quarter, which ended on Dec. 31. User growth is crucial for Facebook, which makes its money by selling advertising targeted to user interests.

The rise in users comes as the company faced yet another black eye. Overnight, Apple blocked a research app the social network was distributing to iPhone users after it was discovered Facebook had sidestepped the review process. The Facebook Research app paid users between the ages of 13 and 35 up to $20 per month in exchange for letting Facebook access their phone and web activity, including personal messages.

The scrape with Apple comes as Facebook, which still posted a 61 percent rise in profits, faces the most serious crisis in its 15-year history. The leadership of CEO Mark Zuckerberg and COO Sheryl Sandberg has been called into question, particularly after The New York Times reported the executives ignored warnings and deflected blame as scandals mounted. The company has also earned criticism for reportedly giving other tech companies greater access to user data than was previously disclosed. And Facebook's image wasn't helped by a software bug that exposed the photos of up to 6.8 million people to third-party app developers.

On Wednesday, Facebook reported that its revenue grew by 30 percent to $16.9 billion in the fourth quarter, beating the $16.3 billion that analysts surveyed by Thomson Reuters expected. Source
  25. The ACLU is suing a California sheriff for blocking activists on Facebook

President Trump was caught up in a similar lawsuit last year

The American Civil Liberties Union (ACLU) Foundation of Northern California filed a lawsuit on Wednesday accusing a Sacramento sheriff of unlawfully blocking Black Lives Matter leaders from his official Facebook page. According to the ACLU, two Black Lives Matter Sacramento leaders were blocked by Sheriff Scott Jones on Facebook after Jones refused to investigate the death of Mikel McIntyre, who was killed by Sacramento deputies in 2017. This past fall, Jones posted on his official Facebook page to seek support, but was met with criticism, which prompted him to block BLM leaders Tanya Faison and Sonia Lewis.

When a page blocks someone on a social media platform, the blocked user is no longer able to view or interact with posts on that page. Because the page in question was operated by the sheriff, a government official, the block raises unique constitutional issues.

“The sheriff’s decision to silence them based on their views violates their free speech rights, undermines public trust of government, and offends democratic values,” ACLU senior staff attorney Sean Riordan said. “Free speech must be protected from government censorship on social media just as it is in a public meeting or any forum where people debate politics, religion, and other social issues. The methods may change but the protections of the Constitution don’t.”

The ACLU is asking the courts to declare Jones’ actions unconstitutional, award damages, and issue an injunction requiring him to unblock Faison and Lewis. A number of lawsuits have been filed in states like Maine and Maryland on behalf of users accusing public officials who blocked them of hindering their First Amendment free speech rights. There is precedent as well. 
Last May, a federal judge in New York ruled that President Trump’s tweets were part of a public forum and that by blocking people, he had violated their free speech rights under the Constitution. “This case is about ensuring that every voice is heard,” said John Heller of Rogers Joseph O’Donnell law firm. “The First Amendment requires no less.” Source