Search the Community

Showing results for tags 'facebook'.



Found 262 results

  1. Social networks to be forced to remove hate speech in EU The European Union has had it with hate speech and plans to force Facebook, YouTube and other social networks to remove toxic content if they won't do it voluntarily. This week, the European Union approved a proposal to implement the first mandatory social media restrictions. Facebook, YouTube and Twitter are just some of the companies that would have to comply with these new restrictions, which would see them blocking videos that incite hatred or promote terrorism. While the proposal has passed with ease, the new regulations need to reach a final form before passing the European Parliament in order to become official. The European Union has a pretty standard approach to social media and people's freedom of speech. With everything that has happened in recent years, however, it seems they're starting to draw a line at hate speech. The regulations the EU plans to impose on social media sites apply to recorded videos and not live streamed content, so, at the very least, they're being realistic about what companies can and cannot do when reacting to content shared by billions of users. On the other hand, this will be seen as an effort to put restrictions on people's freedom of speech. Unkept promises For their part, Facebook, Twitter, YouTube and other companies have already embarked on an effort to clean up their platforms from exactly this type of content. Facebook, for instance, has announced it will hire a few thousand people to help review content being shared on the platform. The EU believes, however, that all these companies have failed to deliver on a promise to remove content flagged as hateful within 24 hours, which is part of why it is working on new legislation. While the promises were nice, the EU wants to see actual action being taken. Source
  2. Google, Facebook and Twitter sued for San Bernardino terrorist attack Social media companies Facebook and Twitter, as well as Google, are being sued for allegedly enabling ISIS to spread its extremist messages ahead of the San Bernardino attack of 2015. The families of three victims are behind the lawsuit, which claims that these companies aided and abetted the terrorist attacks and are, therefore, liable for wrongful death, reports the Los Angeles Times. "Even if Farook and Malik had never been directly in contact with ISIS, ISIS' use of social media directly influenced their actions on the day of the San Bernadino massacre," reads the lawsuit, referring to Syed Rizwan Farook and Tashfeen Malik. The two were known ISIS supporters and pledged their allegiance to the group on Facebook ahead of the attack. The main idea behind the lawsuit is that because Facebook, Twitter and Google's YouTube allow everyone, including ISIS members, to post on their platforms, they are somehow at fault for indoctrinating the couple. A flawed and dangerous idea This seems like the type of lawsuit that will get thrown out quite quickly, mainly because there are billions of people using social media. Facebook is heading towards the 2 billion user milestone, Twitter has over 315 million monthly users, and Google is probably used by most people with Internet access, except in countries where similar local tools are available and promoted, like China. YouTube, for its part, doesn't necessarily report a number of viewers, but it does release the number of hours watched by users every month - 3.25 billion. Among the billions of people who use three of the most popular tools on the Internet, there are some bad seeds, of course, including ISIS members and other extremists. Most users are aware of ISIS and haven't gone over to the dark side just because the group exists and promotes its content online.
And for the record, ISIS content does get removed and its accounts shut down. The companies, logically, deny liability and say that it's a tenuous and potentially very dangerous chain of blame that led to them being sued. Basically, any social network could be blamed for terrorism around the world simply because the attackers may have had the smallest connection to the platform. That idea is deeply flawed. Source
  3. Facebook fights fake news, again Now that the French election is a thing of the past, Facebook is taking it upon itself to continue the fight against fake news in another European country - the United Kingdom. The social network is taking some precautionary steps ahead of the June general election in the UK by buying ad space in British newspapers and printing an anti-fake-news leaflet. Facebook took out ads in several major newspapers, including The Times, the Daily Telegraph and The Guardian. The ads list ten things that users should look out for when deciding whether the information they read is genuine. Among the tips the social network is sharing with users are checking the headlines, URLs, photos and even the dates of articles, since old articles are often re-shared months or years later and taken by readers as if they were brand new. The company also advises people to investigate the source, watch for unusual formatting, check the evidence, look at other reports and, quite importantly, figure out whether the story is a joke in the first place. Many times over the years, articles from satire sites have been taken seriously by people without a second thought. A growing problem The fake news issue has been growing over the past few years, and it certainly reached its peak during the US presidential election last year. Since then, Facebook has taken numerous steps to fight this problem on its network, including removing fake accounts responsible for spreading such articles. Ahead of the French elections, the company announced it had shut down 30,000 such accounts. "People want to see accurate information on Facebook and so do we. That is why we are doing everything we can to tackle the problem of false news," said Simon Milner, Facebook's director of policy for the UK.
The company admitted that it isn't going to solve the problem on its own, which is why it started working with third-party fact-checkers during the elections so they can independently assess facts and stories. Source
  4. Facebook's marketing department is using algorithms to identify emotionally vulnerable and insecure youth as young as 14, The Australian reported today after reporters managed to get their hands on a 23-page report from Facebook's Australian office. The document, dated this year and flagged as "Confidential: Internal Only," presents Facebook's advertising capabilities and highlights the social network's ability to detect, determine, and categorize user moods using sophisticated algorithms. The leaked file, authored by two of Facebook Australia's top execs, is a presentation that the company is willing to share with potential customers only under a non-disclosure agreement. Facebook using algorithms to categorize emotional states In it, Facebook reveals that by monitoring posts, photos, and interactions, it can approximate a person's emotional state into categories such as "silly," "overwhelmed," "anxious," "nervous," "defeated," "stressed," "stupid," "useless," or a "failure." Facebook claims that it can identify not only emotions and states of mind, but also how and when these change in real time and across periods of time. While this was to be expected from a company as advanced as Facebook, the document reveals the social network won't shy away from using its algorithms on youth as young as 14, making the resulting data available to advertisers so they can target teens who feel insecure or are emotionally vulnerable. The social network is using these points to lure advertisers to its network, suggesting it could help target users, including young teens, in their most vulnerable states, when most people tend to buy products to make themselves feel better. The document specifically mentions Facebook's ability to target over 6.4 million "high schoolers," "tertiary students," and "young Australians and New Zealander [...] in the workforce."
Facebook confirmed validity of leaked document Contacted by reporters, Facebook admitted the document was real, apologized, and said it would start an investigation into the practice of targeting its younger userbase. Current privacy laws allow companies to collect data on users if it is anonymized and stripped of any personally identifiable information, such as names, precise addresses, or photos. Facebook said it respects privacy laws, but reporters said Facebook is in breach of the Australian Code for Advertising & Marketing Communications to Children, an advertising guideline which states that advertisers must get permission from a child's parent or guardian before collecting any data. Facebook, which currently boasts a total monthly active userbase of over 1.85 billion, is the second-largest online advertiser behind Google. A recent report revealed that Google and Facebook are cannibalizing 99% of all the money in digital advertising and have established a de facto duopoly. In 2014, news broke that Facebook had meddled with people's news feed algorithms to test whether it could influence people's moods. Source
  5. A 20-year-old Thai man streamed the murder of his 11-month-old daughter on Facebook Live on Monday, in another appalling case where Facebook's live streaming service has been used for exactly what Facebook never intended. According to Thai news media, the man, named Wuttisan Wongtalay, murdered his daughter on the rooftop of an abandoned hotel in the Thai town of Phuket. Facebook took nearly a day to take down the videos He streamed the murder in two video feeds started on Monday, at 16:50 and 16:57, local time (09:50 and 09:57 GMT). The videos show the father tying a rope around his daughter's neck and dropping her off the side of the hotel. Following his heinous act, the father took his own life, but he didn't stream his own death. Thai authorities said the suspect had had a fight with his wife and acted in desperation, believing his wife didn't love him anymore and wanted to leave him. The live stream was converted into a video and hosted on the father's Facebook profile, where it remained for almost a full day, as Facebook failed to act on numerous user reports. The video was finally taken down on Tuesday, after Thai police and ministry officials reached out to the company. Nonetheless, by that time the video had been copied and uploaded to several other platforms, such as YouTube and DailyMotion. Live feature is turning into Facebook's biggest nightmare Facebook's tardy response comes after the company failed to act on the murder of an elderly man in Cleveland earlier this month, an event also streamed on the platform. In March, a group of teenagers used Facebook Live to broadcast the sexual assault of a 15-year-old. Facebook Live was also used to stream the brutal beating of a Trump supporter in January. Also in January, three men from Uppsala, Sweden, streamed another sexual assault using Facebook's platform. The three men are now in custody. These are only the major incidents that took place this year.
Multiple incidents took place in 2016, and in all of them the common factor was that Facebook's staff failed to act in due time, leaving videos online for hours despite reports from users. Following the Cleveland incident, Facebook promised to improve its live video monitoring and review systems. The recent wave of live stream incidents, coupled with the flood of fake news stories, is currently Facebook's biggest problem. On a side note, Facebook appears to be losing the battle to establish itself as a source of reliable news. Big news agencies have already dropped the company's Instant Articles service, while others accused the company of intentionally burying their articles. All this happens while governments in Europe are threatening the company with fines over the increasing number of fake news articles shared on its network. The next few months will be crucial for Facebook's future. Source
  6. Facebook fights against fake news Facebook is taking its fight against fake news seriously, it seems. After releasing a number of tools to help fight this plague, the company took a more proactive approach and took down some 30,000 fake accounts linked to France ahead of the presidential elections. According to a statement released by the company, Facebook is trying to "reduce the spread of material generated through inauthentic activity, including spam, misinformation, or other deceptive content that is often shared by creators of fake accounts." The company explained in a blog post that there have been many changes brought to the platform lately, including some that run "under the hood." Some additional improvements that were recently made help detect fake accounts on Facebook more effectively, including those that are hard to spot. "We've made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity - without assessing the content itself. For example, our systems may detect repeated posting of the same content, or an increase in messages sent," the company explains. Facebook admits that these changes will not result in the removal of every fake account, but as time goes by, their effectiveness will grow. The priority, for now, is to remove the accounts with the largest footprint: those with a high amount of activity and a broad reach. The game is on For its part, Facebook seems to be putting in a lot of work to stop the distribution of misinformation, as the company puts it, as well as spam and false news. "We've found that a lot of false news is financially motivated, and as part of our work to promote an informed society, we have focused on making it very difficult for dishonest people to exploit our platform or profit financially from false news sites using Facebook." In the past few weeks, Facebook has released a list of tips to help users spot fake news and has signed up for the News Integrity Initiative.
Recently, World Wide Web creator Sir Tim Berners-Lee said that Facebook and Google have a lot to do to tackle this problem and, unfortunately for them, they're the ones that should do it because so many people use them. Both companies have done a lot to this end and will continue to work in this direction. Source
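The pattern-based detection Facebook describes in its statement - flagging repeated posting of identical content or a spike in messages sent, without assessing the content itself - can be illustrated with a simple heuristic. This is a hypothetical sketch for illustration only; Facebook's actual systems are not public, and every threshold and function name here is an assumption:

```python
import hashlib
from collections import Counter

def flag_suspicious(posts, daily_message_counts,
                    dup_threshold=5, spike_factor=3.0):
    """Content-blind fake-account signals, as the article describes:
    - the same content posted many times (compared by hash only)
    - a sudden spike in messages sent versus the account's own baseline
    Thresholds are illustrative assumptions, not Facebook's values.
    """
    reasons = []

    # Repeated posting of identical content: hash each post body so the
    # check never has to inspect or retain the text itself.
    digests = Counter(hashlib.sha256(p.encode()).hexdigest() for p in posts)
    if digests and max(digests.values()) >= dup_threshold:
        reasons.append("repeated identical content")

    # Message-volume spike: compare today's count to the trailing average.
    if len(daily_message_counts) >= 2:
        *history, today = daily_message_counts
        baseline = sum(history) / len(history)
        if baseline > 0 and today >= spike_factor * baseline:
            reasons.append("message volume spike")

    return reasons
```

The key property, matching Facebook's claim, is that neither signal reads the content: one compares opaque digests, the other counts events over time.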
  7. Facebook has issued a statement after a video showing a fatal shooting was uploaded onto the social media network by the alleged murderer. Cleveland Police say that Steve Stephens broadcast the killing of an unidentified elderly man on Facebook on Sunday evening and is the target of a manhunt as of this writing. [Update: The victim has been identified as 74-year-old Robert Godwin Sr. Facebook issued a statement clarifying that the shooting was uploaded by Stephens after the murder, not broadcast on Facebook Live.] Stephens also posted two more videos in which he claimed to have committed other murders and said he was going to “kill as many people as I can,” before his account was shut down by Facebook. In a statement to journalists, a company spokesperson said, “This is a horrific crime and we do not allow this kind of content on Facebook. We work hard to keep a safe environment on Facebook, and are in touch with law enforcement in emergencies when there are direct threats to physical safety.” Though Facebook’s policy prohibits content that glorifies or incites violence, that rule is inherently difficult to enforce on a social media platform that encourages its users to post photos and videos in real time or soon after they are taken. Facebook Live launched to all users almost exactly one year ago, and while the majority of videos are innocuous, the feature has broadcast, both accidentally and on purpose, heinous acts of violence. These include the shooting of a toddler, the torture of a teenager with special needs and sexual assaults in Chicago and Sweden. The Chicago case prompted questions about whether people who watch crimes live but don’t report them can be legally charged and which jurisdictions are responsible. Furthermore, once media has been put on platforms like Facebook, Twitter and YouTube, it’s easy for other users to save and re-share it.
This means victims and families are forced to re-experience the trauma, which is an especially insidious problem in cases where livestreaming was arguably used by perpetrators as a psychological weapon. Source
  8. Horrifying: Facebook Users Are Reporting Getting Friend Requests From The Dead Back from the dead: Facebook users getting friend requests from deceased family members and friends What happens when you get a friend request on Facebook from someone you know - but that someone is no longer alive? You would be stunned, petrified, scared. That is what is happening to some Facebook users, who have reported getting friend requests from dead friends and family members. Dealing with a loved one's death and handling their Facebook account afterwards is hard enough in itself, and such friend requests only cause more grief. What is more worrying, however, is that cyber-criminals and scammers are using the social media platform to trick people in order to steal money from them or run other frauds. So, how does this whole thing work? Basically, such friend requests are likely the result of cloning or hacking scams. The first method involves cloning someone's account (in this case, the profile of the deceased) and copying all the information in that profile, which is then used to set up a new account that is actually controlled by someone else. The other method involves hacking into a deceased person's Facebook account and taking control of it. In both cases, the scammers have complete hold of the account, which allows them to send messages while pretending to be someone's friend. The scammer then sends friend requests to the friends of the account they cloned or hacked into, in the hope that a number of them will accept, believing that the friend has either created a new account or that they were accidentally deleted and are being re-added. Once an invitation has been accepted, the scammer can see information on that account.
The scammer can then carry out various kinds of scams, hoaxes and cons on the person who accepted the friend request. For example, there is the "friend in crisis" scam, where the scammer claims to be stuck somewhere and to need money to get out of a problem. Or the fake account may be used to send users links to malicious websites that attempt to install malware onto their computer when visited. Or victims may be sent to a survey scam, which gathers personal information by luring them into completing intrusive questionnaires. Or the fake account may be used to monitor a person's statuses and other information in order to impersonate them or steal from them. These tricks and scams work against the accounts of living Facebook users too. If you come across such a horrifying friend request on Facebook, please submit a request to have the dead person's account either memorialized (so that it will still be visible on Facebook, but no one will be able to log into it) or deactivated. This request can be made by filling in the contact form given here. You can also find more information on how to report a deceased person or memorialize an account on Facebook by clicking here. Source
  9. Used An iPhone And Social Media Pre-2013? You May Be Due A Tiny Payout Twitter, Instagram, and others are stumping up $5.3m to settle a privacy suit with implications for those who used social-media apps on an iPhone in 2012 or earlier. Given the millions who downloaded the social-media apps in question, it's likely the settlement will result in a very small payment for each individual. Eight social-media firms, including Twitter and Instagram, have agreed to pay $5.3m to settle a lawsuit over their use of Apple's Find Friends feature in iOS. The main problem that complainants had with the accused firms was that their apps, which used Apple's Find Friends, didn't tell users that their contact lists would be uploaded to company servers. The lawsuit alleged the privacy incursions occurred between 2009 and 2012, the year the class action suit began. Instagram, Foursquare, Kik, Gowalla, Foodspotting, Yelp, Twitter, and Path have agreed to pay into the settlement fund, which will be distributed to affected users via Amazon.com, according to VentureBeat. Yelp had previously argued it was necessary to store user contact lists to enable the Find Friends feature, which consumers understood would occur in the context of using a mobile app. However, US District Judge Jon Tigar countered that the key question was whether Apple and app developers "violated community norms of privacy" by exceeding what people reasonably believe they consented to. "A 'reasonable' expectation of privacy is an objective entitlement founded on broadly based and widely accepted community norms," said Tigar. If the judge approves the settlement, Apple and LinkedIn would be the only remaining defendants among the 18 firms originally accused of the privacy violation. People who took part in the class action suit could receive up to $15,000 each. Source
  10. Facebook has a new tool to help victims of revenge porn Facebook is rolling out a new tool to help victims of revenge porn across its platforms, including Messenger and Instagram. The new tools are meant to help people when intimate images of them are shared over Facebook without their permission. Now, when cases of revenge porn are reported to the company, it can prevent the footage from being shared on all of its platforms. The company's announcement refers to a study of US victims of non-consensual intimate images: 93% of them reported significant emotional distress and 82% reported significant impairment in social, occupational or other important areas of life. In short, spreading such pictures or videos of people without their consent has a great impact on their lives. How does it work? The new tool isn't overly complicated. If you see an intimate image on Facebook that looks like it was shared without permission, you can report it by using the "report" link that pops up when you tap on the downward arrow or the "..." sign next to a post. Facebook's Community Operations team will review the image and remove it if it violates the Community Standards. "In most cases, we will also disable the account for sharing intimate images without permission. We offer an appeals process if someone believes an image was taken down in error," the announcement reads. Up until this point, there's nothing really new about what the company is doing. From here on out, however, things are different. Facebook will put its photo-matching technologies to work to help thwart further attempts to share the image on Facebook, Messenger, and Instagram. If someone tries to share the image after it's been reported and removed, they will receive an alert telling them that it violates the company's policies and that their attempt to post the image or video was stopped.
"These tools, developed in partnership with safety experts, are one example of the potential technology has to help keep people safe. Facebook is in a unique position to prevent harm, one of our five areas of focus as we help build a global community," Facebook notes. The company has worked with the Cyber Civil Rights Initiative, as well as other companies, to create a one-stop destination for victims and others to report this content to the major technology companies. Others have also helped shape Facebook's new tool - the National Network to End Domestic Violence, the Center for Social Research, the Revenge Porn Helpline (UK) and the Cyber Civil Rights Initiative. Source
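Photo-matching systems of this kind generally work by fingerprinting a reported image and checking every new upload against the stored fingerprints. The minimal sketch below uses an exact cryptographic hash purely for illustration; Facebook has not published its implementation, and a production system would use a perceptual hash that also survives resizing and re-encoding, so the class and method names here are assumptions:

```python
import hashlib

class ImageBlocklist:
    """Toy model of hash-based re-upload blocking: store fingerprints of
    reported images rather than the images themselves, and compare new
    uploads against those fingerprints."""

    def __init__(self):
        self._fingerprints = set()

    def report(self, image_bytes: bytes) -> None:
        # Retain only a digest of the removed image, not the image.
        self._fingerprints.add(hashlib.sha256(image_bytes).hexdigest())

    def is_blocked(self, image_bytes: bytes) -> bool:
        # An exact hash catches byte-identical re-uploads; a perceptual
        # hash would additionally catch resized or re-encoded copies.
        return hashlib.sha256(image_bytes).hexdigest() in self._fingerprints
```

Storing fingerprints instead of the images themselves is the design point worth noting: the platform can block re-uploads without keeping a copy of the abusive content.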
  11. Facebook has a new plan to make you trust real news First, Facebook started off by saying it wants to fight against fake news. Now, the social network wants to help people trust real news again. How does it plan to do that? Well, it starts by launching a $14 million program called the "News Integrity Initiative." In collaboration with the likes of Mozilla, Craig Newmark (Craigslist founder), the Knight Foundation, the Tow Foundation, the City University of New York and others, Facebook has begun working on this new project, which deepens its involvement in the news. The company has spent many years denying that it has anything to do with the news and saying it cannot be categorized as such. On the other hand, it seems to be coming to terms with the fact that people will share news over Facebook - so much, in fact, that it has become the most dominant platform for news. Tim Berners-Lee, the inventor of the World Wide Web, even went on to say that Facebook and Google should be the ones leading the fight against fake news due to the size of the audiences they reach. Lots of pressure on Facebook Nonetheless, Facebook has been heavily criticized for what has happened on its platform in recent years, particularly as false information runs riot among people who don't exactly know how to pick their sources. "We’re excited to announce we are helping to found and fund the News Integrity Initiative, a diverse new network of partners who will work together to focus on news literacy. The initiative will address the problems of misinformation, disinformation and the opportunities the internet provides to inform the public conversation in new ways," reads the announcement signed by Campbell Brown, the head of News Partnerships at Facebook. Jeff Jarvis, professor of journalism at CUNY and one of the leaders of the initiative, says that this isn't a problem that's exclusive to Facebook.
"My greatest hope is that this Initiative will provide the opportunity to work with Facebook and other platforms on reimagining news, on supporting innovation, on sharing data to study the public conversation, and on supporting news literacy broadly defined," Jarvis wrote in a blog post. Source
  12. US tech giants react to UK Home Secretary Rudd Big Tech has told the UK government it will do more to remove extremist content from its networks, but has refused to offer concessions on encryption. Following a meeting between Britain's Home Secretary Amber Rudd and communication service providers, called in the aftermath of the murders in Westminster, senior executives from Facebook, Google, Microsoft and Twitter put out a joint statement. "Our companies are committed to making our platforms a hostile space for those who seek to do harm and we have been working on this issue for several years," the statement reads, adding: "We share the Government's commitment to ensuring terrorists do not have a voice online." In order to do that, the companies said they would "look at all options for structuring a forum to accelerate and strengthen this work." The letter outlines three ways to do that: improve automatic tools to remove extremist content; help other companies to do the same; and support efforts from "civil society organizations" to "promote alternative and counter-narratives." The statement is more notable for its omissions than its promises, however. There is no mention of timelines, either for taking down such content or for taking action. There is no promise to remove such content. There is no offer of firm resources. And the only actual project referred to is the "innovative video hash sharing database that is currently operational in a small number of companies." Crucially, there is no mention at all of the other pressing issue - encryption. Reading material Two days after the attack, Amber Rudd made headlines by arguing that the authorities must have access to the communications of the attacker - Khalid Masood/Adrian Ajao - and specifically highlighted the Facebook-owned chat app WhatsApp, which she said Masood had used on the day of the attacks.
The Home Office put out its own short statement following the meeting, in which it also glossed over the encryption issue, noting that the meeting "focused on the issue of access to terrorist propaganda online." Rudd said she "welcomes the commitment from the key players to set up a cross-industry forum," but pointedly noted that she would "like to see the industry go further and faster in not only removing online terrorist content but stopping it going up in the first place." Another recent critic of tech companies on this topic, chair of the Home Affairs Select Committee Yvette Cooper, called the outcome "a bit lame." "All the Government and social media companies appear to have agreed is to discuss options for a possible forum in order to have more discussions," Cooper complained. "Having meetings about meetings just isn't good enough." Social media companies in particular are under fire in Europe over the ready availability of extremist material and the apparent ease with which extremists communicate among themselves and with others on systems run by large Western corporations. The issue is complicated by the fact that most of those corporations are based in the United States and so hold a strong belief that removing or even blocking content is tantamount to censorship and violates the First Amendment. Europe takes a different approach to what constitutes fair or free speech and has threatened to introduce legislation obliging social media companies to remove extremist content or face large fines and lawsuits. By Kieren McCarthy https://www.theregister.co.uk/2017/03/31/tech_giants_uk_home_sec_encryption/
  13. Facebook has a new feature A new Facebook feature that has been in testing for a while has finally gone live, enabling users to more easily follow their elected representatives. Town Hall is a feature designed for users in the United States which should help them find state and federal representatives based on their location. Users can then follow these individuals for updates, or contact them directly via their listed phone number and address, or via Facebook Messenger if they are on Facebook. "Building a civically-engaged community means building new tools to help people engage in a thoughtful and informed way. The starting point is knowing who represents you and how you can make your voice heard on the decisions that affect your life," Zuckerberg wrote in a Facebook post. He adds that the more you engage with the political process, the more you can ensure it reflects your values. "This is an important part of feeling connected to your community and your democracy, and it's something we're increasingly focused on at Facebook," the Facebook CEO said. How it works Town Hall includes state and federal officials and will soon be expanded to include local elected officials for the 150 biggest cities in the United States, coverage that Facebook hopes to widen in the future. If users like or comment on a post created by one of their elected officials from their news feed, they will see a feature that allows them to directly contact the representative. If they go through and send a message, users are invited to post about contacting the lawmaker to let others know about their initiative and, perhaps, push them to do the same. Talk of this particular feature has been around for a while, as Facebook rolled out Town Hall as a test to a small number of users. Now that it has finally gone live, it remains to be seen just how much it will be put to use. This is how Town Hall looks on mobile Source
  14. As a follow-up to its study which found up to $16.4 billion could be lost to ad fraud in 2017, The&Partnership is, well, basically demanding that Google, Facebook, et al open up their walled gardens and allow inside third-party purveyors of ad verification solutions such as Adloox, the company The&Partnership partnered with for the study. The&Partnership argues ad spend lost to ad fraud could be reduced to single digits if only the giants would allow in solutions such as Adloox. Currently the big boys don't allow in third-party solutions of this type. Arguing for a doorway into the walled garden, The&Partnership Founder Johnny Hornby said, "Without this, not only are these platforms denying our clients the clean, brand-safe environments they quite rightly demand - but advertisers also lack full transparency and visibility in terms of the money they are losing to fraudulent advertising and advertising that never gets seen. If Google wants to see advertisers returning to YouTube in significant numbers, it is going to have to move quickly." Hornby suggests Google needs to do two things: "Firstly, Google needs to stop marking its own homework, fully opening up its walled gardens to independent, specialist ad verification software, to give brands the visibility and transparency they deserve. Secondly, Google will need to start looking at brand safety from completely the other end of the telescope. Instead of allowing huge volumes of content to become ad-enabled every minute, and then endeavoring to convince advertisers that the dangerous and offensive content among it will be found and weeded out, it should be presenting advertisers only with advertising opportunities that have already been pre-vetted and found to be 100% safe." Does anyone think Google is actually going to allow this? Of course, they could just buy Adloox and then there might be some actual headway. 
By Richard Whitman http://www.mediapost.com/publications/article/297997/agency-urges-google-to-allow-third-party-ad-verifi.html
  15. WhatsApp can't hand over messages End-to-end encryption services like WhatsApp are once more being slammed for offering protection for users everywhere. This time, the UK is doing all the finger pointing, and it's because of the terrorist attack that took place on Wednesday. British Home Secretary Amber Rudd has accused WhatsApp of giving terrorists "a place to hide," after the company failed to comply with a demand to hand over the last messages sent by London attacker Adrian Ajao, the Telegraph reports. "This terrorist sent a WhatsApp message, and it can't be accessed," Rudd said. She also said that it is completely unacceptable for end-to-end encryption to be offered because there should be no place for terrorists to hide. "We need to make sure that organizations like WhatsApp - and there are plenty of others like that - don't provide a secret place for terrorists to communicate with each other," she added. The British authorities are complaining that Scotland Yard and the security services cannot access encrypted messages sent via WhatsApp, so they cannot know who Ajao contacted or what he told them before the attack. Not only did Rudd slam WhatsApp, but she also went after Google and the social media platforms that have been known to be late to take down extremist material, or to refuse to take it down altogether, citing their protection of "free speech" and the way their Terms are worded. A much-desired backdoor This isn't the first time, nor will it be the last, that WhatsApp and other similar services, as well as encrypted email tools, are slammed by authorities. End-to-end encryption is supposed to protect users from hackers, but also from mass surveillance, such as that exposed by Edward Snowden's NSA files. The way it works, a message is encrypted the second it is sent by one user, and it only gets decrypted once it reaches the recipient. 
In this way, WhatsApp doesn't have access to any plain-text messages, which means it cannot share anything with authorities. In recent months there have been more and more voices asking for encryption backdoors for authorities, something that tech companies will likely never agree to; not without losing users in droves. Source
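The paragraph above describes the core property of end-to-end encryption: only the two endpoints hold the key, so any server relaying the message stores ciphertext it cannot read. WhatsApp actually uses the Signal protocol for this; the toy sketch below (a one-time-pad XOR with invented function names) is not a real protocol and only illustrates why the operator has nothing useful to hand over.

```python
import secrets

# Toy illustration of the end-to-end property described above.
# The "server" (a plain variable standing in for a relay) only ever
# holds ciphertext; only the endpoints, which share the key, can read
# the message. This XOR scheme is NOT what WhatsApp uses -- it is a
# minimal sketch of the principle, not an implementation.

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(key) >= len(plaintext), "one-time pad must cover the message"
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

shared_key = secrets.token_bytes(64)   # known only to sender and recipient
message = b"meet at noon"

server_copy = encrypt(shared_key, message)          # all the relay ever sees
restored = decrypt(shared_key, server_copy)         # endpoint recovers it
print(restored)
```

Because the relay never possesses `shared_key`, a subpoena served on it can yield only `server_copy`, which is unreadable without the endpoints' cooperation.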
  16. Facebook Live has been used to broadcast dozens of acts of violence since its launch a year ago. Police are searching for five to six men who sexually assaulted a 15-year-old girl as dozens watched on live video. Chicago police are searching for five to six men who sexually assaulted a 15-year-old girl in an attack viewed by dozens of people on Facebook Live. The live video, which has been removed from the social network, was viewed by approximately 40 people, but none reported the attack to police, authorities said Tuesday. The incident marks the second time in recent months Chicago police have investigated an apparent attack streamed on Facebook Live. In January, four people were arrested in the beating of a special-needs teenager that was livestreamed on the tool. Facebook Live, which lets anyone with a phone and internet connection livestream video directly to Facebook's 1.8 billion users, has become a centerpiece feature for the social network. In the past few months, everyone from Hamilton cast members to the Donald Trump campaign has turned to Facebook to broadcast in real time. But the focus on video has prompted some tough philosophical questions, like what Facebook should and shouldn't show. In the year since its launch, the feature has been used to broadcast at least 50 acts of violence, according to the Wall Street Journal, including murders and suicides. Chicago police detectives found the girl Tuesday, a day after the girl's mother approached a police superintendent as he was leaving a news conference and showed him screen grabs of the attack, according to police. "It's disgusting. It's so disgusting," the girl's mother said, describing the apparent assault in an interview with CBS Chicago, before the girl was found. "I didn't really want to look at it that much, but from what I saw, they were pouring stuff on her, and just... she was so scared." Facebook representatives did not immediately respond to a request for comment. Source
  17. We may soon see GIFs in comments Facebook is finally embracing the GIF, after many years of resisting the change. According to TechCrunch, Facebook will begin testing a GIF button that allows users to post GIFs from services such as Giphy or Tenor as comments. "Everyone loves a good GIF, and we know that people want to be able to use them in comments. So we're about to start testing the ability to add GIFs to comments, and we'll share more when we can, but for now we repeat that this is just a test," Facebook said in a statement. As per usual with any Facebook test, the new feature will only be available to a small group of Facebook users, but it has the potential to roll out to everyone if it proves popular. Taking into consideration the high usage of GIFs on other platforms, it's pretty much certain that we'll all see this feature in our news feeds soon. Borrowing from Messenger The feature will apparently work similarly to the GIF button you can find in Facebook Messenger, allowing users to browse trending animations, or to search for specific reactions. While sharing GIFs as News Feed posts won't be possible, you will be able to comment on other people's posts with them. For many years now, Facebook has shied away from fully embracing GIFs, mostly out of "fear" that they would distract users from the News Feed and what it is supposed to be at its core - a way to connect with your friends and to find out the latest things they are interested in. Then again, this was a valid concern back before the News Feed became inundated with auto-play videos that are just as flashy and distracting, and even more annoying than GIFs. While some changes have been made to permit GIFs, namely sharing them via direct URLs, this is the first time sharing them will be made easy. Source
  18. Facebook Live now works on desktop Facebook is now allowing desktop users to broadcast live videos too, a decision that will certainly impact the platform quite a bit. It's been about a year since Facebook introduced the ability to broadcast live videos for mobile users. It looks like the company had been working on bringing the feature to other platforms, most likely following the success of the mobile version. Truthfully, the feature had been around for a while, but only a few select desktop users had the ability to actually broadcast live video. Now, everyone will have the same powers. What does this mean for Facebook? Well, it just turned the platform into a major competitor for the likes of Twitch and YouTube Gaming. That's because desktop streamers will be able to broadcast video from external hardware, as well as from streaming software. In short, live Facebook videos will now include gameplay footage and on-screen graphics, as well as picture-in-picture videos for those who like that particular feature. Easy peasy to go live Streaming content from your desktop is just as easy as it is from your mobile device. You just have to select Live Video from the posting area on top of the News Feed or Timeline, hit "next" and start broadcasting immediately. One thing people are grumbling about online is the fact that Facebook live videos don't bring users any money, unlike, let's say, Twitch. It looks, however, like Facebook is working on making this happen. Whenever it does, it will be more than welcome, especially considering the massive user base the platform has and the increased reach these videos can have. This is an interesting step for Facebook to take and one that will certainly make a difference for a very large number of users. On that note, beware of the increased influx of live videos on your feed. Source
  19. Mark yourself as safe on Facebook Facebook activated its Safety Check feature for people in London following the events that took place on Wednesday afternoon, when a suspected terrorist ran a car into pedestrians and then stabbed a police officer as he tried to get into the House of Parliament in Westminster. Safety Check is a feature that was introduced back in October 2014, and it allows people to inform their friends and family that they're safe following a disaster or other incident, such as a terrorist attack or a large accident. For instance, the last time Safety Check was activated in London was back in 2016, when a tram crash took place in Croydon. The Met Police was called to Westminster Bridge this afternoon after a car crashed into railings and gunshots were heard outside the Parliament. It was soon discovered that the car had crashed into pedestrians before the driver got out and headed for one of the entrances of the Parliament, where he stabbed a police officer before being taken down. Attack with casualties Four people died as a result of the attack, including the police officer, and another 20 people were injured. Half of these individuals were treated right at the scene. The area around the Houses of Parliament was placed on lockdown, and tube stations around the area were closed. As police remain vigilant, people were invited by Facebook to mark themselves as "safe" online, so their dear ones know not to worry too much about them right now. A few weeks ago, Facebook announced that it was expanding the Safety Check feature to include the option to offer help to those in need in times of disaster. This could particularly come in handy when wildfires take over, or when earthquakes leave people without a home and in need of a roof over their heads. Source
  20. Facebook Bans Devs From Creating Surveillance Tools With User Data Without a hint of irony, Facebook has told developers that they may not use data from Instagram and Facebook in surveillance tools. The social network says that the practice has long been a contravention of its policies, but it is now tidying up and clarifying the wording of its developer policies. The American Civil Liberties Union, Color of Change and the Center for Media Justice put pressure on Facebook after it transpired that data from users' feeds was being gathered and sold on to law enforcement agencies. The re-written developer policy now explicitly states that developers are not allowed to "use data obtained from us to provide tools that are used for surveillance." It remains to be seen just how much of a difference this will make to the gathering and use of data, and there is nothing to say that Facebook's own developers will not continue to engage in the same practices. Rob Sherman, deputy chief privacy officer at Facebook, commented on the change. Transparency reports published by Facebook show that the company has complied with government requests for data. The secrecy such requests and dealings are shrouded in means that there is no way of knowing whether Facebook is engaged in precisely the sort of activity it is banning others from performing. Source
  21. Fake news is the plague we need to fight against Tim Berners-Lee, the man we have to thank for inventing the World Wide Web, believes there are several things that need to be done to ensure the future of the web and to make it a platform that benefits humanity - fighting against fake news, opaque political advertising and the loss of control over personal data. As the World Wide Web turns 28, Berners-Lee marks the occasion. He writes that over the past year he has become increasingly worried about three trends that he believes harm the web. The first thing the world needs to fight against is fake news. "Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting," Berners-Lee writes. "The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain." He takes things a step further and names names. He believes that we must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while also avoiding the creation of any central bodies that decide what's true or not, because that's another problem altogether. It's not just fake news to fight against Another thing we need to fight against is government over-reach in surveillance laws, including in court, if need be. "We need more algorithmic transparency to understand how important decisions that affect our lives are being made, and perhaps a set of common principles to be followed," he adds. 
Political advertising online needs to be more transparent, he believes, especially considering that during the 2016 US elections as many as 50,000 variations of adverts were being served every single day on Facebook. There are many problems plaguing the world wide web, but some are more pressing than others, it seems. The Web Foundation, which Berners-Lee leads, will be working on many of these issues as part of its new five-year strategy. "I may have invented the web, but all of you have helped to create what it is today. All the blogs, posts, tweets, photos, videos, applications, web pages and more represent the contributions of millions of you around the world building our online community. [...] It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone," Tim Berners-Lee concludes. Source
  22. A new report into U.S. consumers’ attitude to the collection of personal data has highlighted the disconnect between commercial claims that web users are happy to trade privacy in exchange for ‘benefits’ like discounts and how users actually feel. On the contrary, it asserts that a large majority of web users are not at all happy, but rather feel powerless to stop their data being harvested and used by marketers. The report's authors argue it is this sense of resignation that is resulting in data tradeoffs taking place — rather than consumers performing careful cost-benefit analysis to weigh up the pros and cons of giving up their data (as marketers try to claim). They also found that where consumers were most informed about marketing practices they were also more likely to be resigned to not being able to do anything to prevent their data being harvested. “Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened,” the authors write. “By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also — over time — for the institution of consumer commerce.” “It is not difficult to predict widespread social tensions, and concerns about democratic access to the marketplace, if Americans continue to be resigned to a lack of control over how, when, and what marketers learn about them,” they add. 
The report, entitled The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation, is authored by three academics from the University of Pennsylvania, and is based on a representative national cell phone and wireline phone survey of more than 1,500 Americans age 18 and older who use the internet or email “at least occasionally”. Key findings on American consumers include that — 91% disagree (77% of them strongly) that “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing” 71% disagree (53% of them strongly) that “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.” 55% disagree (38% of them strongly) that “It’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.” The authors go on to note that “only about 4% agree or agree strongly” with all three of the above propositions. And even with a broader definition of “a belief in tradeoffs” they found just a fifth (21%) were comfortably accepting of the idea. So the survey found very much a minority of consumers are happy with current data tradeoffs. The report also flags up that large numbers (often a majority) of U.S. consumers are unaware of how their purchase and usage data can be sold on or shared with third parties without their permission or knowledge — in many instances falsely believing they have greater data protection rights than they are in fact afforded by law. Examples the report notes include — 49% of American adults who use the Internet believe (incorrectly) that by law a supermarket must obtain a person’s permission before selling information about that person’s food purchases to other companies. 
69% do not know that a pharmacy does not legally need a person’s permission to sell information about the over-the-counter drugs that person buys. 65% do not know that the statement “When a website has a privacy policy, it means the site will not share my information with other websites and companies without my permission” is false. 55% do not know it is legal for an online store to charge different people different prices at the same time of day. 62% do not know that price-comparison sites like Expedia or Orbitz are not legally required to include the lowest travel prices. Data-mining in the spotlight One thing is clear: the great lie about online privacy is unraveling. The obfuscated commercial collection of vast amounts of personal data in exchange for ‘free’ services is gradually being revealed for what it is: a heist of unprecedented scale. Behind the bland, intellectually dishonest facade that claims there’s ‘nothing to see here’, gigantic data-mining apparatuses have been manoeuvred into place, atop vast mountains of stolen personal data. Stolen because it has never been made clear to consumers what is being taken, and how that information is being used. How can you consent to something you don’t know or understand? Informed consent requires transparency and an ability to control what happens. Both of these are systematically undermined by companies whose business models require that vast amounts of personal data be shoveled ceaselessly into their engines. This is why regulators are increasingly focusing attention on the likes of Google and Facebook. And why companies with different business models, such as hardware maker Apple, are joining the chorus of condemnation. Cloud-based technology companies large and small have exploited and encouraged consumer ignorance, concealing their data-mining algorithms and processes inside proprietary black boxes labeled ‘commercially confidential’. 
The larger entities spend big on pumping out a steady stream of marketing misdirection — distracting their users with shiny new things, or proffering hollow reassurances about how they don’t sell your personal data. Make no mistake: this is equivocation. Google sells access to its surveillance intelligence on who users are via its ad-targeting apparatus — so it doesn’t need to sell actual data. Its intelligence on web users’ habits and routines and likes and dislikes is far more lucrative than handing over the digits of anyone’s phone number. (The company is also moving in the direction of becoming an online marketplace in its own right — by adding a buy button directly to mobile search results. So it’s intending to capture, process and convert more transactions itself — directly choreographing users’ commercial activity.) These platforms also work to instill a feeling of impotence in users in various subtle ways, burying privacy settings within labyrinthine submenus and technical information in unreadable terms and conditions — doing everything they can to fog, rather than fess up to, the reality of the gigantic tradeoff lurking in the background. Yet slowly but surely this sophisticated surveillance apparatus is being dragged into the light. The privacy costs involved for consumers who pay for ‘free’ services by consenting to invasive surveillance of what they say, where they go, who they know, what they like, what they watch, what they buy, have never been made clear by the companies involved in big data mining. But costs are becoming more apparent, as glimpses of the extent of commercial tracking activities leak out. 
And as more questions are asked the discrepancy between the claim that there’s ‘nothing to see here’ vs the reality of sleepless surveillance apparatus peering over your shoulder, logging your pulse rate, reading your messages, noting what you look at, for how long and what you do next — and doing so to optimize the lifting of money out of your wallet — then the true consumer cost of ‘free’ becomes more visible than it has ever been. The tradeoff lie is unraveling, as the scale and implications of the data heist are starting to be processed. One clear tipping point here is NSA whistleblower Edward Snowden who, two years ago, risked life and liberty to reveal how the U.S. government (and many other governments) were involved in a massive, illegal logging of citizens’ digital communications. The documents he released also showed how commercial technology platforms had been appropriated and drawn into this secretive state surveillance complex. Once governments were implicated, it was only a matter of time before the big Internet platforms, with their mirror data-capturing apparatus, would face questions. Snowden’s revelations have had various reforming political implications for surveillance in the U.S. and Europe. Tech companies have also been forced to take public stances — either to loudly defend user privacy, or be implicated by silence and inaction. Another catalyst for increasing privacy concerns is the Internet of Things. A physical network of connected objects blinking and pinging notifications is itself a partial reveal of the extent of the digital surveillance apparatus that has been developed behind commercially closed doors. Modern consumer electronics are hermetically sealed black boxes engineered to conceal complexity. 
But the complexities of hooking all these ‘smart’ sensornet objects together, and placing so many data-sucking tentacles on display, in increasingly personal places (the home, the body) — starts to make surveillance infrastructure and its implications uncomfortably visible. Plus this time it’s manifestly personal. It’s in your home and on your person — which adds to a growing feeling of being creeped out and spied upon. And as more and more studies highlight consumer concern about how personal data is being harvested and processed, regulators are also taking notice — and turning up the heat. One response to growing consumer concerns about personal data came this week with Google launching a centralized dashboard for users to access (some) privacy settings. It’s far from perfect, and contains plentiful misdirection about the company’s motives, but it’s telling that this ad-fueled behemoth feels the need to be more pro-active in its presentation of its attitude and approach to user privacy. Radical transparency The Tradeoff report authors include a section at the end with suggestions for improving transparency around marketing processes, calling for “initiatives that will give members of the public the right and ability to learn what companies know about them, how they profile them, and what data lead to what personalized offers” — and for getting consumers “excited about using that right and ability”. 
Among their suggestions to boost transparency and corporate openness are — Public interest organizations and government agencies developing clear definitions of transparency that reflect consumer concerns, and then systematically calling out companies regarding how well or badly they are doing based on these values, in order to help consumers ‘vote with their wallets’ Activities to “dissect and report on the implications of privacy policies” — perhaps aided by crowdsourced initiatives — so that complex legalese is interpreted and its implications explained for a consumer audience, again allowing for good practice to be praised (and vice versa) Advocating for consumers to gain access to the personal profiles companies create on them in order for them to understand how their data is being used “As long as the algorithms companies implement to analyze and predict the future behaviors of individuals are hidden from public view, the potential for unwanted marketer exploitation of individuals’ data remains high. We therefore ought to consider it an individual’s right to access the profiles and scores companies use to create every personalized message and discount the individual receives,” the report adds. “Companies will push back that giving out this information will expose trade secrets. We argue there are ways to carry this out while keeping their trade secrets intact.” They’re not the only ones calling for algorithms to be pulled into view either — back in April the French Senate backed calls for Google to reveal the workings of its search ranking algorithms. In that instance the focus is commercial competition to ensure a level playing field, rather than user privacy per se, but it’s clear that more questions are being asked about the power of proprietary algorithms and the hidden hierarchies they create. 
Startups should absolutely see the debunking of the myth that consumers are happy to trade privacy for free services as a fresh opportunity for disruption — to build services that stand out because they aren’t predicated on the assumption that consumers can and should be tricked into handing over data and having their privacy undermined on the sly. Services that stand upon a futureproofed foundation where operational transparency inculcates user trust — setting these businesses up for bona fide data exchanges, rather than shadowy tradeoffs. By Natasha Lomas https://techcrunch.com/2015/06/06/the-online-privacy-lie-is-unraveling/
  23. Messenger Day Last year, Facebook announced that it had started testing a new feature that allows users to take pictures and share them with friends on Messenger. The feature resembled Snapchat’s Stories, and Facebook has just announced that it’s now available globally. Since its launch at the end of last year, billions of pictures and videos have been shared using Messenger’s built-in camera. Messenger users added various effects and stickers, as well as frames, to the images that they’ve shared on Messenger. Facebook also began testing a new feature in which users could add pictures to their Messenger Day for friends to see and reply to. The images would disappear in 24 hours, very similar to Snapchat’s Stories section. Today, Facebook announced that it started rolling out Messenger Day globally, to all Android and iOS smartphones. Share with all or just a group of people In order to add pictures, users simply need to tap on the camera icon, snap a picture and tap on “Add to your day” in Messenger. Users can pick from effects and smiley face icons to add to the image or video. They can also add text over images or overlay a drawing. Add images to your day with Messenger Day Moreover, Messenger Day allows users to save images and videos to their camera roll or choose to send them to a specific person or group of people. Images or videos automatically disappear on their own after 24 hours. Images or videos in conversations can be added to the Messenger Day section, and Facebook offers users the option to share images with all of their friends on Messenger or only a few of them. In addition, they can take down a shared image if they decide that they wish to delete it. Recently, Facebook started testing reaction emojis in its Messenger app, similar to the emojis that can be added to Facebook posts. One of the emojis is a thumbs-down dislike icon, but it’s unclear when it will roll out to everyone. Source
  24. Facebook is in the process of implementing new suicide prevention tools, including streamlined reporting on its Facebook Live application, in the wake of two suicides livestreamed from the platform earlier this year, according to the Associated Press.

One new tool released Wednesday allows viewers of a livestream to report if the broadcast is suicidal in nature and prompt Facebook to intervene by reaching out to emergency services, the AP reports. It will also provide the broadcasting user with onscreen resources such as the option to talk to a friend or contact a helpline. Users who report a suicidal broadcast will receive resources for helping the person in crisis until further help arrives, according to Facebook’s announcement yesterday.

As the announcement points out, Facebook has provided users with suicide prevention resources for over a decade and has worked with organizations including the National Suicide Prevention Lifeline and Crisis Text Line to better understand how to support users in crisis. The new reporting tool, however, expands the options for users to both report suicidal content and receive crisis support specifically through Facebook Live.

The suicide of 14-year-old Nakia Venant, which was streamed live on Facebook from her Florida foster home on Jan. 22, was one of at least three incidents of livestreamed suicide this year, according to the AP. One day after Venant’s suicide, 33-year-old Frederick Jay Bowdy used Facebook Live to broadcast his suicide from his car in North Hollywood, the Los Angeles Times reported. As previously reported by Forensic Magazine, viewers of each broadcast attempted to send help, but emergency services could not arrive in time to prevent the suicides.

Facebook founder and CEO Mark Zuckerberg briefly mentioned livestreamed suicides in a letter in mid-February, which included a section on building a safe community and preventing self-harm, as well as providing support during any kind of crisis.

“There have been terribly tragic events—like suicides, some live streamed—that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” the letter states. “These stories show we must find a way to do more.”

In addition to the new reporting tools, Facebook is also beginning a video campaign, according to yesterday’s announcement, that will highlight suicide prevention awareness and include a collaboration with its partners in mental health and crisis support. Yesterday, Facebook and Crisis Text Line also announced a partnership to provide users with 24/7 crisis support, allowing a user who is considering self-harm or suicide to reach a trained crisis counselor directly through Facebook Messenger, according to a statement from Crisis Text Line.

Currently in testing is a way for Facebook to use artificial intelligence to spot suicidal posts and take emergency action if necessary, according to Facebook’s announcement. This process, called pattern recognition, would identify concerning posts and either highlight the option for viewers to report them or automatically flag them for review by members of Facebook’s Community Operations team. In his Feb. 16 statement, Zuckerberg said the technology was “very early in development,” but that it could be used both to aid in suicide prevention and to help spot terrorist propaganda, making the website’s community safer overall.

“One of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” Zuckerberg stated.

Source
  25. Do you know how much of your information is out there?

Facebook is a powerful platform, and maybe more so than you realize. If you really understand the quirks of its search function, for example, you can snoop for all photos posted by single females that a particular friend has liked. Creepy, right? When Facebook launched a feature called Graph Search in 2013 that allowed users to easily do just this, a lot of people thought so, too.

Facebook has quietly back-burnered the service and focused on other aspects of search. But Graph Search is still functional, although most folks probably don't use it due to its complexity and the fact that Facebook is no longer pushing it as a discrete service.

Now, Belgian "ethical hacker" Inti De Ceukelaire has created a web interface, aptly called Stalkscan, that lets you make the most of Graph Search. Stalkscan, which launched today, is meant to highlight how much information Facebook users post about themselves, perhaps without thinking about the privacy implications, De Ceukelaire told me over email.

"Graph Search and its privacy issues aren't new, but I felt like it never really reached the man on the street," De Ceukelaire wrote. "With my actions and user-friendly tools I want to target the non-tech-savvy people because most of them don't have a clue what they are sharing with the public."

Because Graph Search is only available in English on Facebook, the feature wasn't known to many in De Ceukelaire's native Belgium until his tool drew attention to it. Now the Belgian media is having a shitfit, and local reports say that the country's top privacy official has called for an investigation into whether Facebook adequately protects users.

It's important to note that Stalkscan only allows you to use Facebook's existing search functions, and that it won't circumvent privacy settings. In other words, if you're not someone's friend on Facebook already and they've set it so that only friends can see their posts, you won't be able to get around that with Stalkscan. What it does do is generate boutique search links that Facebook understands. This allows you to make hyper-specific searches that would be nigh-impossible to pull off without Stalkscan. How would one even formulate a sentence to search for, to use the example again, all photos posted by single females liked by a friend? With Stalkscan, that search takes just a few clicks.

"Like most services, we offer a search feature, but search on Facebook is built with privacy in mind," a Facebook spokesperson said in an emailed statement. "[Stalkscan] merely redirects to Facebook's existing search result page. As with any search on Facebook, you can only see content that people have chosen to share with you."

I did manage to use Stalkscan in one instance that would seem to, in spirit at least, violate someone's privacy. One Facebook friend chooses to unlist the "events" button on their public page so that stalkers can't easily find out which parties they've attended. Stalkscan showed me a list of all the past events they've attended when I searched their profile.

As for what people can do to make sure that information they thought was hidden doesn't appear on Stalkscan, De Ceukelaire had some advice. "I'd advise people to check themselves first while logged in into a friend's account," he wrote. "If they see stuff they don't want to, they may want to remove tags, likes or photos from their profile. This way, they at least know what other people can see."

A Facebook spokesperson emphasized that the platform allows users to take control of their privacy, if they wish. "We offer a variety of tools to help people control their information, including the ability to select an audience for every post, a feature that limits visibility of past posts to only your friends, and education efforts launched in consultation with Belgian safety experts," the spokesperson wrote in a statement.

By Jordan Pearson
https://motherboard.vice.com/en_us/article/facebooks-creepiest-search-tool-is-back-thanks-to-this-site