Welcome to nsane.forums

Welcome to nsane.forums. Like most online communities, you need to register to view parts of our community or to make contributions, but don't worry: this is a free and simple process that requires minimal information. Be a part of nsane.forums by signing in or creating an account.

  • Access special members-only forums
  • Start new topics and reply to others
  • Subscribe to topics and forums to get automatic updates

Search the Community

Showing results for tags 'facebook'.



Found 250 results

  1. Facebook has a new feature
A new Facebook feature that has been in testing for a while now has finally gone live, enabling users to more easily follow their elected representatives. Town Hall is a feature designed for users in the United States which should help them find state and federal representatives based on their location. Users can then follow these individuals for updates, or go ahead and contact them directly via their listed phone number and address, or via Facebook Messenger if they are on Facebook. "Building a civically-engaged community means building new tools to help people engage in a thoughtful and informed way. The starting point is knowing who represents you and how you can make your voice heard on the decisions that affect your life," Zuckerberg wrote in a Facebook post. He adds that the more you engage with the political process, the more you can ensure it reflects your values. "This is an important part of feeling connected to your community and your democracy, and it's something we're increasingly focused on at Facebook," the Facebook CEO said.
How it works
Town Hall includes state and federal officials and will soon be expanded to include local elected officials for the 150 biggest cities in the United States, coverage that Facebook hopes to broaden in the future. If users like or comment on a post created by one of their elected officials in their news feed, they will see a feature that allows them to contact the representative directly. If they go through and send them a message, users are invited to post about contacting the lawmaker to let others know about their initiative and, perhaps, push them to do the same. Talk of this particular feature has been around for a while, as Facebook rolled out Town Hall as a test to a small number of users. Now that it has finally gone live, it remains to be seen just how much it will be put to use. This is what Town Hall looks like on mobile. Source
  2. As a follow-up to its study, which found up to $16.4 billion could be lost to ad fraud in 2017, The&Partnership is, well, basically demanding that Google, Facebook, et al. open up their walled gardens and let in third-party purveyors of ad verification solutions such as Adloox, a company The&Partnership partnered with for the study. The&Partnership argues that ad spend lost to ad fraud could be reduced to single digits if only the giants would allow in solutions such as Adloox. Currently, the big boys don't allow in third-party solutions of this type. Arguing for a doorway into the walled garden, The&Partnership founder Johnny Hornby said, "Without this, not only are these platforms denying our clients the clean, brand-safe environments they quite rightly demand - but advertisers also lack full transparency and visibility in terms of the money they are losing to fraudulent advertising and advertising that never gets seen. If Google wants to see advertisers returning to YouTube in significant numbers, it is going to have to move quickly." Hornby suggests Google needs to do two things: "Firstly, Google needs to stop marking its own homework, fully opening up its walled gardens to independent, specialist ad verification software, to give brands the visibility and transparency they deserve. Secondly, Google will need to start looking at brand safety from completely the other end of the telescope. Instead of allowing huge volumes of content to become ad-enabled every minute, and then endeavoring to convince advertisers that the dangerous and offensive content among it will be found and weeded out, it should be presenting advertisers only with advertising opportunities that have already been pre-vetted and found to be 100% safe." Does anyone think Google is actually going to allow this? Of course, they could just buy Adloox, and then there might be some actual headway.
By Richard Whitman http://www.mediapost.com/publications/article/297997/agency-urges-google-to-allow-third-party-ad-verifi.html
  3. WhatsApp can't hand over messages
End-to-end encryption services like WhatsApp are once more being slammed for offering protection for users everywhere. This time, the UK is doing all the finger pointing, and it's because of the terrorist attack that took place on Wednesday. British Home Secretary Amber Rudd has accused WhatsApp of giving terrorists "a place to hide," after the company failed to comply with a demand to hand over the last messages sent by London attacker Adrian Ajao, the Telegraph reports. "This terrorist sent a WhatsApp message, and it can't be accessed," Rudd said. She also said that it is completely unacceptable for end-to-end encryption to be offered because there should be no place for terrorists to hide. "We need to make sure that organizations like WhatsApp - and there are plenty of others like that - don't provide a secret place for terrorists to communicate with each other," she added. The British authorities are complaining that Scotland Yard and the security services cannot access encrypted messages sent via WhatsApp, so they cannot know who Ajao contacted or what he told them before the attack. Not only did Rudd slam WhatsApp, but she also went after Google and social media platforms, which have been known to be late to take down extremist material, or to refuse to take it down altogether due to their protection of "free speech" and the way their Terms are worded.
A much-desired backdoor
This isn't the first time, nor will it be the last, that WhatsApp and other similar services, as well as encrypted email tools, are slammed by authorities. End-to-end encryption is supposed to protect users from hackers, but also from mass surveillance, such as that exposed by Edward Snowden's NSA files. The way it works, a message is encrypted the second it is sent by one user, and it only gets decrypted once it reaches the recipient.
In this way, WhatsApp doesn't have access to any plain-text messages, which means it cannot share anything with authorities. In recent months there have been more and more voices asking for encryption backdoors for authorities, something that tech companies will likely never agree to; not without losing users in droves. Source
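The property described above - the relay in the middle only ever holds ciphertext, so it has nothing readable to hand over - can be shown with a toy sketch. This is purely illustrative and is not how WhatsApp is actually built: WhatsApp uses the Signal protocol (Diffie-Hellman key agreement plus a double ratchet), whereas this sketch assumes a pre-shared random key and uses a one-time-pad XOR just to mark where plaintext exists and where it does not.

```python
# Toy end-to-end encryption sketch: the "server" variable stands in for
# WhatsApp's relay, which only ever sees ciphertext. Real messengers use
# the Signal protocol; the pre-shared key and XOR here are illustrative only.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time-pad XOR; requires a key at least as long as the message.
    assert len(key) >= len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

# Sender and recipient share a key; the relay server never learns it.
message = b"meet at noon"
key = secrets.token_bytes(len(message))

ciphertext = encrypt(key, message)   # encrypted the moment it is sent
server_view = ciphertext             # all the relay can store or disclose
received = decrypt(key, ciphertext)  # decrypted only at the recipient

assert received == message
```

Because `server_view` contains only ciphertext and the server has no copy of `key`, complying with a demand for "the last messages sent" is technically impossible - which is exactly the situation the article describes.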
  4. Facebook Live has been used to broadcast dozens of acts of violence since its launch a year ago. Police are searching for five to six men who sexually assaulted a 15-year-old girl as dozens watched on live video. Chicago police are searching for five to six men who sexually assaulted a 15-year-old girl in an attack viewed by dozens of people on Facebook Live. The live video, which has been removed from the social network, was viewed by approximately 40 people, but none reported the attack to police, authorities said Tuesday. The incident marks the second time in recent months that Chicago police have investigated an apparent attack streamed on Facebook Live. In January, four people were arrested in the beating of a special-needs teenager that was livestreamed on the tool. Facebook Live, which lets anyone with a phone and internet connection livestream video directly to Facebook's 1.8 billion users, has become a centerpiece feature for the social network. In the past few months, everyone from Hamilton cast members to the Donald Trump campaign has turned to Facebook to broadcast in real time. But the focus on video has prompted some tough philosophical questions, like what Facebook should and shouldn't show. In the year since its launch, the feature has been used to broadcast at least 50 acts of violence, according to the Wall Street Journal, including murders and suicides. Chicago police detectives found the girl Tuesday, a day after the girl's mother approached a police superintendent as he was leaving a news conference and showed him screen grabs of the attack, according to police. "It's disgusting. It's so disgusting," the girl's mother said, describing the apparent assault in an interview with CBS Chicago, before the girl was found. "I didn't really want to look at it that much, but from what I saw, they were pouring stuff on her, and just... she was so scared." Facebook representatives did not immediately respond to a request for comment. Source
  5. We may soon see GIFs in comments
Facebook is finally embracing the GIF, after many years of resisting the change. According to TechCrunch, Facebook will begin testing a GIF button that allows users to post GIFs from services such as Giphy or Tenor as comments. "Everyone loves a good GIF, and we know that people want to be able to use them in comments. So we're about to start testing the ability to add GIFs to comments, and we'll share more when we can, but for now we repeat that this is just a test," Facebook said in a statement. As per usual with any Facebook test, the new feature will only be available to a small group of Facebook users, but it has the potential to roll out to everyone if it proves popular. Taking into consideration the high usage of GIFs on other platforms, it's pretty much certain that we'll all see this feature in our news feeds soon.
Borrowing from Messenger
The feature will apparently work similarly to the GIF button you can find in Facebook Messenger, allowing users to browse trending animations or to search for specific reactions. While sharing GIFs as News Feed posts won't be possible, you will be able to comment on other people's posts with them. For many years now, Facebook has shied away from fully embracing GIFs, mostly out of "fear" that they would distract users from the News Feed and what it is supposed to be at its core - a way to connect with your friends and to find out the latest things they are interested in. Then again, this was a valid concern back before the News Feed became inundated with auto-play videos that are just as flashy and distracting, and even more annoying than GIFs. While some changes have been made to permit GIFs, namely sharing them via direct URLs, this is the first time sharing them will be made easy. Source
  6. Facebook Live now works on desktop
Facebook is now allowing desktop users to broadcast live videos too, a decision that will certainly impact the platform quite a bit. It's been about a year since Facebook introduced the ability to broadcast live videos for mobile users. It looks like the company had been working on bringing the feature to other platforms, most likely following the success the mobile version had. Truthfully, the feature had been around for a while, but only a few select desktop users had the ability to actually broadcast live video. Now, everyone will have the same powers. What does this mean for Facebook? Well, it just turned the platform into a major competitor for the likes of Twitch and YouTube Gaming. That's because desktop streamers will be able to broadcast video from external hardware, as well as streaming software. In short, live Facebook videos will now include gameplay footage and on-screen graphics, as well as picture-in-picture videos for those who like this particular feature.
Easy peasy to go live
Streaming content from your desktop is just as easy as it is from your mobile device. You just have to select Live Video from the posting area on top of the News Feed or Timeline, hit "Next" and start broadcasting immediately. One thing people are grumbling about online is the fact that Facebook live videos don't bring users any money, unlike, let's say, Twitch. It looks, however, like Facebook is working on making this happen. Whenever it does, it will be more than welcome, especially considering the massive user base the platform has and the increased reach this type of video can have. This is an interesting step for Facebook to take, and one that will certainly make a difference for a very large number of users. On that note, beware of the increased influx of live videos on your feed. Source
  7. Mark yourself as safe on Facebook
Facebook activated its Safety Check feature for people in London following the events that took place on Wednesday afternoon, when a suspected terrorist ran a car into pedestrians and then stabbed a police officer as he tried to get into the Houses of Parliament in Westminster. Safety Check is a feature that was introduced back in October 2014, and it allows people to inform their friends and family that they're safe following a disaster or other incident, such as a terrorist attack or a large accident. For instance, the last time Safety Check was activated in London was back in 2016, when a tram crash took place in Croydon. The Met Police was called to Westminster Bridge this afternoon after a car crashed into railings and gunshots were heard outside the Parliament. It was soon discovered that the car had crashed into pedestrians before the driver got out and headed for one of the entrances of the Parliament, where he stabbed a police officer before being taken down.
Attack with casualties
Four people died as a result of the attack, including the police officer, and another 20 people were injured. Half of these individuals were treated right at the scene. The area around the Houses of Parliament was placed on lockdown, and tube stations around the area were closed. As police remain vigilant, people were invited by Facebook to mark themselves as "safe" online, so their dear ones know not to worry too much about them right now. A few weeks ago, Facebook announced that it was expanding the Safety Check feature to include the option to offer help to those in need in times of disaster. This could particularly come in handy when wildfires take over, or when earthquakes leave people without a home and in need of a roof over their heads. Source
  8. Facebook Bans Devs From Creating Surveillance Tools With User Data
Without a hint of irony, Facebook has told developers that they may not use data from Instagram and Facebook in surveillance tools. The social network says that the practice has long been a contravention of its policies, but it is now tidying up and clarifying the wording of its developer policies. The American Civil Liberties Union, Color of Change and the Center for Media Justice put pressure on Facebook after it transpired that data from users' feeds was being gathered and sold on to law enforcement agencies. The re-written developer policy now explicitly states that developers are not allowed to "use data obtained from us to provide tools that are used for surveillance." It remains to be seen just how much of a difference this will make to the gathering and use of data, and there is nothing to say that Facebook's own developers will not continue to engage in the same practices. Deputy chief privacy officer at Facebook, Rob Sherman, has commented on the change. Transparency reports published by Facebook show that the company has complied with government requests for data. The secrecy such requests and dealings are shrouded in means that there is no way of knowing whether Facebook is engaged in precisely the sort of activity it is banning others from performing. Source
  9. Fake news is the plague we need to fight against
Tim Berners-Lee, the man we have to thank for inventing the World Wide Web, believes several things need to be done to secure the future of the web and make it a platform that benefits humanity: fighting fake news, pushing back against surveillance over-reach, and making political advertising more transparent. As the World Wide Web turns 28, Berners-Lee celebrates the occasion. He writes that over the past year he has become increasingly worried about three trends that he believes harm the web. The first thing the world needs to fight against is fake news. "Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting," Berners-Lee writes. "The net result is that these sites show us content they think we'll click on - meaning that misinformation, or 'fake news', which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain." He takes things a step further and names names. He believes that we must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while also avoiding the creation of any central bodies that decide what's true or not, because that's another problem altogether.
It's not just fake news to fight against
Another thing we need to fight against is government over-reach in surveillance laws, including in court, if need be. "We need more algorithmic transparency to understand how important decisions that affect our lives are being made, and perhaps a set of common principles to be followed," he adds.
Political advertising online needs to be more transparent, he believes, especially considering that during the 2016 US elections as many as 50,000 variations of adverts were being served every single day on Facebook. There are many problems plaguing the World Wide Web, but some are more pressing than others, it seems. The Web Foundation, which Berners-Lee leads, will be working on many of these issues as part of its new five-year strategy. "I may have invented the web, but all of you have helped to create what it is today. All the blogs, posts, tweets, photos, videos, applications, web pages and more represent the contributions of millions of you around the world building our online community. [...] It has taken all of us to build the web we have, and now it is up to all of us to build the web we want - for everyone," Tim Berners-Lee concludes. Source
  10. A new report into U.S. consumers' attitudes to the collection of personal data has highlighted the disconnect between commercial claims that web users are happy to trade privacy in exchange for 'benefits' like discounts and how those users actually feel. On the contrary, it asserts that a large majority of web users are not at all happy, but rather feel powerless to stop their data being harvested and used by marketers. The report's authors argue it's this sense of resignation that is resulting in data tradeoffs taking place - rather than consumers performing careful cost-benefit analysis to weigh up the pros and cons of giving up their data (as marketers try to claim). They also found that where consumers were most informed about marketing practices, they were also more likely to be resigned to not being able to do anything to prevent their data being harvested. "Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened," the authors write. "By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also - over time - for the institution of consumer commerce." "It is not difficult to predict widespread social tensions, and concerns about democratic access to the marketplace, if Americans continue to be resigned to a lack of control over how, when, and what marketers learn about them," they add.
The report, entitled The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, is authored by three academics from the University of Pennsylvania, and is based on a representative national cell phone and wireline phone survey of more than 1,500 Americans age 18 and older who use the internet or email "at least occasionally". Key findings on American consumers include that:
  • 91% disagree (77% of them strongly) that "If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing."
  • 71% disagree (53% of them strongly) that "It's fair for an online or physical store to monitor what I'm doing online when I'm there, in exchange for letting me use the store's wireless internet, or Wi-Fi, without charge."
  • 55% disagree (38% of them strongly) that "It's okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me."
The authors go on to note that "only about 4% agree or agree strongly" with all three of the above propositions. And even with a broader definition of "a belief in tradeoffs", they found just a fifth (21%) were comfortably accepting of the idea. So the survey found that very much a minority of consumers are happy with current data tradeoffs. The report also flags up that large numbers (often a majority) of U.S. consumers are unaware of how their purchase and usage data can be sold on or shared with third parties without their permission or knowledge - in many instances falsely believing they have greater data protection rights than they are in fact afforded by law. Examples the report notes include:
  • 49% of American adults who use the Internet believe (incorrectly) that by law a supermarket must obtain a person's permission before selling information about that person's food purchases to other companies.
  • 69% do not know that a pharmacy does not legally need a person's permission to sell information about the over-the-counter drugs that person buys.
  • 65% do not know that the statement "When a website has a privacy policy, it means the site will not share my information with other websites and companies without my permission" is false.
  • 55% do not know it is legal for an online store to charge different people different prices at the same time of day.
  • 62% do not know that price-comparison sites like Expedia or Orbitz are not legally required to include the lowest travel prices.
Data-mining in the spotlight
One thing is clear: the great lie about online privacy is unraveling. The obfuscated commercial collection of vast amounts of personal data in exchange for 'free' services is gradually being revealed for what it is: a heist of unprecedented scale. Behind the bland, intellectually dishonest facade that claims there's 'nothing to see here', a gigantic data-mining apparatus has been maneuvered into place, atop vast mountains of stolen personal data. Stolen because it has never been made clear to consumers what is being taken, and how that information is being used. How can you consent to something you don't know or understand? Informed consent requires transparency and an ability to control what happens, both of which are systematically undermined by companies whose business models require that vast amounts of personal data be shoveled ceaselessly into their engines. This is why regulators are increasingly focusing attention on the likes of Google and Facebook. And why companies with different business models, such as hardware maker Apple, are joining the chorus of condemnation. Cloud-based technology companies large and small have exploited and encouraged consumer ignorance, concealing their data-mining algorithms and processes inside proprietary black boxes labeled 'commercially confidential'.
The larger entities spend big on pumping out a steady stream of marketing misdirection - distracting their users with shiny new things, or proffering up hollow reassurances about how they don't sell your personal data. Make no mistake: this is equivocation. Google sells access to its surveillance intelligence on who users are via its ad-targeting apparatus - so it doesn't need to sell actual data. Its intelligence on web users' habits and routines and likes and dislikes is far more lucrative than handing over the digits of anyone's phone number. (The company is also moving in the direction of becoming an online marketplace in its own right - by adding a buy button directly to mobile search results. So it's intending to capture, process and convert more transactions itself - directly choreographing users' commercial activity.) These platforms also work to instill a feeling of impotence in users in various subtle ways, burying privacy settings within labyrinthine submenus and technical information in unreadable terms and conditions - doing everything they can to fog rather than fess up to the reality of the gigantic tradeoff lurking in the background. Yet slowly but surely this sophisticated surveillance apparatus is being dragged into the light. The privacy costs involved for consumers who pay for 'free' services by consenting to invasive surveillance of what they say, where they go, who they know, what they like, what they watch, what they buy, have never been made clear by the companies involved in big data mining. But costs are becoming more apparent, as glimpses of the extent of commercial tracking activities leak out.
And as more questions are asked, the discrepancy between the claim that there's 'nothing to see here' and the reality of a sleepless surveillance apparatus peering over your shoulder - logging your pulse rate, reading your messages, noting what you look at, for how long and what you do next, all to optimize the lifting of money out of your wallet - means the true consumer cost of 'free' becomes more visible than it has ever been. The tradeoff lie is unraveling, as the scale and implications of the data heist are starting to be processed. One clear tipping point here is NSA whistleblower Edward Snowden who, two years ago, risked life and liberty to reveal how the U.S. government (and many other governments) were involved in a massive, illegal logging of citizens' digital communications. The documents he released also showed how commercial technology platforms had been appropriated and drawn into this secretive state surveillance complex. Once governments were implicated, it was only a matter of time before the big Internet platforms, with their mirror data-capturing apparatus, would face questions. Snowden's revelations have had various reforming political implications for surveillance in the U.S. and Europe. Tech companies have also been forced to take public stances - either loudly defending user privacy, or being implicated by silence and inaction. Another catalyst for increasing privacy concerns is the Internet of Things. A physical network of connected objects blinking and pinging notifications is itself a partial reveal of the extent of the digital surveillance apparatus that has been developed behind commercially closed doors. Modern consumer electronics are hermetically sealed black boxes engineered to conceal complexity.
But the complexities of hooking all these 'smart' sensornet objects together, and placing so many data-sucking tentacles on display, in increasingly personal places (the home, the body), start to make surveillance infrastructure and its implications uncomfortably visible. Plus this time it's manifestly personal. It's in your home and on your person - which adds to a growing feeling of being creeped out and spied upon. And as more and more studies highlight consumer concern about how personal data is being harvested and processed, regulators are also taking notice - and turning up the heat. One response to growing consumer concerns about personal data came this week with Google launching a centralized dashboard for users to access (some) privacy settings. It's far from perfect, and contains plentiful misdirection about the company's motives, but it's telling that this ad-fueled behemoth feels the need to be more pro-active in its presentation of its attitude and approach to user privacy.
Radical transparency
The Tradeoff report authors include a section at the end with suggestions for improving transparency around marketing processes, calling for "initiatives that will give members of the public the right and ability to learn what companies know about them, how they profile them, and what data lead to what personalized offers" - and for getting consumers "excited about using that right and ability".
Among their suggestions to boost transparency and corporate openness are:
  • Public interest organizations and government agencies developing clear definitions of transparency that reflect consumer concerns, and then systematically calling out companies on how well or badly they are doing based on these values, in order to help consumers 'vote with their wallets'
  • Activities to "dissect and report on the implications of privacy policies" - perhaps aided by crowdsourced initiatives - so that complex legalese is interpreted and its implications explained for a consumer audience, again allowing for good practice to be praised (and vice versa)
  • Advocating for consumers to gain access to the personal profiles companies create on them, in order for them to understand how their data is being used
"As long as the algorithms companies implement to analyze and predict the future behaviors of individuals are hidden from public view, the potential for unwanted marketer exploitation of individuals' data remains high. We therefore ought to consider it an individual's right to access the profiles and scores companies use to create every personalized message and discount the individual receives," the report adds. "Companies will push back that giving out this information will expose trade secrets. We argue there are ways to carry this out while keeping their trade secrets intact." They're not the only ones calling for algorithms to be pulled into view either - back in April the French Senate backed calls for Google to reveal the workings of its search ranking algorithms. In that instance the focus is commercial competition, to ensure a level playing field, rather than user privacy per se, but it's clear that more questions are being asked about the power of proprietary algorithms and the hidden hierarchies they create.
Startups should absolutely see the debunking of the myth that consumers are happy to trade privacy for free services as a fresh opportunity for disruption — to build services that stand out because they aren’t predicated on the assumption that consumers can and should be tricked into handing over data and having their privacy undermined on the sly. Services that stand upon a futureproofed foundation where operational transparency inculcates user trust — setting these businesses up for bona fide data exchanges, rather than shadowy tradeoffs. By Natasha Lomas https://techcrunch.com/2015/06/06/the-online-privacy-lie-is-unraveling/
  11. Messenger Day Last year, Facebook announced that it started testing a new feature that allows users to take pictures and share them with friends on Messenger. The feature resembled Snapchat’s Stories and Facebook has just announced that it’s now available globally. Since its launch at the end of last year, billions of pictures and videos have been shared using Messenger’s built-in camera. Messenger users added various effects and stickers, as well as frames, to the images that they’ve shared on Messenger. Facebook also began testing a new feature in which users could add pictures to their Messenger Day for friends to see and reply to. The images would disappear in 24 hours, very similar to Snapchat’s Stories section. Today, Facebook announced that it started rolling out Messenger Day globally, to all Android and iOS smartphones. Share with all or just a group of people In order to add pictures, users simply need to tap on the camera icon, snap a picture and tap on “Add to your day” in Messenger. Users can pick from effects and smiley face icons to add to the image or video. They can also add text over images or overlay a drawing. Add images to your day with Messenger Day Moreover, Messenger Day allows users to save images and videos to their camera roll or choose to send them to a specific person or group of people. Images or videos automatically disappear on their own after 24 hours. Images or videos in conversations can be added to the Messenger Day section, and Facebook offers users the option to share images with all of their friends on Messenger or only a few users. In addition, they can take down a shared image if they decide that they wish to delete it. Recently, Facebook started testing reaction emojis in its Messenger app, similar to the emojis that can be added to Facebook posts. One of the emojis is a thumbs down dislike icon, but it’s unclear when it will roll out to everyone. Source
  12. Facebook is in the process of implementing new suicide prevention tools, including streamlined reporting on its Facebook Live application, in the wake of two suicides livestreamed from the platform earlier this year, according to the Associated Press. One new tool released Wednesday allows viewers of a livestream to report if the broadcast is suicidal in nature and prompt Facebook to intervene by reaching out to emergency services, the AP reports. It will also provide the user broadcasting with onscreen resources such as the option to talk to a friend or contact a helpline. Users who report a suicidal broadcast will receive resources for helping the person in crisis until further help arrives, according to Facebook’s announcement yesterday. As the announcement points out, Facebook has provided users with suicide prevention resources for over a decade and worked with organizations including the National Suicide Prevention Lifeline and Crisis Text Line to better understand how to support users in crisis. However, this new reporting tool expands the options for users to both report suicidal content and receive crisis support specifically through the Facebook Live application. The suicide of 14-year-old Nakia Venant, which was streamed live on Facebook from her Florida foster home on Jan. 22, was one of at least three incidents of livestreamed suicide this year, according to the AP. One day after Venant’s suicide, 33-year-old Frederick Jay Bowdy used Facebook Live to broadcast his suicide from his car in North Hollywood, the Los Angeles Times reported. As previously reported by Forensic Magazine, viewers of each broadcast attempted to send help, but emergency services could not arrive in time to prevent the suicides. Founder and CEO of Facebook Mark Zuckerberg briefly mentioned livestreamed suicides in a letter in mid-February, which included a section on building a safe community and preventing self-harm as well as providing support during any kind of crisis. 
“There have been terribly tragic events—like suicides, some live streamed—that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” the letter states. “These stories show we must find a way to do more.” In addition to the new reporting tools, Facebook is also beginning a video campaign, according to yesterday's announcement, that will highlight suicide prevention awareness and include a collaboration with its partners in mental health and crisis support. Yesterday, Facebook and Crisis Text Line also announced they will be partnering to provide users with 24/7 crisis support that will allow a user who is considering self-harm or suicide to reach a trained crisis counselor directly through Facebook Messenger, according to a statement from Crisis Text Line. Currently in testing is a way for Facebook to use artificial intelligence to spot suicidal posts and take emergency action if necessary, according to Facebook’s announcement. This process, called pattern recognition, would identify concerning posts and either highlight the option for viewers to report them, or automatically flag them for review by members of Facebook’s Community Operations team. In his Feb. 16 statement, Zuckerberg said the technology was “very early in development,” but that it could be used both to aid in suicide prevention and help spot terrorist propaganda, making the website’s community safer overall. “One of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” Zuckerberg stated. Source
  13. Do you know how much of your information is out there? Facebook is a powerful platform, and maybe more so than you realize. If you really understand the quirks of its search function, for example, you can snoop for all photos posted by single females that a particular friend has liked. Creepy, right? When Facebook launched a feature called Graph Search in 2013 that allowed users to easily do just this, a lot of people thought so, too. Facebook has quietly back-burnered the service and focused on other aspects of search. But Graph Search is still functional, although most folks probably don't use it due to its complexity, and the fact that Facebook is no longer pushing it as a discrete service. Now, Belgian "ethical hacker" Inti De Ceukelaire has created a web interface that lets you make the most out of Graph Search, aptly called Stalkscan. Stalkscan, which launched today, is meant to highlight how much information Facebook users post about themselves, perhaps without thinking about the privacy implications, De Ceukelaire told me over email. "Graph Search and its privacy issues aren't new, but I felt like it never really reached the man on the street," De Ceukelaire wrote. "With my actions and user-friendly tools I want to target the non-tech-savvy people because most of them don't have a clue what they are sharing with the public." Because Graph Search is only available in English on Facebook, the feature wasn't known to many in De Ceukelaire's native Belgium until his tool drew attention to it. Now the Belgian media is having a shitfit, and local reports say that the country's top privacy official has called for an investigation into whether Facebook adequately protects users. It's important to note that Stalkscan only allows you to use Facebook's existing search functions, and that it won't circumvent privacy settings. 
In other words, if you're not someone's friend on Facebook already and they've set it so that only friends can see their posts, you won't be able to get around that with Stalkscan. What it does do is generate boutique search links that Facebook understands. This allows you to make hyper-specific searches that would be nigh-impossible to pull off without Stalkscan. How would one even formulate a sentence to search for, to use the example again, all photos posted by single females liked by a friend? With Stalkscan, that search takes just a few clicks. "Like most services, we offer a search feature, but search on Facebook is built with privacy in mind," a Facebook spokesperson said in an emailed statement. "[Stalkscan] merely redirects to Facebook's existing search result page. As with any search on Facebook, you can only see content that people have chosen to share with you." I did manage to use Stalkscan in one instance that would seem to, in spirit at least, violate someone's privacy. One Facebook friend chooses to unlist the "events" button on their public page so that stalkers can't easily find out which parties they've attended. Stalkscan showed me a list of all the past events they've attended when I searched their profile. As for what people can do to make sure that information they thought was hidden doesn't appear on Stalkscan, De Ceukelaire had some advice. "I'd advise people to check themselves first while logged in into a friend's account," he wrote. "If they see stuff they don't want to, they may want to remove tags, likes or photos from their profile. This way, they at least know what other people can see." A Facebook spokesperson emphasized that the platform allows users to take control of their privacy, if they wish. 
"We offer a variety of tools to help people control their information, including the ability to select an audience for every post, a feature that limits visibility of past posts to only your friends, and education efforts launched in consultation with Belgian safety experts," the spokesperson wrote in a statement. By Jordan Pearson https://motherboard.vice.com/en_us/article/facebooks-creepiest-search-tool-is-back-thanks-to-this-site
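The trick behind a Stalkscan-style tool is plain URL construction: Graph Search understands structured paths such as facebook.com/search/&lt;profile-id&gt;/photos-liked, so the tool only has to assemble the right path for a given profile and let Facebook's own search (and its privacy checks) do the rest. A minimal sketch of that idea — the endpoint slugs here are illustrative examples of historical Graph Search patterns and may no longer resolve:

```python
# Sketch of composing Graph Search URLs the way Stalkscan-style tools did.
# The slugs ("photos-liked", etc.) are historical examples, not a current API.

BASE = "https://www.facebook.com/search"

def graph_search_url(profile_id: str, *segments: str) -> str:
    """Build a structured Graph Search URL such as /search/<id>/photos-liked."""
    path = "/".join([profile_id, *segments])
    return f"{BASE}/{path}"

# e.g. all photos a given profile has liked:
print(graph_search_url("4", "photos-liked"))
# https://www.facebook.com/search/4/photos-liked
```

Because the tool merely redirects to Facebook's own search result page, the visitor still only sees what the target has shared with them — exactly as the Facebook spokesperson notes above.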
  14. Following an unprecedented live streaming "piracy fest," Facebook and Foxtel are working on a new tool that should make it easier to shut down unauthorized streams in the future. Foxtel CEO Peter Tonagh compares piracy to "stealing a loaf of bread" and says the company will do everything in its power to stop live streaming from gaining traction. A week ago hundreds of thousands of people watched unauthorized Facebook live streams of a highly anticipated rematch between two Aussie boxers. Pay TV channel Foxtel, which secured the broadcasting rights for the event, was outraged by the blatant display of piracy and vowed to take the main offenders to court. This weekend, however, things had calmed down a bit. Foxtel did indeed reach out to the culprits, some of whom had more than 100,000 people watching their unauthorized Facebook streams. The company decided to let them off the hook if they published a formal apology. Soon after, the two major streaming pirates in this case both admitted their wrongdoing in similarly worded messages. “Last Friday I streamed Foxtel’s broadcast of the Mundine v Green 2 fight via my Facebook page to thousands of people. I know that this was illegal and the wrong thing to do,” streamer Brett Hevers wrote. “I unreservedly apologize to Anthony Mundine and Danny Green, to the boxing community, to Foxtel, to the event promoters and to everyone out there who did the right thing and paid to view the fight. It was piracy, and I’m sorry.” But that doesn’t mean that the streaming piracy problem is no longer an issue. Quite the contrary. Instead of investing time and money in legal cases, Foxtel is putting its efforts into stopping future infringements. In an op-ed for the Herald Sun, Foxtel CEO Peter Tonagh likens piracy to stealing, a problem that’s particularly common Down Under. “It is no less of a crime than stealing a loaf of bread from a supermarket or sneaking into a movie theater or a concert without paying. 
Yet, as a nation, Australians are among the worst offenders in the world,” Tonagh writes. Foxtel’s CEO sees illegal live streaming as the third wave of piracy, following earlier trends of smart card cracking and file-sharing. The Facebook piracy fest acted as a wake-up call and Tonagh says the company will do everything it can to stop it from becoming as common as the other two. “Rest assured we will work even harder to address this piracy before it gets out of control. The illegal streaming of the Mundine v Green fight nine days ago was a wake-up call. It was the first time that Foxtel had experienced piracy of a live event on a mass scale,” he notes. Over the past several days, Foxtel and Facebook have been working on a new technology which should be able to recognize pirated streams automatically and pull them offline soon after they are started. This sounds a lot like YouTube’s Content-ID system, but for live broadcasts. “We are working on a new tool with Facebook that will allow us to upload a large stream of our events to Facebook headquarters where it can be tracked,” Tonagh tells The Australian (behind a paywall). “If that content is matched on users’ accounts where it’s being streamed without our authorisation then Facebook will alert us and pull it down,” he adds. The initiative will be welcomed by other rightsholders, who face the same problem. Having an option to have Facebook recognize infringing content on the fly is likely to make it much easier to stop these streams from going viral. That said, live streaming piracy itself is much broader and not particularly new. There are dozens of niche pirate sites that have been offering unauthorized streams for many years already, and they’re not going anywhere anytime soon. Source: TorrentFreak
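The tool Tonagh describes works like fingerprint matching: the rightsholder uploads a reference copy of the broadcast, and the platform compares compact signatures of incoming streams against it, flagging a stream once enough frames match. A toy sketch of that matching step — the signature scheme and thresholds here are my own illustration, not the actual Facebook/Foxtel system:

```python
# Toy Content-ID-style matcher (illustration only, not the real system).
# Each frame is assumed to be pre-reduced to a short perceptual bit
# signature; a re-encoded pirate stream produces signatures that differ
# from the reference by only a bit or two per frame.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def is_pirated(reference_sigs, stream_sigs, max_dist=2, min_hits=3):
    """Flag a stream when enough frames are close to reference frames."""
    hits = 0
    for sig in stream_sigs:
        if any(hamming(sig, ref) <= max_dist for ref in reference_sigs):
            hits += 1
    return hits >= min_hits

reference = [0b1010_1100, 0b1111_0000, 0b0001_1110]
pirate = [0b1010_1101, 0b1111_0010, 0b0001_1111, 0b0110_0110]
print(is_pirated(reference, pirate))  # True
```

Real systems fingerprint audio and video far more robustly, but the core loop — compare, count hits, take the stream down past a threshold — is the same idea.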
  15. The 2016 U.S. Presidential election cycle won’t be soon forgotten. It shattered old conventions and introduced a completely new way of running a campaign, including fake news. No doubt some of that content was generated for political purposes. But, for better or worse, some fake news was created simply for profit. For social media giants Facebook (NASDAQ:FB) and Google (NASDAQ:GOOGL), this new trend represents a challenge that can greatly affect the monetization of their platforms. If the billions of consumers and businesses that use these two brands can’t rely on the information they are accessing, advertisers may drop support for these channels. On the other hand, could small content creators face backlash whether their content is truly fake news or simply viewed that way by these digital behemoths? Facebook and Google Will Crack Down on Fake News Facebook has just announced a new initiative to identify authentic content because, as the company puts it, stories that are authentic resonate more with its community. During the election, the social media giant was criticized for doing very little to combat fake news. Instead, Facebook tried to outsource the task of identifying this content to third parties including five fact-checking organizations: the Associated Press, ABC News, Factcheck.org, Snopes and PolitiFact. However, the new update ranks authentic content by incorporating new signals to better identify what is true or false. These signals are delivered in real-time when a post is relevant to a particular user. The signals are determined by analyzing overall engagement on pages to identify spam as well as posts that specifically ask for likes, comments or shares — since these might indicate an effort to spread questionable content. As for Google, the tech company released its 2017 Bad Ads report. Google says the report plays an important role in making sure users have access to accurate and quality information online. 
Still, the report addresses only ads thus far. Google warns more broadly that the sustainability of the web could be threatened if users cannot rely on the information they find there. https://smallbiztrends.com/2017/02/facebook-and-google-will-crack-down-on-fake-news.html
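One of the signals described above — posts that explicitly ask for likes, comments or shares — is simple enough to illustrate. A toy scoring function follows; the phrase list and penalty scheme are mine, purely for illustration, and Facebook's real ranking signals are far more sophisticated:

```python
# Toy "engagement bait" detector: count explicit begs for likes,
# comments, or shares. (Phrase list and scoring invented for illustration.)

BAIT_PHRASES = ("like this post", "share if", "comment below", "tag a friend")

def authenticity_penalty(post_text: str) -> int:
    """Higher penalty = more likely the post is begging for engagement."""
    text = post_text.lower()
    return sum(phrase in text for phrase in BAIT_PHRASES)

print(authenticity_penalty("Local council approves new bike lanes."))      # 0
print(authenticity_penalty("SHARE IF you agree!! Comment below and tag a friend!!!"))  # 3
```

A ranking system would then down-rank posts with a high penalty, which is the behavior the announcement describes.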
  16. Remember the last time you posted a picture on Facebook and it automatically suggested tagging other people in the photo? Nothing unusual. You’ve tagged these people before, right? You’ve trained the machine learning face-recognition algorithm. And now Facebook can spot where they are on your picture. Now, even if you refuse to tag anyone, this doesn’t mean Facebook never stores this information somewhere. Like, “person A is potentially present on picture B”. Actually, I’m almost 100% sure they do store it. Hell, I would if I was them. I bet you already see where I’m going with this. Now imagine you take a selfie in a crowded place. Like an airport or a train station. There are some people walking in the background. Hundreds of them. Some of them facing the camera. Guess what: Facebook’s AI has just spotted them. Even if you’re extremely cautious, even if you never post anything on Facebook, even if you have “location services” disabled on your phone at all times etc. etc. Facebook still knows where you are. You can’t stop other people from taking selfies in an airport. Now all these Jason Bourne movies don’t look so ridiculous any more, do they? All the stupid scenes with people in a control room shouting “OK, we need to find this guy, quick, oh, there he is, Berlin Hauptbahnhof arrival hall just 20 minutes ago, send the asset!” or something like that. “DeepFace” This is not just me being paranoid. Various sources indicate that Facebook uses a program it calls DeepFace to match other photos of a person. Alphabet Inc.’s cloud-based Google Photos service uses similar technology. The efficiency is astonishing According to the company’s research, DeepFace recognizes faces with an accuracy rate of 97.35 percent, compared with 97.5 percent for humans — including mothers. Face recognition has been built into surveillance systems and law enforcement databases for a while now. 
We could soon have security cameras in stores that identify people as they shop (source) Even being in “readonly” mode doesn’t help Every time you simply check Facebook without actually posting anything — the app generates a post draft for you, ever seen this? If you have a link or a picture saved in your clipboard, it even offers to attach that to your post. And of course, it has your location. How can you be sure it does not communicate that data to the servers? Actually, I’m pretty sure it does, since the app generates that “preview image” of the link stored in your clipboard (you know, that nicely formatted headline with the cover image). There’s even more. Some evidence suggests that Facebook collects your keystrokes before you actually hit the “Post” button! If you then choose to backspace everything you’ve typed — too late… Facebook has about 600 terabytes of data coming in on a daily basis (source, 2014). If I were the NSA I would definitely approach Facebook for this data. UPDATE: a little privacy tip: use Facebook in mobile Safari, with an adblocker, and delete the iOS native app — helps a lot AND saves you from tons of ads and 3rd party cookie tracking. I’m sure there’s a similar solution for Android. On a desktop — use an extension like Disconnect to block 3rd party cookie tracking. UPDATE 2: there’s a great article if you want to know more — https://veekaybee.github.io/facebook-is-collecting-this/ By Alex Yumashev https://www.jitbit.com/alexblog/260-facebook-is-terrifying/
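Under the hood, systems like DeepFace reduce each face to an embedding vector and declare two faces the same person when the vectors are close enough. A stripped-down sketch of that comparison step — the vectors and threshold are invented for illustration, and real embeddings have hundreds of dimensions:

```python
# Conceptual sketch of embedding-based face matching (DeepFace-style
# systems work on this principle; the numbers here are made up).
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def same_person(emb_a, emb_b, threshold=0.8):
    """Match when the embeddings' similarity clears the threshold."""
    return cosine(emb_a, emb_b) >= threshold

alice_tagged = [0.9, 0.1, 0.3]     # embedding from a photo Alice was tagged in
airport_face = [0.88, 0.12, 0.33]  # face detected in a stranger's selfie
print(same_person(alice_tagged, airport_face))  # True
```

This is why tagging trains the system: every confirmed tag gives it another labeled embedding, and every new photo — yours or a stranger's — can then be matched against that gallery.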
  17. Facebook and privacy campaigner party to action by Data Protection Commissioner Helen Dixon: Ireland’s Data Protection Commissioner wants the High Court to refer issues concerning the validity of data transfer channels to Europe for determination. Photograph: Brenda Fitzsimons A court case with potentially enormous implications for EU-US trade, as well as the privacy rights of hundreds of millions of EU citizens, will open before the commercial division of the High Court on Tuesday. Facebook is a party to the case, in which the Data Protection Commissioner has asked the court to refer to the EU’s top court the question of whether certain contracts protect the privacy rights of EU citizens when their personal data is transferred outside Europe. Austrian privacy campaigner Max Schrems was also joined to the proceedings by the commissioner and is in Dublin for the case, which is expected to last three weeks. The US government and a number of business and privacy groups have been permitted to join the case as amici curiae, or “friends of the court”. There is likely to be considerable interest in any submission on behalf of the US government, particularly in light of the change in administration in January. The Data Protection Commissioner, Helen Dixon, wants the High Court to refer issues concerning the validity of data transfer channels, known as standard contractual clauses and approved by various European Commission decisions, to the Court of Justice of the European Union (CJEU) for determination. Data transfer The clauses were designed to allow businesses to transfer the personal data of EU citizens to countries outside the European Economic Area while ensuring the citizens enjoyed equivalent privacy rights to those they have in the EU. Ms Dixon’s office brought proceedings after making a draft finding in May last year that Mr Schrems had raised well-founded objections to whether the channels breached the data privacy rights of EU citizens. 
An earlier complaint to the commissioner by Mr Schrems in relation to Facebook’s handling of his personal data in the US, made in the wake of the Edward Snowden revelations on US surveillance in 2013, ultimately ended up before the CJEU. In that case in October 2015, the court struck down the Safe Harbour framework used by about 4,500 companies to transfer personal data to the United States, saying that US national security, public interest and law enforcement requirements prevailed over the scheme. In an update published on the commissioner’s website, the office said it was seeking a referral to Europe because Ms Dixon had concerns about the validity of the standard contractual clauses when considered in the light of a number of factors, including articles 7, 8 and 47 of the Charter of Fundamental Rights of the European Union, and the CJEU’s judgment in the first Schrems case. Mr Schrems did not proceed last November with an application to protect his exposure to costs, after both Facebook and the commissioner agreed not to pursue him for costs. Argue against referral It is expected that both Mr Schrems and Facebook will argue against referral to the CJEU, albeit for different reasons. Facebook is satisfied that the standard contractual clauses provide adequate safeguards for privacy. The commissioner will make opening submissions on Tuesday. Short opening statements from Mr Schrems and Facebook will follow. In July last year, Mr Justice Brian McGovern ruled that the US government, the Business Software Alliance, Digital Europe and the Electronic Privacy Information Centre (Epic) be joined to the proceedings as “friends of the court”. This allows them to offer expert assistance on the issues. The commissioner’s office says it will publish updates on its website as the hearing progresses. By Elaine Edwards http://www.irishtimes.com/business/technology/major-privacy-case-to-open-before-high-court-in-dublin-1.2964424
  18. Facebook Makes Its Privacy Settings Much Clearer Facebook has made lots of changes to its privacy settings over the years, usually in a bid to make them simpler to understand and use, yet many people just stick with the defaults. Facebook’s new Privacy Basics aims to make it much easier for people to find the tools they need to control their information on the social network. Created, Facebook says, using user feedback, Privacy Basics puts all of the top privacy topics and frequently asked questions within easy reach. There are 32 interactive guides available, in 44 different languages. It provides tips for securing your account, and understanding who can see your posts, what your profile looks like to others, and so on. The update comes as part of Data Privacy Day, which takes place every year on January 28. Source
  19. How To Use Facebook Messenger Without A Facebook Account You Can Now Use Facebook Messenger Without A Facebook Account, Know How With over one billion users worldwide, Facebook Messenger is now one of the biggest messaging platforms. In order to have the Facebook Messenger app on your device, you normally need an active Facebook account. However, there are a lot of reasons why many people may not want to use Facebook itself, only the Messenger app. For instance, Facebook staples like pyramid schemes, political debates, and pointless status updates can fill some users with rage, and using such a social media site is a big no-no for them. Similarly, there are users who are not interested in keeping up with friends online and would rather catch up over a cup of coffee or on the phone instead of through liking each other’s perfect social media posts. But what about those people who want to keep in touch with certain people who are not on any other platform except Facebook Messenger? In such a scenario, is it possible to use the Facebook Messenger app without having an active Facebook account? Yes, it is. You can stay in touch with your friends via Facebook Messenger by following the steps below: Open Facebook’s deactivate account page. Ignore the photos of the people who will apparently miss you and scroll to the bottom. The last option says you can continue using Facebook Messenger even if you deactivate your account. Make sure this is not checked and just leave it as is. Scroll down and hit Deactivate. Now, your Facebook account will be deactivated. All your Facebook data will be safe until you are ready to log in again. Go ahead and open the Messenger app using your old Facebook credentials on your smartphone or log in via the website on your PC. You will notice that you can continue chatting with all your friends without losing any of your data. Please note that your deactivated Facebook account doesn’t get reactivated if you are using Messenger. 
Your friends will only be able to contact you via the chat window in Facebook or the Messenger app. If you want to use Messenger and don’t have a Facebook account, then follow the instructions mentioned below: Download Facebook Messenger on iOS, Android, or Windows Phone. Open the app and enter your phone number. Tap Continue. You will get a code via SMS to confirm your number. Once you have done that you can key in phone numbers of your friends and start messaging them. Source
  20. Mozilla: The Internet Is Unhealthy And Urgently Needs Your Help Mozilla argues that the internet's decentralized design is under threat by a few key players, including Google, Facebook, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce, and search. Can the internet as we know it survive the many efforts to dominate and control it, asks Firefox maker Mozilla. Much of the internet is in a perilous state, and we, its citizens, all need to help save it, says Mark Surman, executive director of Firefox maker the Mozilla Foundation. We may be in awe of the web's rise over the past 30 years, but Surman highlights numerous signs that the internet is dangerously unhealthy, from last year's Mirai botnet attacks, to market concentration, government surveillance and censorship, data breaches, and policies that smother innovation. "I wonder whether this precious public resource can remain safe, secure and dependable. Can it survive?" Surman asks. "These questions are even more critical now that we move into an age where the internet starts to wrap around us, quite literally," he adds, pointing to the Internet of Things, autonomous systems, and artificial intelligence. In this world, we don't use a computer, "we live inside it", he adds. "How [the internet] works -- and whether it's healthy -- has a direct impact on our happiness, our privacy, our pocketbooks, our economies and democracies." Surman's call to action coincides with nonprofit Mozilla's first 'prototype' of the Internet Health Report, which looks at healthy and unhealthy trends that are shaping the internet. Its five key areas include open innovation, digital inclusion, decentralization, privacy and security, and web literacy. Mozilla will launch the first report after October, once it has incorporated feedback on the prototype. That there are over 1.1 billion websites today, running on mostly open-source software, is a positive sign for open innovation. 
However, Mozilla says the internet is "constantly dodging bullets" from bad policy, such as outdated copyright laws, secretly negotiated trade agreements, and restrictive digital-rights management. Similarly, while mobile has helped put more than three billion people online today, there were 56 internet shutdowns last year, up from 15 shutdowns in 2015, it notes. Mozilla fears the internet's decentralized design, while flourishing and protected by laws, is under threat by a few key players, including Facebook, Google, Apple, Tencent, Alibaba and Amazon, monopolizing messaging, commerce and search. "While these companies provide hugely valuable services to billions of people, they are also consolidating control over human communication and wealth at a level never before seen in history," it says. Mozilla approves of the wider adoption of encryption today on the web and in communications but highlights the emergence of new surveillance laws, such as the UK's so-called Snooper's Charter. It also cites as a concern the Mirai malware behind last year's DDoS attacks, which abused unsecured webcams and other IoT devices, and is calling for safety standards, rules and accountability measures. The report also draws attention to the policy focus on web literacy in the context of learning how to code or use a computer, which ignores other literacy skills, such as the ability to spot fake news, and separate ads from search results. Source Alternate Source - 1: Mozilla’s First Internet Health Report Tackles Security, Privacy Alternate Source - 2: Mozilla Wants Infosec Activism To Be The Next Green Movement
  21. Explained — What's Up With the WhatsApp 'Backdoor' Story? Feature or Bug! What is a backdoor? By definition: "Backdoor is a feature or defect of a computer system that allows surreptitious unauthorized access to data." The backdoor can be in the encryption algorithm, a server, or an implementation, and it doesn't matter whether it has previously been used or not. Yesterday, we published a story based on findings reported by security researcher Tobias Boelter that suggests WhatsApp has a backdoor that "could allow" an attacker, and of course the company itself, to intercept your encrypted communication. The story involving the world's largest secure messaging platform, which has over a billion users worldwide, went viral in a few hours, attracting reactions from security experts, the WhatsApp team, and Open Whisper Systems, who partnered with Facebook to implement end-to-end encryption in WhatsApp. Note: I would request readers to read the complete article before reaching a conclusion. And also, suggestions and opinions are always invited. What's the Issue: The vulnerability relies on the way WhatsApp behaves when an end user's encryption key changes. WhatsApp, by default, trusts a new encryption key broadcast by a contact and uses it to re-encrypt undelivered messages and send them without informing the sender of the change. In my previous article, I have elaborated this vulnerability with an easy example, so you can head on to read that article for better understanding. Facebook itself admitted to this WhatsApp issue reported by Boelter, saying that "we were previously aware of the issue and might change it in the future, but for now it's not something we're actively working on changing." What Experts argued: According to some security experts — "It's not a backdoor, rather it’s a feature to avoid unnecessary re-verification of encryption keys upon automatic regeneration." 
Open Whisper Systems says — "There is no WhatsApp backdoor," "it is how cryptography works," and the MITM attack "is endemic to public key cryptography, not just WhatsApp." A spokesperson from WhatsApp, acquired by Facebook in 2014 for $16 billion, says — "The Guardian's story on an alleged backdoor in WhatsApp is false. WhatsApp does not give governments a backdoor into its systems. WhatsApp would fight any government request to create a backdoor." What's the fact: Notably, none of the security experts or the company has denied the fact that, if required, WhatsApp, on government request, or state-sponsored hackers can intercept your chats. What all they have to say is — WhatsApp is designed to be simple, and users should not lose access to messages sent to them when their encryption key is changed. Open Whisper Systems (OWS) criticized the Guardian reporting in a blog post saying, "Even though we are the creators of the encryption protocol supposedly "backdoored" by WhatsApp, we were not asked for comment." What? "...encryption protocol supposedly "backdoored" by WhatsApp…" NO! No one has said it's an "encryption backdoor;" instead the backdoor resides in the way end-to-end encryption has been implemented by WhatsApp, which eventually allows interception of messages without breaking the encryption. As I mentioned in my previous story, this backdoor has nothing to do with the security of the Signal encryption protocol created by Open Whisper Systems. It's one of the most secure encryption protocols if implemented correctly. Then Why Is Signal More Secure than WhatsApp? You might be wondering why the Signal private messenger is more secure than WhatsApp, while both use the same end-to-end encryption protocol, and are even recommended by the same group of security experts who are arguing — "WhatsApp has no backdoor." It's because there is always room for improvement. The Signal messaging app, by default, allows a sender to verify a new key before using it. 
WhatsApp, by contrast, by default automatically trusts the recipient's new key with no notification to the sender. And even if the sender has turned on security notifications, the app notifies the sender of the change only after the message is delivered. So here WhatsApp chose usability over security and privacy.

It's not about 'Do We Trust WhatsApp/Facebook?':

WhatsApp says it does not give governments a "backdoor" into its systems. No doubt the company would fight the government if it received any such court order, and it is currently doing its best to protect the privacy of its one-billion-plus users. But what about state-sponsored hackers? Because, technically, there is no 'reserved' backdoor that only the company can access.

Why the 'Verifying Keys' Feature Can't Protect You:

WhatsApp also offers a third security layer with which you can verify the keys of the users you are communicating with, either by scanning a QR code or by comparing a 60-digit number.

But here's the catch: this feature ensures that no one is intercepting your messages or calls at the moment you are verifying the keys, but it does not ensure that no one intercepted your encrypted communication in the past or will intercept it in the future, and there is currently no way to detect that.

WhatsApp's Prevention Against Such MITM Attacks Is Incomplete:

WhatsApp already offers a "security notifications" feature that notifies users whenever a contact's security code changes, though you need to turn it on manually in the app settings. But this feature is not enough to protect your communication without the use of another ultimate tool, which is — common sense.

Have you received a notification indicating that your contact's security code has changed? Instead of offering 'security by design,' WhatsApp wants its users to use their common sense and avoid communicating with a contact whose security key has changed recently, without first verifying the key manually.
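The difference between the two defaults — Signal holding delivery until an unverified new key is confirmed, versus WhatsApp silently accepting the key and re-sending queued messages — can be sketched as a toy model. All names, and the `encrypt(...)` placeholder, are illustrative assumptions; this is not actual WhatsApp or Signal code:

```python
# Toy model of two client policies for handling a contact's key change
# while messages are still queued for delivery.

class Client:
    def __init__(self, auto_trust_new_keys):
        # True  -> WhatsApp-style default: silently accept new keys
        # False -> Signal-style default: block until the user verifies
        self.auto_trust_new_keys = auto_trust_new_keys
        self.trusted_keys = {}   # contact -> currently trusted public key
        self.outbox = []         # undelivered (contact, plaintext) pairs

    def queue_message(self, contact, plaintext):
        self.outbox.append((contact, plaintext))

    def on_key_change(self, contact, new_key):
        """The server announces a new key for `contact`. A legitimate
        reinstall and a MITM substituting its own key look identical."""
        if self.auto_trust_new_keys:
            self.trusted_keys[contact] = new_key
            # Re-encrypt and re-send pending messages to the new,
            # unverified key -- without alerting the sender.
            return [f"encrypt({m!r}, {new_key})"
                    for c, m in self.outbox if c == contact]
        # Hold delivery; nothing goes out until manual verification.
        return []

whatsapp_like = Client(auto_trust_new_keys=True)
whatsapp_like.queue_message("alice", "hi")
print(whatsapp_like.on_key_change("alice", "attacker_key"))
# -> ["encrypt('hi', attacker_key)"]  (message exposed to the new key)

signal_like = Client(auto_trust_new_keys=False)
signal_like.queue_message("alice", "hi")
print(signal_like.on_key_change("alice", "attacker_key"))
# -> []  (delivery blocked until the sender verifies the key)
```

The point of the sketch is that the client, not the encryption protocol, decides what to do when a key changes — which is why the issue is an implementation choice rather than an "encryption backdoor."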
The fact is that WhatsApp regenerates your security key so frequently (for some reason) that one starts ignoring such notifications, making it practically impossible for users to check and verify the authenticity of session keys every single time.

What should WhatsApp do?

Without panicking its one-billion-plus users, WhatsApp can, at least:

  • Stop regenerating users' encryption keys so frequently (I honestly don't know why the company does so).
  • Offer an option in the settings for privacy-conscious people which, if turned on, would not automatically trust a new encryption key and would hold messages until the key is manually accepted or verified by the user.

...because, just like others, I also hate using two apps for communicating with my friends and work colleagues, i.e. Signal for privacy and WhatsApp because everyone uses it.

Source
  22. WhatsApp Security: Make This Change Right Now!

Security researchers recently found a backdoor in the popular messaging application WhatsApp that could allow WhatsApp to intercept and read user messages. Facebook, the owner of WhatsApp, claims that it is impossible to intercept messages on WhatsApp thanks to the service's end-to-end encryption. The company states that no one, not even itself, can read what is sent when both sender and recipient use the latest version of the application.

It turns out, however, that there is a way for WhatsApp to read user messages, as security researcher Tobias Boelter (via The Guardian) found out.

Update: In a statement sent to Ghacks, a WhatsApp spokesperson provided the following insight on the claim:

WhatsApp has the power to generate new encryption keys for users who are not online. The sender's client then automatically re-sends any undelivered messages, encrypting them with the new key. The recipient is never made aware of the change, and the sender is notified only if WhatsApp is configured to display security notifications — an option that is not enabled by default.

While WhatsApp users cannot block the company -- or any state actors requesting data -- from taking advantage of the loophole, they can at least activate security notifications in the application.

The security researcher reported the vulnerability to Facebook in April 2016, according to The Guardian. Facebook's response was that it was "intended behavior," according to the newspaper.

Activate security notifications in WhatsApp

To enable security notifications in WhatsApp, do the following:

  • Open WhatsApp on the device you are using.
  • Tap on menu, and select Settings.
  • Select Account on the Settings page.
  • Select Security on the page that opens.
  • Enable "Show security notifications" on the Security page.
You will receive notifications when a contact's security code has changed. While this won't prevent misuse of the backdoor, it will at least inform you about its potential use.

Source

Alternate Source - 1: WhatsApp Encryption Has Backdoor, Facebook Says It's "Expected Behaviour"
Alternate Source - 2: WhatsApp Backdoor allows Hackers to Intercept and Read Your Encrypted Messages
Alternate Source - 3: Oh, for F...acebook: Critics bash WhatsApp encryption 'backdoor'
Alternate Source - 4: Your encrypted WhatsApp messages can be read by anyone
Alternate Source - 5: How to protect yourself from the WhatsApp 'backdoor'
Alternate Source - 6: 'Backdoor' in WhatsApp's end-to-end encryption leaves messages open to interception [Updated]

Detailed Explanation of the Issue and Prevention/Alternatives:
  23. Facebook Is Ready To Censor Posts In China -- Should Users Around The World Be Worried?

Facebook's relationship with China has a tense and turbulent history. The social network is currently banned in China, and this clearly takes a huge chunk out of Facebook's ad revenue. In a bid to keep Chinese authorities happy, Mark Zuckerberg has been involved in the creation of software that can be used to monitor and censor posts made by users.

In terms of playing by China's rules, this is clearly great news for Facebook, and it opens up the possibility of the social network operating in the country. While there is the slight silver lining that Facebook's censorship tool does not amount to a full blackout (as the Guardian puts it: "The posts themselves will not be suppressed, only their visibility"), the new program raises a very important question: if Facebook is willing and able to create such a censorship tool for China, what's to stop it from doing the same for other markets, or even for its own benefit?

The answer, of course, is 'nothing'. Facebook has shown time and time again that it is more than happy to fly in the face of popular user opinion and do whatever it wants. We have already seen some of the ways in which the social network is willing to tinker with users' news feeds. Increasingly controversial algorithms have been used for some time to tailor news and posts in a way that Facebook says is in users' interests. There is nothing to stop these algorithms from being further tweaked to suppress certain posts or certain types of content -- be that at Facebook's whim, or at the behest of governments around the world.

Of course, the counter-argument is that it would not be in Facebook's interest to introduce censorship outside of China. Except the Chinese case has very much indicated that it is in Facebook's interest to use censorship tools.
In China, it is a matter of bowing to governmental demands in order to -- hopefully, in Facebook's view -- be allowed to operate in the country once again. The real driving force here is, as mentioned, money generated through advertising; this is the very reason why we should be wary of Facebook's development of a censorship tool, and fear its use elsewhere.

Just as with the covert activities of the NSA, there would be nothing to stop Facebook from using a censorship tool without making it clear to users. After all, Facebook is free to do whatever it wants with content that is posted, so long as it stays within the law. It is not a stretch to imagine a high-profile advertiser applying pressure on Facebook to put a damper on certain opinions and threatening to withdraw its advertising. Money talks, so it is hardly inconceivable that Facebook might at least be tempted to comply with such a demand -- and users would be none the wiser.

What's happening in China -- and, indeed, in Russia and other countries -- is great cause for concern. Facebook does not have a great track record when it comes to maintaining user trust (just look at the fake news problem), and as news of tools such as this starts to spread, any trust that does remain is only going to be further undermined.

Source
  24. Facebook apologizes for 'terrible error' that told people they died

A Facebook bug on Friday caused people's profile pages to display a message saying they had died. Multiple Business Insider employees reported seeing the message at the top of their Facebook profiles Friday afternoon, and the bug even affected Facebook CEO Mark Zuckerberg. As of around 4 p.m. ET, people started reporting that the message was gone from their profiles. Facebook later apologized for the "terrible error" in a statement to Business Insider.

"For a brief period today, a message meant for memorialized profiles was mistakenly posted to other accounts," a Facebook spokesperson said. "This was a terrible error that we have now fixed. We are very sorry that this happened and we worked as quickly as possible to fix it."

Before the bug was fixed, visitors to the Facebook CEO's profile were greeted with a somber notification: "We hope people who love Mark will find comfort in the things others share to remember and celebrate his life," the message read. The message included a link to Facebook's request form for memorializing the account of someone who has died. People using Facebook's app also reported seeing the message.

Article source