Showing results for tags 'fake news'.

Found 21 results

  1. On paper, they would seem to have little in common. Tun Khin is a human rights activist who advocates for the persecuted Rohingya Muslims in his home country of Myanmar. Jessikka Aro is a Finnish journalist who exposed the international influence of Russian propagandists at the Internet Research Agency long before the rest of the world had ever heard of them. Lenny Pozner is an American father who lost his 6-year-old son, Noah, in the shooting at Sandy Hook Elementary in 2012. Ethan Lindenberger is almost a kid himself, a high school student who’s become a vaccination proponent despite his parents’ anti-vaccination beliefs.

[Photo caption: Ethan Lindenberger, seen here testifying before the Senate in March about his parents' anti-vaccine stance, is among those who've seen firsthand how dangerous online disinformation can be.]

But all four of them are bound by one unfortunate and common thread: They’ve all seen firsthand just how ugly—and downright dangerous—the spread of fake news and disinformation online can be. Which is why this week, they gathered in Silicon Valley to talk with tech executives about what they’ve been through and what they want tech companies to do about it. The group met with Twitter on Tuesday, and another meeting was planned at Facebook Wednesday afternoon.

The meetings, which were organized by a nonprofit advocacy group called Avaaz, come at a time of fierce debate over what responsibility tech companies have to limit the spread of toxic content on their platforms. Just last week, Facebook announced it was banning seven people, including Infowars conspiracy theorists Alex Jones and Paul Joseph Watson, under a policy that prohibits “dangerous individuals” from having any presence on Facebook. The bans prompted President Trump to lash out against tech companies over the weekend, ramping up accusations of censorship that have become a constant drumbeat on the right.
The discussions organized by Avaaz served as a counterpoint to all that pressure, as individual victims of online harassment campaigns came forward to tell tech companies exactly how they’ve been hurt by the hate and hoaxes that have festered on their platforms. “Our job as advocates is to make them stop for a minute and think about the implications of not acting fast enough,” says Oscar Soria, a senior campaigner with Avaaz. During Tuesday’s meeting with Twitter, the attendees took turns telling their stories. Aro shared the details of the global smear campaign that was lodged against her, after her reporting outed the Internet Research Agency. She explained the threats that have been made against her life and read a recent direct message she received while traveling in the Czech Republic, in which a stranger threatened to “castrate” her if she ever came back to the country. Aro says the harassment she’s received violates Finnish defamation laws, and she is in the process of pursuing cases against some of her harassers in court. And yet, she says, the complaints she’s filed to Twitter and Facebook often go unanswered, leaving local investigators to do the work the American companies won't. “I'm basically here, to put it simply, to give a user report live, because they haven't reacted to the ones that I have made online,” Aro says. Khin described the trauma he’s seen in Rohingya refugee camps and pressed Twitter about why it continues to provide safe haven for Senior General Min Aung Hlaing, the commander-in-chief of the Myanmar military. The military was behind some of the accounts that notoriously flooded Facebook with anti-Islam rhetoric, and the United Nations called for its leaders to face genocide charges last year. Facebook has since banned Min Aung Hlaing and other accounts and pages that the UN linked to human rights abuses in the country. While the general's Twitter account hasn’t been active since last year, it remains up on the platform today. 
“He was the mastermind of the Rohingya genocide. The UN has said he was personally responsible. And Facebook has already banned him. What more evidence do they need?” Khin wrote in a tweet following the meeting. Lindenberger, meanwhile, discussed how his parents came to believe anti-vaccination propaganda on social media, leaving him and his siblings exposed to potentially deadly viruses like the measles. According to Soria, Lindenberger told Twitter executives that after he testified about this issue before the Senate, he himself became the subject of a disinformation campaign. Recently, he said, his own pastor told him to avoid church for his own protection. (WIRED wasn't able to reach Lindenberger.) Pozner, for his part, has faced such violent threats that he is participating in the meetings remotely. Ever since the Sandy Hook tragedy took his son's life, Pozner and his family have been forced to live in hiding, hounded by online death threats from people who believe that the shooting was a hoax. The conspiracy theory, propagated by figures like Alex Jones, has no basis in reality. Now, Pozner runs a non-profit called HONR Network aimed at ending online harassment campaigns, helping its victims, and working with tech companies to change their policies. Of all the tech platforms, Pozner says, Twitter has the farthest to go in terms of cracking down on hoaxes and harassment. "Twitter has allowed their platform to be used as a weapon of mass destruction for which they must take accountability," he says. Twitter spokesperson Liz Kelley told WIRED that the conversation on Tuesday centered on how Twitter can prohibit the “manipulation of the conversation, not serving as the arbiters of truth,” and how Twitter is enforcing the policies against hate speech and violent threats that are already in place. “Hearing these stories is a valuable way for us to inform our decisions and product investments going forward,” Kelley said. 
Facebook confirmed its executives met with the group, but declined to offer further comment. Avaaz's organizers also hoped to meet with executives from Google, whose video platform YouTube has helped promote some of the internet's worst conspiracies. As of Wednesday afternoon, a meeting with Google had not yet been scheduled. In addition to giving the group a chance to share their stories, Avaaz also encouraged Facebook and Twitter to adopt a policy that would alert people when they've been exposed to information marked false by third-party fact-checkers. Facebook has taken steps to expand fact-checking on its platform, recently announcing that it will limit the visibility of groups that repeatedly share content marked as false by fact-checkers. And just this week the company announced that fact-checkers will also begin vetting information on Instagram. Avaaz wants to see Twitter adopt its own fact-checking policy and to see Facebook build upon the one that's already in place. "This is a necessary step to restore public trust," Soria says. Social media companies have been historically reluctant to make such editorial decisions on their platforms. And, given the recent heightened accusations of liberal bias in Silicon Valley, including from the President of the United States, making decisions about who is right and wrong on the internet comes with risks for these companies. Pozner just hopes these meetings will underscore the fact that the risks he and other victims have faced are so much greater. "I am a strong proponent of the First Amendment, and free speech is an essential aspect of American society. However, there is a fundamental misunderstanding of people's rights and responsibilities online," Pozner says. 
"A person cannot violate my civil rights to be free of harassment, bullying, or to have my likeness manipulated and my family targeted with death threats and intimidation and then simply attempt to hide behind 'free speech.'" Update: 9:27 AM ET 5/9/2019 This story has been updated to include confirmation from Facebook about its meeting with Avaaz and to clarify the nature of Aro's reporting on the Internet Research Agency. Source
  2. (SINGAPORE) — Singapore reportedly has passed a law criminalizing publication of fake news and allowing the government to block and order the removal of such content.

[Photo caption: Singapore Prime Minister Lee Hsien Loong]

The Protection from Online Falsehoods and Manipulation Bill passed Wednesday night by a vote of 72-9, a lawmaker with the opposition Worker’s Party, Daniel Goh, said on Twitter. The law bans falsehoods that are prejudicial to Singapore or likely to influence elections and requires service providers to remove such content or allows the government to block it. Offenders could face a jail term of up to 10 years and hefty fines. Opponents in Parliament said it gave government ministers too much power to determine what was false and defined public interest too broadly.

The Straits Times newspaper reported that Law Minister K. Shanmugam said the orders to correct or remove false content would mostly be directed at technology companies, rather than individuals who ran afoul of the law without intent. Prime Minister Lee Hsien Loong last month defended the proposed law, saying many countries had such laws and that Singapore had debated the issue for two years. He rejected criticism that the law could further stifle free speech in Singapore, which already has stern laws on public protests and dissent. “They criticized many things about Singapore’s media management, but what we have done have worked for Singapore. And it is our objective to continue to do things that will work for Singapore. And I think (the new law) will be a significant step forward in this regard,” he said on a visit to Malaysia.

Speaking at the same news conference, Malaysian Prime Minister Mahathir Mohamad warned such laws were a double-edged sword that could be abused by governments to stay in power. Malaysia’s own fake news ban was rushed into law by the government Mahathir’s coalition ousted in a shock election result in 2018. Mahathir has promised to try to repeal the law, though a first attempt to do so failed.
Human Rights Watch sharply criticized the law. It is a “disaster for online expression by ordinary Singaporeans” and a “hammer blow” against the independence of online news portals, said Phil Robertson, the group’s deputy Asia director. Source
  3. In a bid to fight fake news read while on your phone, Microsoft’s mobile Edge browser on Android and iOS now includes the NewsGuard extension. The addition, noted by The Guardian, needs to be toggled on within the Edge settings menu to be enabled. Once it is, Edge will display a small shield icon next to the site’s URL in the search bar: a green shield with a checkmark for a trusted news site, and a red shield with an exclamation point inside of it for a site that NewsGuard believes isn’t always accurate. (Some sites haven’t been evaluated, and these will simply show a gray shield.)

[Screenshot caption: In this screenshot captured in Microsoft Edge on Android, NewsGuard recommends that you “Proceed with caution: This website generally fails to maintain basic standards of accuracy and accountability.”]

NewsGuard isn’t there to protect you from phishing or to alert you that the site may be hosting a bad ad that may infect your phone. Instead, it’s there as a sort of anti-malware for your mind. Clicking on the shield brings up a summary of how NewsGuard sees the site: whether it responsibly presents information, corrects errors quickly, and clearly labels ads. In certain cases, sites will be given a green shield but NewsGuard will flag problems that won’t be revealed unless you click on the shield.

[Screenshot caption: This site, according to NewsGuard, “generally maintains basic standards of accuracy and accountability.”]

It’s a proactive move for Microsoft, which does not offer the same sort of integration within its desktop Edge browser. There, NewsGuard is merely an available extension. (To enable it, you’ll need to access the ellipsis menu in the top right-hand corner, navigate to Extensions, then manually search for the NewsGuard plugin.) Within the mobile browser, though, NewsGuard is off by default. You’ll need to go into the Settings menu, scroll down to News rating, and then toggle on NewsGuard.
Note that Edge also has a built-in relationship with Adblock Plus, which you can toggle on under Content blockers.

[Screenshot caption: You’ll need to enter the Edge settings to toggle NewsGuard on.]

What this means for you: The problem is that the only way to enable this on your phone is to download Edge manually, access the Settings, turn on the feature, and enable Edge as your default browser, rather than the default Chrome (or Safari) browser—which is what probably 99 percent of all users already have configured. That’s a lot of steps to help stop your crazy uncle from forwarding the latest viral news story that Barack Obama was born on Venus. But every little bit counts, right? Source
  4. As much as YouTube has done to counter hoaxes and fake news in its searches, it still has room for improvement. The Washington Post discovered that "more than half" of YouTube's top 20 search results for "RBG," the nickname for US Supreme Court Justice Ruth Bader Ginsburg, were known fake conspiracy theory videos. In fact, just one of the results came from a well-established news outlet. And if you played one of those videos, the recommendations quickly shifted to more extreme conspiracies.

[Photo caption: Ruth Bader Ginsburg]

The site addressed the skewed results shortly after the Post got in touch, promoting more authentic videos. There were still conspiracy videos in the mix as of this writing, however. In a statement to the newspaper, YouTube's Farshad Shadloo acknowledged that "there's more to do" in curbing false videos. The internet giant has implemented a number of measures to combat false videos, including fact-checking text boxes to counter some falsehoods. However, the "RBG" incident highlights the limitations of its current approach. It still relies on search algorithms that present results based primarily on relevance, not accuracy, and it doesn't offer fact checks for a wide range of subjects, including developing news. Although it's unlikely that many people will be sucked into a conspiracy black hole, this makes life difficult for both vulnerable people and those simply looking for trustworthy videos. Source
  5. Trump has made a lot of things buzzworthy, but perhaps none more than “fake news.” Everyone has strong opinions about who is at fault for spreading lies in the press. It’s “the media’s” fault. It’s Trump’s fault. Before Trump, it was the National Enquirer’s fault. It’s Facebook’s fault. You name a source, and they’ve probably been blamed. And while all of those entities have played their part in this epidemic, I’m going to tell you something that might be hard for you to hear: it’s your fault. Or I’ll say it more nicely… it’s our fault.

The nature of the media has changed, and for better or worse, outlets now chiefly operate to attract readers in order to survive, and we are those readers. If we want to see an end to fake news, we need to stop clicking on it, and stop spreading it. Our click is worth money. You’ve probably heard the phrase “vote with your dollar” applied to things like purchasing fair trade items. But you can, and do, vote with your clicks, too. Here are some tips for spotting fake news and some realities we have to accept if we want to participate in and encourage a free, honest, informative press.

Why is this happening?

News outlets are businesses, and they have owners who care about profits, like any business. When there were widely read physical newspapers, these organizations only needed a few strong cover stories to sell a paper and make a profit. Now that people consume news online, people are buying way fewer papers. Now it’s about getting clicks and ad revenue, and that is done on a per-story basis. Each story needs to be clickable, or it’s worth nothing to the news outlet. So each story needs to have a sexy headline and a provocative photo, which tempts outlets into being as bombastic as possible with every single story. Furthermore, news outlets are under intense pressure to get stories out as quickly as possible. Getting traffic is about speed.
The faster a story goes up, the more traffic it will get because it’s the first to hit social media and collect rapid shares. This tempts news outlets into publishing unverified stories, figuring they can later issue corrections or retractions if they’re wrong. And since no one is punished for this, the practice continues. But there is a way to fight it. You can punish them by not clicking on totally ridiculous headlines. You can punish them by not sharing unverified reports. Share substantiated stories from reliable sources and direct traffic their way.

How to spot fake news

Fake news is mostly easy to spot, when you know what to look for. Here are the most common types of fake news you should know about:

Outright lies: Totally fake stories that are picked up by mainstream media, like Pizzagate.

“Reports”: This tactic is used by many mainstream media outlets to post stories they cannot verify. Headlines like “Report: Evil clowns take over Washington” mean that the newspaper can’t itself verify that the clowns are taking over, but is spreading a story that was reported elsewhere. They think that by writing “report” in front of their headline, it excuses them from responsibility for its veracity. This is the equivalent of a kid in the lunchroom saying “I didn’t see it myself, but I heard that the quarterback is secretly dating all the cheerleaders. But I don’t know if it’s true.” By writing disclaimers, news organizations decide they can post stories based on what they’ve heard but not been able to substantiate themselves. Disclaimer or not, it still spreads the “news” just the same.

Denials: Outlets can spread a story they cannot verify by spreading a denial instead. For example, let’s say a reporter hears that Godzilla has risen from the sea. But that reporter can’t find any evidence that this is the case.
They can call the US Coast Guard, and then post a story that says “US Coast Guard Denies Godzilla Has Risen from the Sea.” While that denial is true, the outlet also managed to spread the Godzilla rumor that it could not substantiate by writing the story in an inverse way.

Misleading headlines: Exaggerations or half-truths in headlines are a constant, almost accepted occurrence. If it sounds too one-sided, it probably is. If it sounds totally absurd, it probably is. If you feel you’re about to open clickbait, you probably are.

How you can fight it: Stop paying the piper

Every time you click on a fake news story, you are paying its publisher. You are voting for that content. Every time you share fake news, even because you think it’s funny or absurd, you are selling that content and making its publisher money. People stopped buying cassette tapes, so they don’t make them anymore. The only way to stop the production of fake news is to stop buying it.

Media should consider utilizing new technology to vet their contributors and sources. Blockchain tech can authenticate identities in ways that weren’t possible until now. Blockchain can also provide immutable location and timestamps, making it harder to forge reports. Keep a lookout for news services that offer this kind of authentication and reward them with your clicks, and deny your attention to sources that don’t.

Facebook has given a lot of lip service to its desire to fight fake news, but I’m sure you agree that you still see tons of it on your feed. Facebook recently launched a web series about how it’s using fact-checkers and AI to catch fake news and misinformation. But so far, when they do catch it, they’re just tweaking that post’s stats so it doesn’t appear on news feeds as much, not removing the post. Is that enough? And are they even obligated to referee the news if they’re not providing it? Do people have a right to spread falsehoods if there are no damages?
For the sake of this discussion, it doesn’t matter. Don’t click on the crazy stories you see. If you know it’s fake but think it’s funny, you’re still paying with your click. Stop it. In this climate, it’s hard to believe that we ever outsourced the definition of the truth to other organizations without any question, but it’s very clear those days are over. Source
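The "immutable timestamps" idea the article gestures at can be sketched in a few lines. This is a toy illustration of the underlying tamper-evidence mechanism only, not any real news-authentication service: each record's hash covers the previous record's hash, so silently rewriting an old report breaks every later link in the chain.

```python
import hashlib
import json

def chain(records):
    """Return a list of (record, hash) links; each hash covers the previous one."""
    links, prev = [], "0" * 64
    for rec in records:
        payload = prev + json.dumps(rec, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        links.append((rec, digest))
        prev = digest
    return links

def verify(links):
    """Recompute every hash; any edit to an earlier record invalidates the chain."""
    prev = "0" * 64
    for rec, digest in links:
        payload = prev + json.dumps(rec, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

reports = [{"headline": "Storm hits coast", "ts": "2018-09-13"},
           {"headline": "Cleanup begins", "ts": "2018-09-14"}]
links = chain(reports)
assert verify(links)                        # untampered chain checks out
links[0][0]["headline"] = "Aliens hit coast"
assert not verify(links)                    # a retroactive edit is detected
```

A real blockchain adds distributed consensus on top of this hash-linking, which is what makes the timestamps hard for any single party to forge.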
  6. After numerous infamous cases of people in India and Brazil falling prey to fake news spread on WhatsApp, the problem is now spreading to Nigeria. The West African nation is hosting its national elections in February next year, and a report from The Poynter Institute says its citizens are at risk of being conned by misinformation surrounding political parties – and it’s reaching people through WhatsApp.

Researcher Allwell Okpi found that rumors about ethnicities and political candidates often spread through WhatsApp in Nigeria, in local languages. According to the report, people using the Facebook-owned service often receive doctored or miscaptioned images. One of the prime examples included photos of Nigerian soldiers allegedly killed by the Boko Haram terrorist group. However, those turned out to be recycled photos from another incident, which involved the Kenyan Army in Somalia. One recent false rumor was about where politicians stand on a semi-nomadic tribe clashing with indigenous tribes and Christian farmers. Another one claimed that a presidential candidate, Atiku Abubakar, couldn’t enter the US because of corruption charges. Such misinformation could color people’s opinions of political candidates and skew their decisions to vote in the upcoming elections. A recent survey indicated that 28 percent of people in Nigeria had shared information which turned out to be bogus.

The Facebook-owned chat application has taken some measures to battle fake news. It imposed a forward limit in India and Brazil to stop mass forwarding of messages. It even banned 100,000 accounts just before the elections in Brazil. In India, the company recently appointed a grievance officer and a country head. It’s also making an effort to spread awareness offline through newspaper ads and theater. The company even launched a TV campaign today to warn people about misinformation.
But as we noted, WhatsApp alone can’t be blamed for the spread of misinformation; it’s up to the government and the nation’s people to develop a culture of questioning the veracity of the information they receive through new channels of communication. While WhatsApp‘s had a tough 2018, next year will put it under more pressure because of the upcoming elections in India and Nigeria. It’ll be interesting to see if the company can figure out ways to battle the spread of fake news without breaking its end-to-end message encryption. Source
  7. At least not for decades to come. Sorry, Mark Zuckerberg.

[Dr. Marcus is a professor of psychology and neural science. Dr. Davis is a professor of computer science.]

In his testimony before Congress this year, Mark Zuckerberg, the chief executive of Facebook, addressed concerns about the strategically disseminated misinformation known as fake news that may have affected the outcome of the 2016 presidential election. Have no fear, he assured Congress, a solution was on its way — if not next year, then at least “over a five- to 10-year period.” The solution? Artificial intelligence.

Mr. Zuckerberg’s vision, which the committee members seemed to accept, was that soon enough, Facebook’s A.I. programs would be able to detect fake news, distinguishing it from more reliable information on the platform. With midterms approaching, along with the worrisome prospect that fake news could once again influence our elections, we wish we could say we share Mr. Zuckerberg’s optimism. But in the near term we don’t find his vision plausible. Decades from now, it may be possible to automate the detection of fake news. But doing so would require a number of major advances in A.I., taking us far beyond what has so far been invented.

As Mr. Zuckerberg has acknowledged, today’s A.I. operates at the “keyword” level, flagging word patterns and looking for statistical correlations among them and their sources. This can be somewhat useful: Statistically speaking, certain patterns of language may indeed be associated with dubious stories. For instance, for a long period, most articles that included the words “Brad,” “Angelina” and “divorce” turned out to be unreliable tabloid fare. Likewise, certain sources may be associated with greater or lesser degrees of factual veracity. The same account deserves more credence if it appears in The Wall Street Journal than in The National Enquirer. But none of these kinds of correlations reliably sort the true from the false.
In the end, Brad Pitt and Angelina Jolie did get divorced. Keyword associations that might help you one day can fool you the next. To get a handle on what automated fake-news detection would require, consider an article posted in May on the far-right website WorldNetDaily, or WND. The article reported that a decision to admit girls, gays and lesbians to the Boy Scouts had led to a requirement that condoms be available at its “global gathering.” A key passage consists of the following four sentences:

Was this account true or false? Investigators at the fact-checking site Snopes determined that the report was “mostly false.” But determining how it went astray is a subtle business beyond the dreams of even the best current A.I. First of all, there is no telltale set of phrases. “Boy Scouts” and “gay and lesbian,” for example, have appeared together in many true reports before. Then there is the source: WND, though notorious for promoting conspiracy theories, publishes and aggregates legitimate news as well. Finally, sentence by sentence, there are a lot of true facts in the passage: Condoms have indeed been available at the global gathering that scouts attend, and the Boy Scouts organization has indeed come to accept girls as well as gays and lesbians into its ranks.

What makes the article “mostly false” is that it implies a causal connection that doesn’t exist. It strongly suggests that the inclusion of gays and lesbians and girls led to the condom policy (“So what’s next?”). But in truth, the condom policy originated in 1992 (or even earlier) and so had nothing to do with the inclusion of gays, lesbians or girls, which happened over just the past few years. Causal relationships are where contemporary machine learning techniques start to stumble. In order to flag the WND article as deceptive, an A.I.
program would have to understand the causal implication of “what’s next?,” recognize that the account implies that the condom policy was changed recently and know to search for information that is not supplied about when the various policies were introduced. Understanding the significance of the passage would also require understanding multiple viewpoints. From the perspective of the international organization for scouts, making condoms available at a global gathering of 30,000 to 40,000 hormone-laden adolescents is a prudent public health measure. From the point of view of WND, the availability of condoms, like the admission of girls, gays and lesbians to the Boy Scouts, is a sign that a hallowed institution has been corrupted. We are not aware of any A.I. system or prototype that can sort among the various facts involved in those four sentences, let alone discern the relevant implicit attitudes. Most current A.I. systems that process language are oriented around a different set of problems. Translation programs, for example, are primarily interested in a problem of correspondence — which French phrase, say, is the best parallel of a given English phrase? But determining that someone is implying, by a kind of moral logic, that the Boy Scouts’ policy of inclusion led to condoms being supplied to scouts isn’t a simple matter of checking a claim against a database of facts. Existing A.I. systems that have been built to comprehend news accounts are extremely limited. Such a system might be able to look at the passage from the WND article and answer a question whose answer is given directly and explicitly in the story (e.g., “Does the Boy Scouts organization accept people who identify as gay and lesbian?”). But such systems rarely go much further, lacking a robust mechanism for drawing inferences or a way of connecting to a body of broader knowledge. As Eduardo Ariño de la Rubia, a data scientist at Facebook, told us, for now “A.I. 
cannot fundamentally tell what’s true or false — this is a skill much better suited to humans.” To get to where Mr. Zuckerberg wants to go will require the development of a fundamentally new A.I. paradigm, one in which the goal is not to detect statistical trends but to uncover ideas and the relations between them. Only then will such promises about A.I. become reality, rather than science fiction. Source
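The "keyword-level" approach the authors describe can be sketched in a few lines. Everything below is invented for illustration (the pattern sets, the scoring rule, the threshold); it is not Facebook's actual system, only a minimal example of flagging by statistical word association, along with the failure mode the article points out:

```python
# Score a headline by its overlap with word patterns historically
# associated with dubious stories. Patterns/threshold are invented.
DUBIOUS_PATTERNS = [
    {"brad", "angelina", "divorce"},   # the article's tabloid example
    {"shocking", "doctors", "hate"},   # a generic clickbait pattern
]

def keyword_score(headline: str) -> float:
    """Best fraction of any single dubious pattern present in the headline."""
    words = set(headline.lower().split())
    return max(len(pattern & words) / len(pattern) for pattern in DUBIOUS_PATTERNS)

def looks_dubious(headline: str, threshold: float = 0.66) -> bool:
    return keyword_score(headline) >= threshold

# The weakness the authors highlight: once Pitt and Jolie really did
# divorce, the same correlation flags a true story as dubious.
assert looks_dubious("Brad and Angelina finalize divorce")    # true story, flagged
assert not looks_dubious("Local council approves new park")   # no pattern overlap
```

The sketch has no notion of meaning, causality, or viewpoint, which is exactly why the authors argue this paradigm cannot, on its own, sort the true from the false.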
  8. The Facebook video is nuts, but I can’t tear my eyes away. A plane, struggling in a huge storm, does a 360-degree flip before safely landing and letting out terrified passengers. It turns out the video is totally bunk, spliced together from a computer-generated clip and unrelated real news footage. But that didn’t stop the Facebook post from arriving in my News Feed via a friend last month. I watched it. Maybe you did, too: It has nearly 14 million views.

Everyone now knows the Web is filled with lies. So then how do fake Facebook posts, YouTube videos and tweets keep making suckers of us? To find out, I conducted a forensic investigation of the fake that fooled my social network. I found the original creator of that CG plane clip. I spoke to the Facebook executive charged with curbing misinformation. And I confronted my friend who shared it. The motives for a crazy plane report may be different from posts misdirecting American voters or fuelling genocide in Myanmar. Yet some of the questions are the same: What makes fake news effective? Why did I end up seeing it? And what can we do about it?

Fake news creators “aren’t loyal to any one ideology or geography,” said Tessa Lyons, the product manager for Facebook’s News Feed tasked with reducing misinformation. “They are seizing on whatever the conversation is” — usually to make money. This year, Facebook will double the number of humans involved in fighting constantly morphing “integrity” problems on its network, to 20,000. Thanks in part to those efforts, independent fact-checkers and some new technologies, Facebook user interaction with known fake news sites has declined by 50 per cent since the 2016 US election, according to a study by Stanford and New York University. But if you think you’re immune to this stuff, you’re wrong. Detecting what’s fake in images and video is only getting harder. Misinformation is part of an online economy that weaponises social media to profit from our clicks and attention.
And with the right tools to stop it still a long way off, we all need to get smarter about it.

[Photo caption: Aristomenis Tsirbas, a Los Angeles-based director, made the computer-generated video of a plane doing a 360 that ended up being repurposed as part of a fake news report.]

Seeing is believing

The crazy plane video first appeared on September 13 on a Facebook page called Time News International. Its caption reads: “A Capital Airlines Beijing-Macao flight, carrying 166 people’s [sic], made an emergency landing in Shenzhen on 28 August 2018, after aborting a landing attempt in Macao due to mechanical failure, the airline said.” No real commercial plane did a 360 roll so close to the ground, but an emergency landing really did happen that August day in Macau.

Four days later, in Los Angeles, film director Aristomenis Tsirbas started getting messages from his friends. A year earlier, the computer graphics whiz had created and posted to YouTube a video he’d made showing a plane doing a 360. Someone had taken his work and used it at the beginning of a fake news report. “I realised, oh, my God, I’m part of the problem,” Tsirbas told me. The artist, who has worked on Titanic and Star Trek, has a hobby of creating realistic but implausible videos, often involving aliens. He posts them on YouTube, he said, in part to demonstrate CG and in part to make a little money from YouTube ads.

The photorealism of Tsirbas’s clip played a big role in making the fake story go viral. And that makes it typical: Misinformation featuring manipulated photos and videos is among the most likely to go viral, Facebook’s Lyons said. Sometimes, like in this case, it employs shots from real news reports to make it seem just credible enough. “The really crazy things tend to get less distribution than the things that hit the sweet spot where they could be believable,” Lyons said.
Even after decades of Photoshop and CG films, most of us are still not very good at challenging the authenticity of images, or telling the real from the fake. That includes me: In an online test made by software maker Autodesk called Fake or Foto, I correctly identified the authenticity of just 22 per cent of their images. (You can test yourself here.) Another lesson: Fake news often changes the context of photos and videos in ways their creators might never imagine. Tsirbas sees his work as pranks or satire, but he hasn’t explicitly labelled them that way. “They are clearly fakes,” he said. After we spoke, he wrote to say he’d now add a disclaimer to his CG videos: “This is a narrative work.” Satire, in particular, can lose important context unless it’s baked into an image itself. Another doctored fake news image, first posted to Twitter in 2017, appears to show President Trump touring a flooded area of Houston, handing a red hat to a victim. Artist Jessica Savage Broer, a Trump critic, told me she Photoshopped it to make a point about how people need to “use critical thinking skills.” But then earlier this year, supporters of the president started sharing it on Facebook — by the hundreds of thousands — as evidence of the president’s humanitarian work. The outrage algorithm Why would someone turn Tsirbas’s airplane video into a fake news report? There’s no clear answer, but there are clues. Time News International, the page that published it, did not respond to requests I sent via Facebook, an email address or a UK phone number listed on its page. Facebook’s Lyons said pages posting misinformation most often have an economic motive. 
They post links to articles on sites with just-believable-enough names that are filled with advertisements or spyware, which might attempt to invade our online privacy. Lyons’s team shared with me a half-dozen samples of fake news. But the links to money aren’t always immediately clear. The Time News International page doesn’t regularly link to outside articles, though it posts a lot of outrageous photos and videos about topics in the news. That has attracted a following of 225,000 people on Facebook, a base it could direct to content it might capitalise on in the future. Facebook and other social media companies deserve some of the blame. It’s easy to grow an audience for outlandish stories when publishing doesn’t require vetting, and algorithms are tuned to share the stuff that garners the greatest outrage. I saw that crazy video because Facebook decided I should. Fake news producers also use our friends to add to their credibility. When I saw the plane video, my suspicions weren’t on high alert because it came from my friend, whom I trust as a smart guy. He told me he realised later the video was a fake but thought comments on his post would alert his friends. “It’s just funny thinking about the steps by which we get duped,” he said. Stop spreading the news Facebook’s response to the plane video shows how far it’s come in the fight against fake news, and how far we have to go. On September 17, a few days after it was posted, the video was detected by Facebook’s machine-learning systems, programs that try to automatically detect fake news. The company won’t disclose exactly how those work, but it said the signals include what sorts of comments people leave on posts. Once the video was detected, Facebook passed it to its network of independent fact-checkers. After Snopes labelled it as “false,” Facebook made it show up less often in News Feeds. Why does the fake plane video remain up at a time when Facebook is making headlines for taking down other posts? 
Facebook said deletion is for violations of its community standards, such as pornography. “My job is to prevent misleading and false information from going viral,” Lyons said. “Even if something is false, we don’t prevent people from sharing it. We give them context.” That comes in the form of a label. Now when the video appears in a News Feed or someone attempts to share it, up pops “Additional Reporting On This,” with a link to reports from fact-checking organisations. Facebook said it also notified people who had already shared it, though my friend doesn’t recall seeing a warning. “I wouldn’t consider this a success from our side,” Lyons said. Typically, posts that Facebook demotes see an 80 per cent reduction in the total number of views, so it’s possible that, without Facebook’s action, the post could have been seen by tens of millions more people. (Later, Facebook’s automated systems also detected duplicates of the video being uploaded by other pages.) It’s also an issue of media literacy. Facebook and others have produced fliers such as “Tips for spotting false news,” but it’s hard to change a response that is both human and pretty fundamental to the social media experience. There have always been hoaxes, but perhaps we need time to internalise just how easy they’ve become to create. Lyons is already tracking the next generation of CG images, dubbed “deep fakes,” that don’t even require the expertise of a creator like Tsirbas. Instead, they use artificial intelligence to splice together bits from lots of existing videos to create, for example, a fake speech by a president. Maybe we’ll eventually learn to be less trusting of our friends, at least the online ones. The people we count on for important information in the real world aren’t always the people who fill our social media feeds. Or if you want to avoid being that friend: Before you spread the latest outrage online, stop and consider the source. Source
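A quick back-of-the-envelope sketch makes the demotion figure concrete. This is purely illustrative: it assumes the 80 per cent reduction applies uniformly and plugs in the article's rough 14 million view count.

```python
# If demotion removes ~80% of a post's potential views,
# the views we actually observe are the remaining ~20%.
observed_views = 14_000_000   # approximate view count of the fake plane video
demotion_factor = 0.80        # typical reduction cited for demoted posts

potential_views = observed_views / (1 - demotion_factor)
print(f"Implied views without demotion: {potential_views:,.0f}")
# prints: Implied views without demotion: 70,000,000
```

In other words, under these assumptions the demoted post still reached a fifth of the audience it might otherwise have had.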
  9. Apple’s Safari, one of the internet’s most popular web browsers, has been surfacing debunked conspiracies, shock videos, and false information via its “Siri Suggested Websites” feature. Such results raise questions about the company’s ability to monitor for low-quality information, and provide another example of the problems platforms run into when relying on algorithms to police the internet. As of yesterday, if you typed “Pizzagate” into Apple’s Safari, the browser’s “Siri Suggested Website” prominently offered users a link to a YouTube video with the title “PIZZAGATE, BIGGEST SCANDAL EVER!!!” by conspiracy theorist David Seaman (the video doesn’t play, since Seaman’s channel was taken down for violating YouTube’s terms of service). The search results appeared on multiple versions of Safari. Apple removed all examples of the questionable Siri Suggested sites provided to it by BuzzFeed News. "Siri Suggested Websites come from content on the web and we provide curation to help avoid inappropriate sites. We also remove any inappropriate suggestions whenever we become aware of them, as we have with these. We will continue to work to provide high-quality results and users can email results they feel are inappropriate to [email protected]" Safari isn’t the only browser to try to anticipate its users’ searches; Google has long delivered autocomplete search suggestions, which have occasionally been gamed or surfaced inappropriate content. However, Safari’s Siri Suggested Website feature goes one step further, autocompleting and suggesting a site for users to visit. Frequently, Siri Suggested dials up a Wikipedia page (as it does when you search for Apple CEO Tim Cook). 
But when BuzzFeed News entered incomplete search terms that might suggest contentious or conspiratorial topics, the search algorithms directed us toward low-quality websites, message boards, or YouTube conspiracy videos rather than reliable information or debunks about those topics. Meanwhile, Google does not feature such unreliable pages in its top search results. Those suggested results matter since Safari is one of the internet’s most popular web browsers — some estimates suggest it has captured over 10% of the browser market share. Other searches for conspiracies or popular fake news tropes returned similarly low-quality results. The browser also surfaced the recently popular QAnon conspiracy. Typing “QAnon is real” into the search bar delivers an autocomplete for a YouTube video with the title “The Calm Before the Storm – QAnon Is the Real Deal.” Google’s search results for the same phrase surface a number of articles debunking the conspiracy theory. Safari’s autocomplete suggestion for “Hillary Clinton murder” surfaced a shoddy webpage about an alleged FBI cover-up related to the death of former deputy White House counsel Vince Foster. Google’s top result was a fact-check. A search for “the Holocaust didn’t happen” (a well-known and debunked conspiracy theory) returns a link to a Holocaust denier page on the website 666ismoney.com. Back in 2016, Google faced criticism for algorithmically promoting search results for Holocaust denier sites. However, the results have since been fixed. BuzzFeed News found a number of other examples of Siri Suggested Websites surfacing debunked or conspiratorial information on topics including race science and vaccines. Natural News is a well-known anti-science news website. On the same search phrase, Google offers users resources from government sources, like the CDC, in its results. The list goes on. 
Safari’s autocomplete for “whites are smarter th” delivered an answers.com user-generated page arguing that “God made white people, blacks came from monkeys.” Siri Suggested Websites also surfaced a link to Alex Jones’ Infowars site while autocompleting a search about Hillary Clinton. The site claims to offer the “real reason Hillary is attacking the alt-right.” The Siri Suggested problem seems to stem from what researchers call a “data void,” which is what happens when a term doesn’t have “natural informative results” and manipulators seize upon it. Many of the sites surfaced by the Siri Suggested feature came from conspiracy or junk sites hastily assembled to fill that void. This post was updated with additional comment from Apple. Source
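The "data void" dynamic is easy to see in miniature. The sketch below is a toy suggester, not a reflection of Apple's actual system: the tiny index and its URLs are invented for illustration. When no reputable page matches a phrase, the only candidates left to suggest are whatever fringe content was built around it.

```python
# Toy index: for well-covered topics, reputable pages exist; for a
# manipulated phrase, the *only* matching content may be junk that was
# created specifically to occupy that empty space.
index = {
    "tim cook": ["en.wikipedia.org/wiki/Tim_Cook", "apple.com/leadership"],
    "qanon is real": ["youtube.com/fringe-conspiracy-clip"],  # a data void
}

def suggest(query: str) -> str:
    """Return the top 'suggested website' for a query, or a fallback."""
    candidates = index.get(query.lower(), [])
    return candidates[0] if candidates else "no suggestion"

print(suggest("Tim Cook"))       # a reputable page wins where coverage exists
print(suggest("QAnon is real"))  # the void is filled by junk built for it
```

The fix researchers suggest is not smarter ranking alone but recognising queries with no trustworthy coverage and declining to suggest anything at all.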
  10. PepsiCo really doesn’t want anyone talking shit about its corn puffs online. There is a rumor that Kurkure, a corn puff product developed by the company in India, is made of plastic. The conspiracy theory naturally thrived online, where people posted mocking videos and posts questioning whether the snack contained plastic. In response, PepsiCo obtained an interim order from the Delhi High Court to block all references to this conspiracy theory online in the country, MediaNama reports. Hundreds of posts claiming that Kurkure contains plastic have already been blocked across Facebook, Twitter, Instagram, and YouTube, according to LiveMint, and the court order requires social networks to continue to block such posts. According to MediaNama, PepsiCo petitioned for 3,412 Facebook links, 20,244 Facebook posts, 242 YouTube videos, six Instagram links, and 562 tweets to be removed, a request the court has granted. Source
  11. At the heart of the spread of fake news are the algorithms used by search engines, websites and social media which are often accused of pushing false or manipulated information regardless of the consequences. What are algorithms? They are the invisible but essential computer programmes and formulas that increasingly run modern life, designed to repeatedly solve recurrent problems or to make decisions on their own. Their ability to filter and seek out links in gigantic databases means it would be impossible to run global markets without them, but they can also be refined down to produce personalised quotes on everything from mortgages to plane tickets. They also run our Google searches, our Facebook newsfeed, recommend articles or videos to us and sometimes censor questionable content because it may contain violence, pornography or racist language. Other algorithms charged with the most complex and sensitive tasks can be opaque "black boxes" which develop their own artificial intelligence based on our data. A skewed view of the world? "Algorithms can help us find our way through the huge amount of information on the internet," said Margrethe Vestager, the European commissioner for competition. "But the problem is that we only see what these algorithms—and the companies that use them—choose to show us," she added. In organising your online content, algorithms also tend to create "filter bubbles", insulating us from opposing points of view. During the US presidential election in 2016, Facebook was accused of helping Donald Trump by allowing often false information about his rival Hillary Clinton to circulate online, closing people into a news bubble. Algorithms also tend to make extreme opinions "and fringe views more visible than ever", according to Berlin-based Lorena Jaume-Palasi, founder of the Algorithm Watch group. 
However, their effects can be difficult to measure, she warned, saying that algorithms alone are not to blame for the rise in nationalism in Europe. Spreading fake news? Social media algorithms tend to push the most viewed content without checking if it is true or not, which is why they magnify the impact of fake news. On YouTube in particular, conspiracy theory videos get a great deal more traffic than accurate and properly sourced ones, said Guillaume Chaslot, one of the Google-owned platform's former engineers. These videos, which may claim that the moon landings or climate change are lies, get far more views and comments, keeping users on the platform longer and undermining credible, traditional media, Chaslot insisted. More ethical algorithms? Some observers believe that algorithms could be programmed "to serve human freedom", with many non-governmental groups demanding far more transparency. "Coca-Cola doesn't reveal its formula but its products are tested for their effect on our health," Jaume-Palasi argued, insisting on the need for clear regulation. The French privacy protection body, the CNIL, last year recommended state oversight of algorithms and that there should be a real push to educate people "so they understand the cogs of the (information technology) machine". New European data protection rules also allow people to contest the decision of an algorithm and "demand a human intervention" in case of conflict. Some internet giants have themselves begun to act to some degree: Facebook has started an effort to automatically label suspicious posts, while YouTube is reinforcing its "human controls" on videos aimed at children. However, former Silicon Valley insiders who make up the Center for Humane Technology, which was set up to combat tech's excesses, have warned that "we can't expect attention-extraction companies like YouTube, Facebook, Snapchat, or Twitter to change, because it's against their business model." < Here >
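The dynamic Chaslot describes, where ranking by engagement alone surfaces whatever keeps people clicking, can be sketched in a few lines. This is a deliberately minimal toy, not any platform's real ranking formula; the posts and the scoring weights are invented for illustration.

```python
# Toy feed ranking: order posts purely by predicted engagement.
# A sober, sourced report loses to outrage-bait under this objective.
posts = [
    {"title": "Calm, sourced report", "clicks": 120, "shares": 10},
    {"title": "Shocking conspiracy video", "clicks": 900, "shares": 400},
]

def engagement_score(post):
    # Shares weighted more heavily than clicks: they spread the content further.
    return post["clicks"] + 5 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])
# prints: ['Shocking conspiracy video', 'Calm, sourced report']
```

Nothing in the objective checks truthfulness, which is exactly the gap the transparency advocates quoted above want regulators to examine.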
  12. During the 2016 U.S. presidential election, the internet was abuzz with discussion when reports surfaced that Floyd Mayweather wore a hijab to a Donald Trump rally, daring people to fight him. The concocted story started on a sports comedy website, but it quickly spread on social media—and people took it seriously. From Russian “bots” to charges of fake news, headlines are awash in stories about dubious information going viral. You might think that bots—automated systems that can share information online—are to blame. But a new study shows that people are the prime culprits when it comes to the propagation of misinformation through social networks. And they’re good at it, too: Tweets containing falsehoods reach 1,500 people on Twitter six times faster than truthful tweets, the research reveals. Bots are so new that we don’t have a clear sense of what they’re doing and how big an impact they’re making, says Shawn Dorius, a social scientist at Iowa State University in Ames who wasn’t involved in the research. We generally think that bots distort the types of information that reaches the public, but—in this study at least—they don’t seem to be skewing the headlines toward false news, he notes. They propagated true and false news roughly equally. The main impetus for the new research was the 2013 Boston Marathon bombing. The lead author—Soroush Vosoughi, a data scientist at the Massachusetts Institute of Technology in Cambridge—says after the attack a lot of the stuff he was reading on social media was false. There were rumors that a student from Brown University, who had gone missing, was suspected by the police. But later, people found out that he had nothing to do with the attack and had committed suicide (for reasons unrelated to the bombing). That’s when Vosoughi realized that “these rumors aren’t just fun things on Twitter, they really can have effects on people’s lives and hurt them really badly.” A Ph.D. 
student at the time, he switched his research to focus on the problem of detecting and characterizing the spread of misinformation on social media. He and his colleagues collected 12 years of data from Twitter, starting from the social media platform’s inception in 2006. Then they pulled out tweets related to news that had been investigated by six independent fact-checking organizations—websites like PolitiFact, Snopes, and FactCheck.org. They ended up with a data set of 126,000 news items that were shared 4.5 million times by 3 million people, which they then used to compare the spread of news that had been verified as true with the spread of stories shown to be false. They found that whereas the truth rarely reached more than 1,000 Twitter users, the most pernicious false news stories—like the Mayweather tale—routinely reached well over 10,000 people. False news propagated faster and wider for all forms of news—but the problem was particularly evident for political news, the team reports today in Science. At first the researchers thought that bots might be responsible, so they used sophisticated bot-detection technology to remove social media shares generated by bots. But the results didn’t change: False news still spread at roughly the same rate and to the same number of people. By process of elimination, that meant that human beings were responsible for the virality of false news. That got the scientists thinking about the people involved. It occurred to them that Twitter users who spread false news might have more followers. But that turned out to be a dead end: Those people had fewer followers, not more. Finally the team decided to look more closely at the tweets themselves. As it turned out, tweets containing false information were more novel—they contained new information that a Twitter user hadn’t seen before—than those containing true information. And they elicited different emotional reactions, with people expressing greater surprise and disgust. 
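The core comparison the team ran can be sketched in a few lines. This is a simplified illustration, not the study's actual code: given each cascade's sorted timestamps of when unique users saw a story, measure how long it takes to reach a fixed audience size, then compare medians across true and false items. The sample cascades below are invented.

```python
from statistics import median

# Invented sample cascades: sorted times (minutes) at which unique users saw each story.
cascades = {
    "false": [[2, 5, 9, 14], [1, 4, 8, 11]],
    "true":  [[10, 30, 55, 80], [15, 40, 70, 95]],
}

def time_to_reach(times, n):
    """Minutes until a cascade has reached n unique users (None if it never does)."""
    return times[n - 1] if len(times) >= n else None

medians = {label: median(time_to_reach(c, 4) for c in group)
           for label, group in cascades.items()}
speedup = medians["true"] / medians["false"]
print(medians, f"-> false news reached the audience {speedup:.1f}x faster")
```

The real analysis did this over millions of cascades and controlled for bots, but the unit of measurement, time to reach a given number of unique users, is the same idea.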
That novelty and emotional charge seem to be what’s generating more retweets. “If something sounds crazy stupid you wouldn’t think it would get that much traction,” says Alex Kasprak, a fact-checking journalist at Snopes in Pasadena, California. “But those are the ones that go massively viral.” The research gives you a sense of how much of a problem fake news is, both because of its scale and because of our own tendencies to share misinformation, says David Lazer, a computational social scientist at Northeastern University in Boston who co-wrote a policy perspective on the science of fake news that was also published today in Science. He thinks that, in the short term, the “Facebooks, Googles, and Twitters of the world” need to do more to implement safeguards to reduce the magnitude of the problem. But in the long term we also need more science, he says—because if we don’t understand where fake news comes from and how it spreads, then how can we possibly combat it? Source
  13. Facebook fights fake news, again Now that the French election is a thing of the past, Facebook is taking it upon itself to continue the fight against fake news in another European country - the United Kingdom. The social network is taking some precautionary steps ahead of June's general election in the UK by buying ad space in British newspapers and printing an anti-fake news leaflet. Facebook took out ads in several major newspapers, including The Times, the Daily Telegraph and The Guardian. The ads list ten things that users should look out for when deciding whether the information they read is genuine. Among the tips the social network is sharing with users is checking the headlines, URLs, photos and even the dates of the articles, since old articles are often reshared months or years later and taken as brand new by readers. They also advise people to investigate the source, to watch for unusual formatting, to check the evidence, look at other reports and, quite importantly, to figure out if the story is a joke in the first place. So many times over the years, articles from satire sites have been taken seriously by people without a second thought. A growing trouble The fake news issue has been growing over the past few years, although it certainly reached its peak during the US presidential election last year. Since then, Facebook has taken numerous steps to fight this problem on its network, including removing fake accounts responsible for spreading such articles. Ahead of the French elections, the company announced it had shut down 30,000 such accounts. "People want to see accurate information on Facebook and so do we. That is why we are doing everything we can to tackle the problem of false news," said Simon Milner, Facebook director of policy for the UK. 
The company admitted that it wasn't going to solve the problem on its own, which is why it started working with third-party fact-checkers during the elections, so they could independently assess facts and stories. Source
  14. WikiTribune is here to fight fake news Jimmy Wales, Wikipedia co-founder, is stepping up the fight against fake news by announcing a news service combining the work of professional journalists and volunteers. Wikitribune, as the service will be called, will offer factual and neutral articles that help combat the problem that has been plaguing the world for the past few years, escalating during the American election season. Much like Wikipedia, the service will be free to read and also free of ads, which means it will rely on donations from supporters. Wikitribune has the potential to become a valuable tool the world can rely on, but it may not be so easy to get people to trust it, especially those who are prone to believing fake news. Writers will have to detail the source of each fact, and there will be a reliance on the public to edit articles in order to keep them accurate, much as happens on Wikipedia. However, the changes will only go live if they are accepted by a staff member or a trusted community volunteer, so those who don't mean well will be kept from altering articles. "In most news sites, the community tends to hang at the bottom of articles in comments that serve little purpose. We believe the community can play a more important role in news. Wikitribune puts community at the top, literally. Articles are authored, fact-checked, and verified by professional journalists and community members working side by side as equals, and supported not primarily by advertisers, but by readers who care about good journalism enough to become monthly supporters," the site reads. Crowdfunded journalism The money the site will raise will go into paying the salaries of the ten journalists they are looking to hire. If the goal sum isn't reached, the funds will be refunded, they say. “Wikitribune is news by the people and for the people. 
This will be the first time that professional journalists and citizen journalists will work side-by-side as equals writing stories as they happen, editing them live as they develop and at all times backed by a community checking and re-checking all facts," said Wales. In the past few months, many companies have taken steps to fight against fake news, including Facebook and Google. Source
  15. Facebook fights against fake news Facebook is taking its fight against fake news seriously, it seems. After releasing a number of tools to help fight against this plague, the company took a more proactive approach and took down some 30,000 fake accounts linked to France ahead of the presidential elections. According to a statement released by the company, Facebook is trying to "reduce the spread of material generated through inauthentic activity, including spam, misinformation, or other deceptive content that is often shared by creators of fake accounts." The company explained in a blog post that there have been many changes brought to the platform lately, including some that run "under the hood." Some additional improvements that were recently made help detect fake accounts on Facebook more effectively, including those that are hard to spot. "We've made improvements to recognize these inauthentic accounts more easily by identifying patterns of activity - without assessing the content itself. For example, our systems may detect repeated posting of the same content, or an increase in messages sent," the company explains. Facebook admits that these changes will not result in the removal of every fake account, but as time goes by, the effectiveness will grow. Their priority, for now, is to remove the accounts with the largest footprint, with a high amount of activity and a broad reach. The game is on For its part, Facebook seems to be putting in a lot of work to stop the distribution of misinformation, as the company puts it, as well as spam and false news. "We've found that a lot of false news is financially motivated, and as part of our work to promote an informed society, we have focused on making it very difficult for dishonest people to exploit our platform or profit financially from false news sites using Facebook." In the past few weeks, Facebook has released a list of tips to help users spot fake news, and has signed up for the News Integrity Initiative. 
Recently, World Wide Web creator Sir Tim Berners-Lee said that Facebook and Google have a lot to do to tackle this problem and, unfortunately for them, they're the ones that should do it because so many people use them. Both companies have done a lot to this end and will continue to work in this direction. Source
  16. Google takes war on fake news to the next level Google's war on fake news is taking a new direction as the company begins marking search results with fact-check verdicts such as "true" or "false." Google is currently rolling out an update to its platform that adds a "fact check" label in search results next to articles containing claims that have been vetted. This fact-check tagging system is rolling out globally on Google Search and News and is an expansion of the program introduced in October in the US and the UK via Google's Jigsaw group. "We’re making the Fact Check label in Google News available everywhere, and expanding it into Search globally in all languages. For the first time, when you conduct a search on Google that returns an authoritative result containing fact checks for one or more public claims, you will see that information clearly on the search results page. The snippet will display information on the claim, who made the claim, and the fact check of that particular claim," the company wrote in an announcement. Multiple opinions, multiple answers Unfortunately, this information won't be available for every search result, and there may even be search result pages where different publishers checked the same claim and reached different conclusions. It's important to know, however, that these fact checks are not Google's own; they're presented so people can make more informed judgments. "Even though different conclusions may be presented, we think it's still helpful for people to understand the degree of consensus around a particular claim and have clear information on which sources agree," Google adds. Google's fact-check community has now grown to 115 organizations, without which this effort would not be possible. It's wise for Google not to get involved in "personally" fact-checking all these stories, even though it could easily hire the necessary people, because it could just as easily be accused of bias. Source
  17. Facebook has a new plan to make you trust real news First, Facebook started off by saying it wants to fight against fake news. Now, the social network wants to help people trust real news again. How does it plan to do that? Well, it starts by launching a $14 million program called the "News Integrity Initiative." In collaboration with the likes of Mozilla, Craig Newmark (Craigslist founder), the Knight Foundation, the Tow Foundation, City University of New York and others, Facebook has begun working on this new project that deepens its involvement in the news. The company has spent many years denying that it has anything to do with the news business and insisting it cannot be categorized as a media company. On the other hand, it seems to be coming to terms with the fact that people will share news over Facebook, so much, in fact, that it has become the most dominant platform for news. Tim Berners-Lee, the inventor of the World Wide Web, even went on to say that Facebook and Google should be the ones leading the fight against fake news due to the size of the masses they reach. Lots of pressure on Facebook Nonetheless, Facebook has been heavily criticized for what has happened over its platform in recent years, particularly as false information runs riot among people who don't exactly know how to pick their sources. "We’re excited to announce we are helping to found and fund the News Integrity Initiative, a diverse new network of partners who will work together to focus on news literacy. The initiative will address the problems of misinformation, disinformation and the opportunities the internet provides to inform the public conversation in new ways," reads the announcement signed by Campbell Brown, the head of News Partnerships at Facebook. Jeff Jarvis, professor of journalism at CUNY and one of the leaders of the initiative, says that this isn't a problem that's exclusive to Facebook. 
"My greatest hope is that this Initiative will provide the opportunity to work with Facebook and other platforms on reimagining news, on supporting innovation, on sharing data to study the public conversation, and on supporting news literacy broadly defined," Jarvis wrote in a blog post. Source
  18. Fake news is the plague we need to fight against Tim Berners-Lee, the man we have to thank for inventing the World Wide Web, believes there are several things that need to be done to ensure the future of the web and make it a platform that benefits humanity: fighting fake news, making political advertising transparent and giving people back control of their personal data. As the World Wide Web turns 28, Berners-Lee celebrates the occasion. He writes that over the past year he has become increasingly worried about three trends that he believes harm the web. The first thing the world needs to fight against is fake news. "Today, most people find news and information on the web through just a handful of social media sites and search engines. These sites make more money when we click on the links they show us. And, they choose what to show us based on algorithms which learn from our personal data that they are constantly harvesting," Berners-Lee writes. "The net result is that these sites show us content they think we’ll click on – meaning that misinformation, or ‘fake news’, which is surprising, shocking, or designed to appeal to our biases can spread like wildfire. And through the use of data science and armies of bots, those with bad intentions can game the system to spread misinformation for financial or political gain." He takes things a step further and names names. He believes that we must push back against misinformation by encouraging gatekeepers such as Google and Facebook to continue their efforts to combat the problem, while also avoiding the creation of any central bodies that decide what's true or not, because that's another problem altogether. It's not just fake news to fight against Another thing we need to fight against is government overreach in surveillance laws, including through the courts, if need be. "We need more algorithmic transparency to understand how important decisions that affect our lives are being made, and perhaps a set of common principles to be followed," he adds. 
Political advertising online needs to be more transparent, he believes, especially considering that during the 2016 US elections as many as 50,000 variations of adverts were being served every single day on Facebook.

There are many problems plaguing the world wide web, but some are more pressing than others, it seems. The Web Foundation, which Berners-Lee leads, will be working on many of these issues as part of its new five-year strategy.

"I may have invented the web, but all of you have helped to create what it is today. All the blogs, posts, tweets, photos, videos, applications, web pages and more represent the contributions of millions of you around the world building our online community. [...] It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone," Tim Berners-Lee concludes. Source
  19. Microsoft Edge Browser Accused of Displaying Fake News in New Tabs

News outlet partnerships go wrong for Edge users

All the news is delivered by MSN with help from news outlets across the world, and while at first glance everything should be pretty helpful for users, it turns out that the browser is suffering from an issue the entire Internet is trying to deal with as we speak: fake news.

A number of users have turned to the built-in Windows 10 Feedback Hub app to complain about what they claim to be fake news displayed in Microsoft Edge, explaining that the balanced news they expect to find in the browser does not exist and that most sources try to give articles a spin that shouldn’t be there.

“I have been disgusted to read such clearly slanted stories. I would prefer to read news reports that allowed me to draw my own conclusions that did not seem intent on spinning the news in one direction or another. It is time that you offered BALANCED news instead of relying on your partnerships with news outlets that clearly have an agenda in their news reporting,” one such comment reads.

Microsoft still tight-lipped

Microsoft Edge does not allow users to edit news sources, only to choose the categories they want to receive articles for, so there’s no way to deal with the alleged fake news without the company’s own tweaks. Of course, Microsoft Edge does not deliberately spread fake news; if this is indeed happening, it’s only the fault of the sources the browser is configured to use for articles on the start page and in new tabs.

Microsoft, however, has not commented so far and is yet to respond to the suggestion posted in the Feedback Hub, so it remains to be seen whether the company gives users more power to configure news sources or itself removes sources involved in spreading fake news. Source
  20. Apple chief calls on governments and technology companies to crack down on misinformation in public discourse

Apple CEO Tim Cook is urging governments and technology firms like his own to help stem the spread of falsehoods

Fake news is “killing people’s minds”, Tim Cook, the head of Apple, has said. The technology boss said firms such as his own needed to create tools that would help stem the spread of falsehoods, without impinging on freedom of speech. Cook also called for governments to lead information campaigns to crack down on fake news in an interview with a British national newspaper.

The scourge of falsehoods in mainstream political discourse came to the fore during recent campaigns, during which supporters of each side were accused of promoting misinformation for political gain.

“We are going through this period of time right here where unfortunately some of the people that are winning are the people that spend their time trying to get the most clicks, not tell the most truth,” Cook told the Daily Telegraph. “It’s killing people’s minds, in a way.”

He said: “All of us technology companies need to create some tools that help diminish the volume of fake news. We must try to squeeze this without stepping on freedom of speech and of the press, but we must also help the reader. Too many of us are just in the ‘complain’ category right now and haven’t figured out what to do.”

He said that a crackdown would mean that “truthful, reliable, non-sensational, deep news outlets will win”, adding: “The [rise of fake news] is a short-term thing. I don’t believe that people want that.”

While instances were seen among supporters of both sides of the recent US election battle, Donald Trump’s campaign was seen by many as a particular beneficiary of fake news reports. And the US president’s team was caught sending aides out to insist that a huge crowd had attended his inauguration, when the evidence showed a relatively modest audience was there.
Trump’s spokesman, Sean Spicer, insisted that the event had attracted “the largest audience ever to witness an inauguration” and Trump said he believed the crowd went “all the way back to the Washington Monument”. Images from the moment Trump was taking the oath showed the crowd was relatively small and went nowhere near as far back down Washington’s National Mall as the monument. Other evidence suggested a relatively small crowd in attendance. Senior aide Kellyanne Conway later characterised the Trump administration’s falsehoods as “alternative facts”.

Fake anti-Trump stories during the election included one in which it was falsely claimed that he had groped the drag queen and television presenter RuPaul. Hillary Clinton was scrutinised over her claim that there was “no evidence” her emails had been hacked, because the FBI director, James Comey, had concluded it was likely they had been.

A study by economists at Stanford University and New York University published two months after November’s US presidential election found that in the run-up to the vote, fake anti-Clinton stories had been shared 30 million times on Facebook, while those favouring her were shared eight million times. It said: “The average American saw and remembered 0.92 pro-Trump fake news stories and 0.23 pro-Clinton fake news stories, with just over half of those who recalled seeing fake news stories believing them.”

But it called into question the power of fake news reports spread on social media to alter the outcome of the election, saying that, “for fake news to have changed the outcome of the election, a single fake article would need to have had the same persuasive effect as 36 television campaign ads”.

Nevertheless, Cook demanded action to decrease the reach of fake news. “We need the modern version of a public service announcement campaign. It can be done quickly, if there is a will.” He added: “It has to be ingrained in the schools, it has to be ingrained in the public.
There has to be a massive campaign. We have to think through every demographic... It’s almost as if a new course is required for the modern kid, for the digital kid.

“In some ways kids will be the easiest to educate. At least before a certain age, they are very much in listen and understand [mode], and they then push their parents to act. We saw this with environmental issues: kids learning at school and coming home and saying why do you have this plastic bottle? Why are you throwing it away?”

By Kevin Rawlinson
https://www.theguardian.com/technology/2017/feb/11/fake-news-is-killing-peoples-minds-says-apple-boss-tim-cook
  21. The 2016 U.S. Presidential election cycle won’t be soon forgotten. It shattered old conventions and introduced a completely new way of running a campaign, including fake news. No doubt some of that content was generated for political purposes. But, for better or worse, some fake news was created simply for profit.

For social media giants Facebook (NASDAQ:FB) and Google (NASDAQ:GOOGL), this new trend represents a challenge that can greatly affect the monetization of their platforms. If the billions of consumers and businesses that use these two brands can’t rely on the information they are accessing, advertisers may drop support for these channels. On the other hand, could small content creators face backlash whether their content is truly fake news or simply viewed that way by these digital behemoths?

Facebook and Google Will Crack Down on Fake News

Facebook has just announced a new initiative to identify authentic content because, as the company puts it, stories that are authentic resonate more with its community. During the election, the social media giant was criticized for doing very little to combat fake news. Instead, Facebook tried to outsource the task of identifying this content to third parties, including five fact-checking organizations: the Associated Press, ABC News, Factcheck.org, Snopes and PolitiFact.

However, the new update ranks authentic content by incorporating new signals to better identify what is true or false. These signals are delivered in real time when a post is relevant to a particular user. The signals are determined by analyzing overall engagement on pages to identify spam, as well as posts that specifically ask for likes, comments or shares — since these might indicate an effort to spread questionable content.

As for Google, the tech company released its 2017 Bad Ads report. Google says the report plays an important role in making sure users have access to accurate and quality information online.
Still, the report addresses only ads thus far. Google warns more broadly that the sustainability of the web could be threatened if users cannot rely on the information they find there. https://smallbiztrends.com/2017/02/facebook-and-google-will-crack-down-on-fake-news.html