Showing results for tags 'social media'.



Found 28 results

  1. LONDON (Reuters) - Sixteen of the world’s biggest advertisers have joined together to push platforms such as Facebook, Twitter and Google’s YouTube to do more to tackle dangerous and fake content online. The Global Alliance for Responsible Media will also include media buying agencies from the major ad groups - WPP, IPG, Publicis, Omnicom and Dentsu - as well as the platform owners, the group said on Tuesday at the ad industry’s annual gathering in Cannes, France. Luis Di Como, executive vice president of global media at Unilever, said it was the first time that all sides of the industry had come together to tackle a problem that had far reaching consequences for society. “When industry challenges spill into society, creating division and putting our children at risk, it’s on all of us to act,” he said. “Founding this alliance is a great step toward rebuilding trust in our industry and society.” He said the group would initially focus on content that was a danger to society, such as terrorism. Platform owners had taken steps to address the problems, he said, but their focus had been more reactive - tackling content after it appeared - than proactive. The alliance will work together to develop processes and protocols to protect people and brands, he said. Other brand owners in the alliance include Adidas, Danone, Diageo, Mondelez International, Nestle and Procter & Gamble. Source
  2. More than 4.7 million counterfeit products seized, over 16 400 social media accounts suspended and 3 300 websites closed in the EU-wide operation Aphrodite II against trafficking of counterfeit goods. A joint investigation carried out by law enforcement authorities from 18 countries and supported by Europol resulted in the seizure of 4.7 million counterfeit products. During the operation, 16 470 social media accounts and 3 400 websites selling counterfeit products were closed. The online fake goods marketers were selling a large variety of counterfeit items including clothes and accessories, sports equipment, illegal IPTV set-top boxes, medicines, spare car parts, mobile phones, miscellaneous electronic devices and components, perfumes and cosmetics. The operation led to the arrest of more than 30 suspects and the reporting of 110 others to the respective judicial authorities. A select number of suspects are part of two distinct criminal networks responsible for producing and trafficking counterfeit products online. Several investigations are still ongoing. Europol's Intellectual Property Crime Coordinated Coalition (IPC3) and the Italian Finance Corps (Guardia di Finanza) coordinated the joint investigation, with cooperation from the private sector. The European Union Intellectual Property Office (EUIPO) supported the activities of IPC3 with a grant. Law enforcement agencies from Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Greece, Hungary, Ireland, Italy, Malta, Netherlands, Portugal, Romania, Slovakia, Spain, Ukraine and the United Kingdom are all involved in the operation. Digital platforms - no safe haven for counterfeits Criminal groups continuously abuse the communication opportunities of digital platforms such as websites, social media and instant messaging to traffic and sell counterfeit products. The exponential growth of internet platforms has also affected the development of online marketplaces (known as e-stores) that are considered alternative retail channels. These new markets also take advantage of social channels to perpetrate illicit activities. Law enforcement, supported by the private sector, is therefore extending its response to online trafficking of counterfeit products. To counter the threat, Europol is examining the scale of the problem, gathering evidence and monitoring social media and sales platforms. Selling fakes on social media Sellers can advertise counterfeit goods through overt social media posts - with photos of the product and price - or through hidden links to other marketplaces located outside the EU. In the latter case, details of the transaction are arranged through other communication channels such as instant messaging applications or even by telephone under different names. Couriers deliver the packages while the payment is made with prepaid cards, money transfer companies or other forms of electronic payment and web-based services. Fake products sold on social media can be extremely dangerous. Lacking any quality control and not complying with legal norms, fake toys, medicines, body care products, fake spare car parts, inks and material used to produce imitation luxury products and clothes can be harmful to consumer health. IPC3 intends to promote its recurrent operation, Operation Aphrodite, to encourage more countries and private partners to get on board, contribute their expertise and explore new operational methodologies. Source
  3. Multiple vaping companies were sent letters by federal regulators this week over posts by social media influencers that did not include necessary warnings about the vape products. The warning letters—which were sent to Artist Liquids Laboratories, Humble Juice Co., Hype City Vapors, and Solace Technologies—stated that the posts in question were reviewed by the Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) and found to lack the required warning statement that the product contains nicotine and that nicotine is an addictive chemical. According to the letters, the posts by influencers in partnership with the respective companies were shared to Facebook, Instagram, and Twitter, platforms on which some of the influencers had tens of thousands or more followers. In some cases, the letters said, posts by the companies themselves on social media or their websites failed to communicate the required warning language. “Given the significant risk of addiction, the failure to disclose the presence of and risks associated with nicotine raises concerns that the social media postings could be unfair or likely to mislead consumers,” the letters read. “The FTC urges you to review your marketing, including endorsements by your social media influencers, and ensure that necessary and appropriate disclosures are made about the health risks of nicotine.” Lorenzo De Plano, a co-founder of Solace, told Gizmodo in a statement by email that the letter his company was sent was related to a post by a single influencer who did not include necessary warnings in their post, adding that the company is no longer working with that individual. “All of Solace Vapor’s internal packaging, marketing and nicotine warnings are compliant with FDA standards,” De Plano said. “Solace Vapor does not condone the use of our products by anyone who previously was not a tobacco and or cigarette user. We will be reviewing and terminating any and all 3rd party influencers who may not be compliant with our marketing policies. We hope that all other companies in our industry do the same.” The letters stated that the companies would be required to submit a written response within 15 working days of receipt that outlined their timeline for corrective actions. Spokespeople for Artist Liquids, Humble Juice, and Hype City Vapors did not immediately return requests for comment. The FDA and FTC said that the warning letters come as part of the FDA’s Youth Tobacco Prevention Plan, which among other initiatives is aimed at cutting off access by kids to tobacco products but also includes policing ads and marketing that may target youth. Vape giant Juul previously came under fire for its own marketing, which has been accused of attempting to lure teens to its products and contributing in large part to the widespread use of vape products among kids. The company has since folded many of its social media accounts, including Instagram. “Years of progress to combat youth use of tobacco is now threatened by an epidemic of e-cigarette use by kids, and unfortunately research shows many youth are mistaken or unaware of the risks and the presence of nicotine in e-cigarettes,” Acting FDA Commissioner Ned Sharpless said in a statement this week. “That’s why it’s critical we ensure manufacturers, retailers and others are including the required health warning about nicotine’s addictive properties on packages and advertisements—especially on social media platforms popular with kids.” Source
  4. A lack of security training for interns, and their obsession with sharing content on social media, could lead to a perfect storm for hackers looking to collect social engineering data. Researchers are warning of a new security Achilles’ heel for enterprises, and it may not be what they expect. That threat is interns. According to researchers, interns are unwittingly posting confidential and valuable company insights via social media that pose a security risk to the companies that hire them. While insider threats are nothing new and have often been linked to disgruntled employees, or hires who unintentionally click on malicious phishing emails, interns bring an entirely new threat to companies. Lax security training for company interns – coupled with the attachment of Generation Z to social media – is providing a lucrative opportunity for hackers to collect social engineering information, researchers said. More disturbingly, the level of information posted online – including details about office layout, company data, and even badge information – was enough to allow researchers with IBM X-Force Red to actually create their own spoofed badge and physically breach an office while purporting to be an employee. “From posting photos of their security badges to video blogging a ‘day in the life’ at the office, the social media habits of interns and eager young employees make them a rich source of information for hackers,” said Stephanie Carruthers, global social engineering expert with X-Force Red, in a recent post. A New Threat When it comes to collecting data for social engineering, “social media is a goldmine,” said Carruthers – and between Snapchat, Instagram, YouTube and Facebook, members of Generation Z are the most avid users of social media to date, according to a Pew Research survey. “About 75 percent of the time, a social media search turns up the information I’m seeking within just a few hours,” she said. “This is especially true for large companies, where these posts are most often from interns or new employees.” For instance, interns may post pictures of their office to Snapchat, Instagram or Facebook, as well as videos to YouTube, revealing internal office layouts, badge pictures, Outlook calendars and more in the background – an easy way for hackers to collect social engineering tidbits or even breach company premises physically. In fact, that’s exactly what Carruthers did – after discovering an Instagram photo of an intern revealing a new corporate badge, IBM researchers were able to produce a fake badge using photo editing: “The fake badge may not work on doors, but it could work for piggybacking when other employees enter a secure location,” she said. Other platforms, like Glassdoor, offer troves of valuable information for phishing emails – including company organizational charts, salary ranges or typical interview information. “Using this information, an attacker could develop phishing emails, preparing the subject and content according to what’s trending among employees of a given company,” Carruthers said. “Unfortunately, employees could easily fall for a well-crafted email, and they may forget to check the sender’s legitimacy.” Added to that equation is a lack of proper security awareness training for onboarding interns and new hires at many firms, she said.
“For companies that don’t include security awareness training as part of onboarding, new employees may not be trained until the next round of companywide instruction, which could be up to a year away,” she said. “Excited new employees often post their #NewJob #FirstDay #CompanyName via a hash-tagged selfie, showing off their new workspace and neglecting to realize that sensitive company information may be in the background.” Protection Insider threats continue to be a top concern across the industry. In fact, according to the Verizon Data Breach Investigations Report from this year, “privilege misuse and error by insiders” account for 30 percent of breaches. How can organizations protect against this insider threat? Companies should rethink their social media security policies, as well as train managers and social teams to spot any risky data posted online, Carruthers said. And because photos from the office may inevitably end up online, she recommended that companies also establish a safe photo space – an area of the office where any sensitive information is banned. The top method of protection, however, is implementing security training, Carruthers stressed. “Make sure your interns and new hires are getting this as part of their onboarding process,” she said. “You can make this fun and effective by helping them to understand the ways a hacker could use the seemingly harmless info they might consider posting.” Source
  5. "SOCIAL MEDIA PLATFORMS should advance FREEDOM OF SPEECH," White House says. Share on Facebook Share on Twitter Donald Trump has long accused social media platforms like Facebook, Twitter, and YouTube of political bias. On Wednesday, his White House launched a new online form that allows members of the public to report political bias in their content moderation decisions. "SOCIAL MEDIA PLATFORMS should advance FREEDOM OF SPEECH," the form says (capitalization in the original, of course). "Yet too many Americans have seen their accounts suspended, banned, or fraudulently reported for unclear 'violations' of user policies. No matter your views, if you suspect political bias caused such an action to be taken against you, share your story with President Trump." The form asks users to provide their name and basic demographic and contact information. Users then provide details about the content that was censored and can provide screenshots of messages from social media companies about moderation decisions. The form also collects respondents' email addresses and asks for permission to add users to White House newsletters. Respondents are also asked to accept a user agreement that gives the Trump Administration a broad license to use the information, including publishing it. The form singles out four social media platforms by name: Facebook, Instagram, Twitter, and YouTube. Users can also choose "other" and type in another platform. As these platforms have become more prominent, they have faced harsh criticism from both sides of the political spectrum. Liberals have attacked them for being too slow to block online harassment and hate speech. Social media companies have responded by beefing up their moderation efforts—but that has caused conservatives to worry about mainstream conservative content getting swept up in the dragnet. Last month Vice reported on a recent internal discussion at Twitter addressing this very issue. During an all-hands meeting, someone asked why the platform doesn't use automated tools to remove white supremacist content the way it has for ISIS propaganda. A Twitter employee who works on the issue reportedly said that one reason was that filters designed to identify white supremacist accounts could also catch the accounts of some Republican politicians. The latest White House initiative ratchets up the pressure on social media companies from the right, encouraging them to tread lightly as they consider more aggressive moderation of far-right content. Source: White House unveils new tool to report censorship by social media giants (Ars Technica)
  6. Social media companies like Facebook and Google have been slammed in the wake of the Christchurch massacre for failing to stop the spread of violent footage posted by the shooter. Pressure is mounting on them to do more after the terrorist’s video quickly spread across the internet on Friday but former tech employees say it’s not going to get any better. Yesterday, Facebook said it removed 1.5 million videos of the New Zealand shootings, including 1.2 million that were blocked from being posted. That implies 300,000 versions of the video were available to watch for at least short periods of time before Facebook managed to pull them down. For hours after the attack, the video circulated on other popular content sharing sites YouTube and Twitter as well as lesser known video streaming sites. Prime Minister Scott Morrison has taken aim at social media companies for not doing enough to prevent the spread of Friday’s live-streamed attack. He demanded that tech giants provide assurances that they would prevent attacks from being shown online, suggesting live streaming services could be suspended. Opposition leader Bill Shorten also took aim at social media sites for hosting hate speech and not being accountable for the spread of anti-social content. Criticism has come from all corners, but serious questions remain about whether these sites can reliably be tasked with preventing another horrific live-streamed video from being so widely circulated again. ‘IT ISN’T GOING TO GET A LOT BETTER’ These companies use a combination of algorithms, human workers and user reporting to police content. But given the huge volume of postings during an event like Christchurch it is currently an impossible task to block everything in real time. Alex Stamos is a computer scientist and the former chief security officer at Facebook. The day after the massacre he took to Twitter to lament the immense difficulty faced by a company like Facebook when so many users willingly post the violating footage. “Millions of people are being told online and on TV that there is a video and a document that are too dangerous for them to see, so they are looking for it in all the normal places,” he said, sharing a picture which showed a spike in Google searches for “New Zealand shooting” on Friday. “So now we have tens of millions of consumers wanting something and tens of thousands of people willing to supply it, with the tech companies in between.” Even if the company’s filtering systems were bulletproof, questions still remain about what should be allowed for legitimate reporting purposes and how to differentiate, he wrote. In short, “It isn’t going to get a lot better than this.” In fact, it will likely get worse. When it comes to Facebook, others were quick to point out that recent changes announced by CEO Mark Zuckerberg to introduce encrypted messaging and ostensibly boost privacy on the platform will limit the company’s ability to pull down infringing content. “End-to-end encryption prevents anyone — including us — from seeing what people share on our services,” Zuckerberg said earlier this month. According to former Facebook exec Antonio Garcia Martinez, a cynic might see this as a way for Facebook to protect itself against this kind of criticism. “Zuck’s recent statements about encryption, interpreted uncharitably, are a way to get out from under this content moderation curse forever, with an idealistic sheen,” he wrote on Twitter this morning.
“By the way, I’m told the video is still circulating on WhatsApp, and there’s nothing FB can do about it due to e2e (end-to-end encryption),” he added. Tech firms have long struggled to balance their ethos of supporting free speech with the need to remove and prevent the spread of terrorist content. In 2016, Google, Facebook, Twitter and Microsoft announced they had teamed up to create a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism. Known as perceptual hashing, the approach means that when one company takes down a piece of violating content, other companies can use the shared hash to identify and remove the same content (a short illustrative sketch of the idea follows after this item). But like other systems designed to improve content moderation, it is imperfect and is beholden to the never-ending game of cat and mouse when users are intent on sharing content. And it’s a problem that doesn’t look like it will go away any time soon. Source
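The hash-sharing approach above can be made concrete with a small sketch. To be clear, this is not the companies' actual system (the shared industry database relies on its own, non-public perceptual-hashing algorithms); it is a minimal difference-hash ("dHash") example in Python, assuming the Pillow imaging library is installed and using hypothetical file names, to show how visually similar images produce near-identical fingerprints that can be matched against a blocklist without ever exchanging the images themselves.

```python
# Minimal perceptual-hash sketch (dHash); illustrative only, not the
# industry's shared database, which uses its own non-public algorithms.
from PIL import Image  # assumes the Pillow package is installed


def dhash(path: str, hash_size: int = 8) -> int:
    """Hash an image by comparing adjacent pixels of a small grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())  # row-major, width = hash_size + 1
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# A platform could keep only the hashes of known violating images and flag any
# upload whose hash lands within a small Hamming distance of one of them.
blocklist = {dhash("known_violating.jpg")}   # hypothetical file name
upload = dhash("new_upload.jpg")             # hypothetical file name
if any(hamming(upload, h) <= 5 for h in blocklist):
    print("Upload matches a shared fingerprint; route it for review/removal.")
```

The design point is that re-encoded or lightly edited copies usually land within a few bits of the original's hash, so matching can tolerate a small Hamming distance, and only compact fingerprints, never the files themselves, need to pass between companies.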
  7. Peach isn’t dead yet: it’s a 1.0 app in a web 2.0 world. Social media is increasingly the internet: Facebook was founded in 2004, and it ate the web as we knew it then — a collection of microsites and curiosities run by so many individual proprietors. It used to be that personalization was what you did to your site; now it’s found in the ads you’re served. Peach — the microblogging platform — was seemingly designed against those circumscribed possibilities, as an antidote to the weird world-eating dominion of the Twitters and Facebooks and Instagrams of the universe. Its whole purpose was to bring people back to the early days of online, when the only limits were in what you could code. To describe it in a line: Peach is an online diary that you can share with your friends, like LiveJournal and Tumblr before it. Peach went down last week. It took a few days before the developers addressed the situation online, and in that time its users were distraught because it wasn’t clear if the app was ever coming back. (As of this writing, it’s still not back up.) It is a special place: warm, inviting, and private, a port hidden from the chaotic storm of posts that make up the contemporary internet. I got in touch with some of those users to see what Peach meant to them, and what it felt like to face down the possibility that this safe, beautiful place might disappear. “To say I’m bereft would be an understatement,” wrote my friend Alison, who owns an aerial gym and who was a prolific, early blogger. Many people felt the same way. “I’m holding out hope that I’ll wake up tomorrow and check Peach and everything will load like nothing ever happened, but the app has been on borrowed time for so long that anything besides accepting its death feels foolish,” said Peter McCracken. “When I first joined, Peach felt like a refreshing breeze after being jammed into a hot and crowded subway car for two hours. It was antithetical to the numbers game of accumulating followers and posting nonstop that plagued my Tumblr and Twitter circles at the time. Here was a charmingly simple app that let you curate your own space and peek in on friends.” McCracken went on, saying Peach is more of a web of interconnected diaries than it is a social network. There, he posted about the major shifts that occurred in his life — career shifts, relationships, and the like. It was, he wrote, decidedly uncool. “I probably won’t be able to unlearn the muscle memory that made me open it in vain to check notifications on posts that are no longer there,” he wrote. “It happened five times while I was writing this.” A person who went by Crow sent me an email, in which they said Peach was a place where they went to interact with other people. “I’m autistic and really have no idea to interact with people in real life & it’s a lot easier online especially on Peach,” they said. “Without Peach I wouldn’t have met my ex who is still one of my closest friends today, my art wouldn’t have improved as much, and I think I would be a lot less happy if I’m being honest.” A person named Michael wrote me, saying that they used Peach because it was isolated from the wider internet, which meant it was easier to post without consequence — unlike somewhere larger, like Facebook. “I go on there and clear the drafts for my brain, and I keep up with a few internet friends, and it’s just nice,” they said.
“And like it feels trite to call something a safe space, but Peach is a motherfucking safe space!! In all the ways that twitter is not and refuses to be.” For most people, Peach seemed to be a place to mature, in the realest sense: It’s a place where the wider world doesn’t interfere, somewhere away from the seriousness of the internet at large. A woman named Helena wrote: “My time on peach roughly matches up to my first serious adult relationship (I’ve been with peach slightly longer than my partner, which is wild to think about) and I think in both my relationship with them and my relationship with my Peach friends I’ve realised how much real intimacy is based on those two things — being able to be honest about the hard stuff, and being able to be listened to on the things you don’t think are worth mentioning.” “People are (half-)joking a lot about Peach being a therapist but the time I had access to actual therapy I struggled so much with the idea of telling the truth about how I felt to a stranger, whereas I had a handful of friends I made on Peach (we had mutual friends, not total randos) who I immediately felt like there was this bond of trust with because we were both in the same boat, being honest with each other on a dusty abandoned mobile app,” she went on. That was the magic. There aren’t many open spaces left online, and there aren’t too many playgrounds left. Peach is a 1.0 app in a web 2.0 world; its architecture is nostalgic for a time we’ve left behind. Though not irrevocably, at least not yet, because it’s still around. Source
  8. The company, Devumi, had already filed for bankruptcy in mid-2018. Today, a company that sold social media 'likes' and 'followers' settled its legal case with the New York Attorney General's Office in the first-ever criminal investigation of its kind. The company, Devumi LLC, operated the now-defunct devumi.com website, where it sold likes and views on YouTube, Vimeo, and SoundCloud, endorsements on LinkedIn, pins on Pinterest, and likes, retweets, and Twitter followers. While social media influencers and celebrities knew about the site for many years, and some of them even used it to boost their online presence, the site was brought to the public's attention after a New York Times exposé in January 2018. The NYT article sparked an immediate investigation into Devumi's fraudulent business by New York Attorney General Letitia James. Her office found that Devumi sold both bot (automated) and sock-puppet (human-operated) accounts. Furthermore, investigators also found that Devumi sold accounts and online activity that it had illegally copied from real accounts, including their real avatars and content. Both practices were seen as criminal behavior because the bot accounts helped generate interest or tricked companies into buying advertising or product placements with social media celebrities that faked their follower numbers. Seeing the writing on the wall that would have led to a long legal case, Devumi representatives signed a settlement with the New York Attorney General's Office to make the case go away, on the promise that it and all related companies and associates would stop engaging in any similar practices. Devumi, in the meantime, had ceased operations months before, claiming that the NYT article generated a drop in sales, with social media users going to rival companies, fearing they might be exposed for buying likes and followers like the celebrities named in the Times piece. These included John Leguizamo (actor), Michael Dell (Dell owner), Ray Lewis (retired NFL player), Kathy Ireland (model and actress), Akbar Gbaja-Biamila (host of the "American Ninja Warrior" show), and, ironically, even Martha Lane Fox (Twitter board member). Source
  9. Taking part in risky stunts — whether or not in front of a camera — is a practice that can occasionally end in disaster for those involved. YouTube certainly wants no part of such shenanigans and updated its guidelines to ram home the point. In a Q&A section posted on Tuesday, January 15, introducing its revamped guidelines, YouTube acknowledged that the video-streaming site is “home to many beloved viral challenges and pranks, like Jimmy Kimmel’s Terrible Christmas Presents prank or the water bottle flip challenge,” but said it had to make sure that “what’s funny doesn’t cross the line into also being harmful or dangerous.” Harmful or dangerous? Ah, that would be stunts such as the so-called “Bird Box challenge,” where some folks, inspired by the recent Netflix Original starring Sandra Bullock, have been attempting a range of activities while wearing a blindfold — like driving a car or walking along a train track. We can throw 2018’s Tide pod challenge into the same category too — a rather risky endeavor that involved eating the contents of laundry detergent packs. “Anything for a thumbs up,” appeared to be the mantra behind the madness, though creators are after views to boost their revenue off ads, too. YouTube already bans content showing dangerous activities, but the new rules go into more detail regarding “dangerous challenges and pranks.” It tells creators that it doesn’t even permit videos where someone believes they’re in some kind of physical danger, even if the situation is actually safe. The company offers examples such as home-invasion setups or drive-by shooting pranks. Stunts like fake bomb threats fit neatly into that category, too, though something as daft as that can also get you jailed. YouTube was also keen to make it clear that it doesn’t allow pranks “that cause children to experience severe emotional distress, meaning something so bad that it could leave the child traumatized for life,” adding that it’s been working with psychologists to develop guidelines around the kinds of setups that go too far. The company said it’s giving creators two months to review and clean up their content. During this time, challenges and pranks that violate its guidelines will be removed if its team gets to the banned content first, but the channel will not receive a strike during this period. If a creator disagrees with a strike, they can appeal against it. A strike disappears after 90 days, but creators are warned that if their account receives three strikes for violations within a 90-day period, it will be terminated. YouTube has received a barrage of criticism in recent years for hosting offensive content on its site, an issue which it says it’s tackling with machine-learning algorithms and the addition of more human reviewers. Source
  10. NEW DELHI (Reuters) - Global social media and technology giants are gearing up to fight sweeping new rules proposed by the Indian government that would require them to actively regulate content in one of the world’s biggest Internet markets, sources close to the matter told Reuters. The rules, proposed by the Information Technology ministry on Christmas Eve, would compel platforms such as Facebook, its messaging service WhatsApp and Twitter to remove unlawful content, such as anything that affected the “sovereignty and integrity of India”. This had to be done within 24 hours, the rules propose. The proposal, which caught many holidaying industry executives off guard, is open for public comment until Jan. 31. It will then be adopted as law, with or without changes. The move comes ahead of India’s national election due by May and amid rising worries that activists could misuse social media, especially the WhatsApp messaging service, to spread fake news and sway voters. Industry executives and civil rights activists say the rules smack of censorship and could be used by the government of Prime Minister Narendra Modi to increase surveillance and crack down on dissent. Social media firms have long battled efforts by governments around the world to hold them responsible for what users post on their platforms. U.S. and India lobby groups, representing Facebook and other companies, have sought legal opinions from law firms on the impact of the federal proposal, and have started working on drafting objections to be filed with the IT ministry, four sources in the sector said. “The companies can’t take this lying down. We are all concerned, it’s fundamental to how these platforms are governed,” said an executive at a global social media company. An estimated half a billion people in India have access to the Internet. Facebook has about 300 million users in the country and WhatsApp has more than 200 million. Tens of millions of Indians use Twitter. The new rules, the sources said, would put privacy of users at risk and raise costs by requiring onerous round-the-clock monitoring of online content. Internet firm Mozilla Corp said last week the proposal was a “blunt and disproportionate” solution to the problem of harmful content online, and one which could lead to over-censorship and “chill free expression”. The IT ministry has said the proposal was aimed at only making social media safer. “This is not an effort to curb freedom of speech, or (impose) censorship,” Gopalakrishnan S., a joint secretary at India’s IT ministry said on Saturday when the ministry ran a #SaferSocialMedia campaign on Twitter. Facebook and WhatsApp declined to comment. A Twitter spokesperson said the company continues to engage with the IT Ministry and civil society on the proposed rules. “This will be like a sword hanging on technology companies,” said Nikhil Narendran, a partner specializing in technology law at Indian law firm Trilegal. TIGHT REGULATIONS Such regulations are not unique to India. Vietnam has asked tech companies to open local offices and store data domestically, while Australia’s parliament has passed a bill to force companies to give police access to encrypted data. Germany requires social media companies to remove illegal hate speech within 24 hours or face fines. Nevertheless, the proposal would further strain relations between India and global technology firms. 
They have been at odds since last year due to federal proposals requiring them to store more user data locally to better assist legal investigations. The new rules, called “intermediary guidelines”, also propose requiring companies with more than 5 million users in India to have a local office and a nodal officer for “24x7 coordination with law enforcement”. When asked by a government agency or through a court order, companies should within 24 hours “remove or disable access” to “unlawful” content, they stipulate. The rules also mandate companies to reveal the origin of a message when asked, which if enforced would deal a blow to WhatsApp which boasts of end-to-end encryption to protect user privacy. WhatsApp has battled criticism after fake messages about child kidnap gangs on its platform sparked mob lynchings in India last year. “You have created a monster, you should have the ability to control the monster,” a senior government official said, referring to WhatsApp. “We remain flexible in principle (to suggestions), but we definitely want them to be more accountable, especially the big companies,” the official said. Source
  11. The year 2018 will go down in history as the one where social networking platforms made country-specific changes and agreed to store user data belonging to Indians within the country. With great power comes great responsibility. The quote made popular by the iconic comic series 'Spider-Man' sums up the challenges that social media platforms like WhatsApp and Facebook are facing in India. They have been accused of being a carrier of hate messages and fake news that incited mob violence. And now they stare at the prospect of stricter government rules, greater accountability, and regulatory scrutiny. These platforms, for some of which India is the biggest consumer base outside of their home country, can see the writing on the wall very clearly -- follow the rules of engagement if you want to be in the world's fastest growing economy. The year 2018 will go down in history as the one where social networking platforms not only made country-specific changes -- be it labeling forwarded messages, limiting the number of people a user can send a message to at one go or launching public awareness campaigns against fake news -- but also agreed to store user data belonging to Indians within the country. Globally, the tech and social giants scrambled to make efforts to mollify users with better control of their digital profile and data trail, as they faced backlash over data breaches. The Indian market was no different. Earlier this year, Facebook came under the regulatory glare after a global data leak scandal hit about 87 million users. British data analytics and political consulting firm Cambridge Analytica was accused of harvesting personal information of millions of Facebook users illegally to help political campaigns and influence polls in several countries. Law and IT Minister Ravi Shankar Prasad warned the US social media giant of "stringent" action for any attempt to influence polls through data theft, even threatening to summon its CEO Mark Zuckerberg, if needed. The IT ministry slapped two notices on Cambridge Analytica and Facebook over the data breaches. Facebook admitted that nearly 5.62 lakh people in India were "potentially affected" by the incident and rushed to tighten processes to prevent a repeat. But Cambridge Analytica continued to be evasive, and in the middle of the year the Centre asked the CBI to probe the alleged misuse of data of India's Facebook users by the British political consultancy firm. Facebook, meanwhile, to bring transparency to political advertisements in the run-up to the 2019 general elections, is making it compulsory for advertisers to disclose their identity and location before any such ad material can be run on the popular social media platform and Instagram. Twitter, too, intensified its crackdown on fake and automated accounts and began removing suspicious accounts from users' followers to give a "meaningful and accurate" view of follower counts. But it was Facebook-owned WhatsApp that faced the maximum heat after rumours circulating on the messaging platform incited mob fury and claimed over a dozen lives in various parts of the country. The toxic messages that spread on WhatsApp instigated riots in certain cases, as people forwarded and misinterpreted videos on the messaging platform. Following the government's warnings, WhatsApp recently named a grievance officer for India and announced the appointment of an India head -- a first for the country, which accounts for the most WhatsApp users in the world.
It has launched a label that identifies forwarded messages and barred forwarding of messages to more than five people at one go. As the Supreme Court voiced concerns over irresponsible content on social media, the government rushed to propose changes to the IT Act's rules and released draft amendments which would require "intermediaries" to enable tracing of originators of information when required by government agencies. In the political slugfest that ensued, the Congress alleged that if the amendments were cleared, there would be a tremendous expansion in the power of the "big brother" government over ordinary citizens, "reminiscent of eerie dictatorships". Some cyberlaw experts have equated the changes in the rules to India's own anti-encryption law. The proposals require social media firms to deploy technology-based automated tools for proactively identifying, removing or disabling public access to "unlawful information or content". If approved, these changes will place social media platforms -- even those like WhatsApp which promise users privacy and encryption -- firmly under the government's lens, forcing them to adopt stricter due-diligence practices. The amendments -- which come ahead of the general polls in 2019 -- propose that platforms would have to inform users to refrain from hosting, uploading or sharing any content that is blasphemous, obscene, defamatory, hateful, racially or ethnically objectionable, or that threatens national security. When backed by a lawful order, these platforms will have to, within 72 hours, provide assistance as asked for by any government agency. The IT ministry has met Facebook, WhatsApp, Twitter, Google, and others to discuss the proposed changes and public feedback has been sought by January 15. The seemingly infallible tech behemoths are already being equated with big oil and big tobacco in Western markets. The larger question is whether the shifting public perception and recent moves by the government to regulate these habit-forming, new-age platforms would change the very essence of social media, once considered a harbinger of free speech and individual rights. Source
  12. More Americans get their news from social media than from newspapers, a Pew Research study has found, a tip of the balance in that direction for the first time. As recently as last year, the sides were roughly equal – news via social media was about the same portion as news via print newspapers. According to the report posted on the Pew Research Center website, 20% of U.S. adults say they “often get news via social media,” a slightly higher figure than the 16% who favor newspapers. Pew notes that social media’s recent gain over print follows years of “steady declines” in newspaper circulation, combined with “modest increases” in the portion of Americans using social media. The survey was conducted earlier this year. TV remains the single most popular news consumption platform, though, despite steady decline in recent years: 49% of adults get their news from TV. Coming in at #2 are news websites (33%), with radio at #3 (26%). Social media and print round out the top five. When combined, news websites and social media are closing in on TV: 43% to TV’s 49%. Breaking down the types of TV news, Pew found that local TV is the most popular, with 37% of adults going that route, while 30% use cable TV most often and 25% turn to national evening network news programs. Pew asked about streaming devices for the first time in its annual study, finding that 9% of U.S. adults often get news from a streaming device on their TV. The majority of those using streaming devices (73%) don’t do so exclusively: they also use broadcast or cable TV for news. As would be expected, age plays a significant role in news consumption. Americans 65 and older are five times as likely as 18- to 29-year-olds to get their news from TV. Only 16% of that younger demo say they get their news from TV often. The 30- to 49-year-olds who do so are at about 36%. On the flip side: that youngest demo is four times as likely to get news from social media as the oldest demo. The oldest group is the only one in which print has held its popularity, with four in 10 getting news from dead trees often. The middle group showed a preference for websites, with 42% of the 30- to 49-year-olds going online or using apps. Of the younger demo, that percentage is 27%, trailing behind the most popular – social media (36%). Only 2% of the youngest adults turn to print. Another Pew finding: Younger and middle-age Americans are far less likely than their elders to rely on only one platform. No more than half of the below-49ers rely on a single platform. Source
  13. Republicans, Democrats, and Independents may not be able to agree on taxes, foreign policy, or immigration. But they increasingly agree that social media do more to hurt free speech and democracy than help, according to a new poll out from Axios. The survey of 3,622 adults was conducted by SurveyMonkey earlier this month. It showed that over the last year, the share of adults who thought that social media helped went from 53% to 40%. The ranks of those who said the platforms hurt jumped from 43% to 57%. Although people with different political party allegiances differed in their total assessment of social media outlets, they all showed significant and similar shifts in their outlooks. Democrats who thought the platforms were good went from 61% to 50%; the number who thought they were bad jumped from 37% to 48%. At 52%, a majority of Republicans had already thought them a problem last year. That number now stands at 69%, while those with a positive take dropped from 45% to 30%. Independents were in between the other two groups. Negative takes jumped from 42% to 58%; the number who thought the platforms were good now stands at 39%, versus 55% last year. The changes in attitude come as one scandal after another has rocked the industry and Congress brought Twitter CEO Jack Dorsey and Facebook COO Sheryl Sandberg in for multi-hour hearings. A poll conducted for Fortune earlier this month found that Facebook is the least trustworthy of all big tech companies regarding the safety of user data. A recent article from the New York Times that drilled into the company’s responses to multiple crises, and the actions reportedly taken by Sandberg and CEO Mark Zuckerberg, led to renewed criticism. Even Apple CEO Tim Cook has now called regulation “inevitable” because free market responses failed. And there probably isn’t an app for that. Source
  14. Conservative apps deliver curated partisan news feeds on what are effectively private social media platforms, the New York Times reports. They're "creating a safe space for people who share a viewpoint, who feel like the open social networks are not fun," says the developer of such apps. Republicans who feel Silicon Valley harbors a bias against conservative views are developing their own online networks, according to a Saturday report in The New York Times. The Times highlighted a new generation of mobile applications made for the National Rifle Association and the Great America pro-Trump political action committee (PAC), which deliver curated partisan news feeds on what are effectively private social media platforms. These apps are gaining attention at a time when Silicon Valley has been accused repeatedly of using their power to stifle right-leaning voices. "People with center-right views feel like the big social platforms, Facebook and Twitter, are not sympathetic to their views," said Thomas Peters, CEO of uCampaign, the Washington startup behind the NRA and Great America apps. Peters added that the apps are "creating a safe space for people who share a viewpoint, who feel like the open social networks are not fun places for them." While looking to propagate their message outside the offerings of Big Tech ahead of next month's midterm election, these mini-platforms also actually aim to harness the enormous reach of those networks. The Times noted that the right-leaning platforms offer options to post messages on Facebook and Twitter that are scripted by the campaigns. Democratic candidates, including former President Barack Obama, have used consumer-facing apps to promote their political campaigns and advocacy. This year, Democratic campaigns are also embracing peer-to-peer text messaging, believed to engage younger voters more than stand-alone candidate apps, the Times said. Meanwhile, uCampaign recently started its own peer-to-peer texting platform, RumbleUp, for conservative campaigns. The full report can be found on the New York Times' website. Source
  15. Facebook and Twitter are declining as news and media referral sources on mobile, according to a report from traffic analytics company Chartbeat, which finds that users are increasingly using search for news as well as migrating to publisher and news aggregation apps. Why it matters: The increase of social media distribution on smartphones meant that more people generally had access to more news and information than ever before, but a lot of it was unvetted, one-sided or outright false. Between the lines: Three market forces are pushing news traffic to come from places other than traditional forms of social media... Facebook's January 2017 decision to begin distributing less news, which is pushing more people to access news traffic from sources directly via search. A commitment to higher-quality news aggregation services from device manufacturers. A narrative around fake news on social media that's pushing consumers to look elsewhere for authoritative news and information. The big picture: Since January 2017, per Chartbeat... Twitter and Facebook have declined in their share of traffic sent to news sites. Facebook traffic to publishers is down so much (nearly 40%) that according to Chartbeat, "a user is now more likely to find your content through your mobile website or app than from Facebook." Google Search on mobile has grown more than 2x, helping guide users to stories on publishers' owned and operated channels. Direct mobile traffic to publishers' websites and apps has also steadily grown by more than 30%. Flipboard has grown 2x in news referrals. It is the default news app on Samsung devices in the United States. Google News (Mobile) has grown 3x since May 2018. It is the default app on "Stock" Android devices globally. Apple News has grown, although it's unclear how much. It is the default news aggregator on iOS with certain products in the U.S., UK and Australia. The bottom line: At a high level, it's an example of how new technologies can be partially regulated by market pressure (and threats of democratic government regulation) over time. Source
  16. WASHINGTON (Reuters) - California will join other states planning to participate in a meeting organized by the U.S. Justice Department to discuss concerns about conservative voices being stifled on social media, the state’s attorney general said on Thursday. The Justice Department said it had invited a bipartisan group of 24 state attorneys general to attend the Sept. 25 meeting. Attorney General Jeff Sessions called the meeting after President Donald Trump criticized social media outlets for what he said was unfair treatment of conservatives. Lawmakers in both the House of Representatives and the Senate held hearings this month to grill executives of social media companies about their handling of conservative voices online. Companies like Facebook Inc, Twitter Inc and Google owner Alphabet Inc have been accused by some conservatives of seeking to exclude their ideas. The companies deny any such bias. “Today, the Justice Department formally sent invitations to a bipartisan group of twenty-four state attorneys general that expressed an interest in attending the meeting hosted by Attorney General Jeff Sessions,” a Justice Department official said. “The meeting will take place here at the Department of Justice, and we look forward to having a robust dialogue with all attendees on the topic of social media platforms.” The Justice Department invited officials from California Attorney General Xavier Becerra’s office to the meeting after Becerra reached out to Washington, Becerra spokeswoman Sarah Lovenheim said in an email statement. “States like California, the nation’s tech leader and home to a $385 billion tech industry, have a wealth of insight and expertise to share in any inquiry about the role of technology companies, and we look forward to a thoughtful conversation in Washington, D.C.,” Becerra said in a statement. Texas and South Carolina said previously they would participate, while others said they were not invited. Source
  17. Social media companies are deliberately addicting users to their products for financial gain, Silicon Valley insiders have told the BBC's Panorama programme. "It's as if they're taking behavioural cocaine and just sprinkling it all over your interface and that's the thing that keeps you like coming back and back and back", said former Mozilla and Jawbone employee Aza Raskin. "Behind every screen on your phone, there are generally like literally a thousand engineers that have worked on this thing to try to make it maximally addicting" he added. In 2006 Mr Raskin, a leading technology engineer himself, designed infinite scroll, one of the features of many apps that is now seen as highly habit forming. At the time, he was working for Humanized - a computer user-interface consultancy. Infinite scroll allows users to endlessly swipe down through content without clicking. "If you don't give your brain time to catch up with your impulses," Mr Raskin said, "you just keep scrolling." He said the innovation kept users looking at their phones far longer than necessary. Mr Raskin said he had not set out to addict people and now felt guilty about it. But, he said, many designers were driven to create addictive app features by the business models of the big companies that employed them. "In order to get the next round of funding, in order to get your stock price up, the amount of time that people spend on your app has to go up," he said. "So, when you put that much pressure on that one number, you're going to start trying to invent new ways of getting people to stay hooked." Lost time A former Facebook employee made a related point. "Social media is very similar to a slot machine," said Sandy Parakilas, who tried to stop using the service after he left the company in 2012. "It literally felt like I was quitting cigarettes." During his year and five months at Facebook, he said, others had also recognised this risk. "There was definitely an awareness of the fact that the product was habit-forming and addictive," he said. "You have a business model designed to engage you and get you to basically suck as much time out of your life as possible and then selling that attention to advertisers." Facebook told the BBC that its products were designed "to bring people closer to their friends, family, and the things they care about". It said that "at no stage does wanting something to be addictive factor into that process". Like's legacy One of the most alluring aspects of social media for users is "likes", which can come in the form of the thumbs-up sign, hearts, or retweets. Leah Pearlman, co-inventor of Facebook's Like button, said she had become hooked on Facebook because she had begun basing her sense of self-worth on the number of "likes" she had. "When I need validation - I go to check Facebook," she said. "I'm feeling lonely, 'Let me check my phone.' I'm feeling insecure, 'Let me check my phone.'" Ms Pearlman said she had tried to stop using Facebook after leaving the company. "I noticed that I would post something that I used to post and the 'like' count would be way lower than it used to be. "Suddenly, I thought I'm actually also kind of addicted to the feedback." Vulnerable teens Studies indicate there are links between overusing social media and depression, loneliness and a host of other mental problems. In Britain, teenagers now spend about an average of 18 hours a week on their phones, much of it using social media. 
Ms Pearlman believes youngsters who recognise that social media is problematic for them should also consider steering clear of such apps. "The first things I would say is for those teenagers to step into a different way of being because with a few leaders, it can help others follow," she said. Last year Facebook's founding president, Sean Parker, said publicly that the company set out to consume as much user time as possible. He claimed it was "exploiting a vulnerability in human psychology". "The inventors", he said, "understood this consciously and we did it anyway." But Ms Pearlman said she had not intended the Like button to be addictive. She also believes that social media use has many benefits for lots of people. When confronted with Mr Parker's allegation that the company had effectively sought to hook people from the outset, senior Facebook official Ime Archibong told the BBC it was still looking into the issue. "We're working with third-party folks that are looking at habit-forming behaviours - whether it's on our platform or the internet writ large - and trying to understand if there are elements that we do believe are bringing harm to people," he said, "so that we can shore those up and we can invest in making sure those folks are safe over time." Recent reports indicate Facebook is working on features to let users see how much time they have spent on its app over the previous seven days and to set daily time limits. The Panorama programme also explores the use of colour, sounds and unexpected rewards to drive compulsive behaviour. Twitter declined to comment. Snap said it was happy to support frequent creative use of its app, Snapchat. But it denied using visual tricks to achieve this and added that it had no desire to increase empty engagement of the product. (A toy sketch of the endless-scroll pattern described earlier in this item follows below.) < Here >
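As a side note on the mechanics, the infinite scroll feature described in this item is essentially pagination with no terminal page: the server always returns another batch plus a cursor, so the reader never gets a natural stopping cue. The sketch below is a toy illustration in plain Python, with invented names and no real platform's API.

```python
# Toy sketch of the endless-feed ("infinite scroll") pattern: every request
# yields another batch plus a cursor, so there is never a final page.
# Illustrative only; not any real platform's API.

def fetch_page(cursor: int, page_size: int = 10) -> tuple[list[str], int]:
    """Return the next batch of items and the cursor for the batch after it."""
    items = [f"post #{i}" for i in range(cursor, cursor + page_size)]
    return items, cursor + page_size  # there is always a "next" cursor


# Client side: each time the user nears the bottom, another page is requested.
cursor = 0
for _ in range(3):  # pretend the user scrolled through three screens
    page, cursor = fetch_page(cursor)
    print(page[0], "...", page[-1])
```

A design with a natural end would instead return an empty batch (or no cursor) once the feed is exhausted, which is the stopping cue Raskin argues the brain needs in order to catch up with its impulses.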
18. HANOI: Tens of thousands of Vietnamese social media users are flocking to a self-professed free speech platform to avoid tough internet controls in a new cybersecurity law, activists told AFP.

The draconian law requires internet companies to scrub critical content and hand over user data if Vietnam's Communist government demands it. The bill, which is due to take effect from January 1, sparked an outcry from activists, who say it is a chokehold on free speech in a country where there is no independent press and where Facebook is a crucial lifeline for bloggers. The world's leading social media site has 53 million users in Vietnam, a country of 93 million.

Many activists are now turning to Minds, a US-based open-source platform, fearing Facebook could be complying with the new rules. "We want to keep our independent voice and we also want to make a point to Facebook that we're not going to accept any censorship," Tran Vi, editor of activist site The Vietnamese, which is blocked in Vietnam, told AFP from Taiwan.

Some activists say they migrated to Minds after content removal and abuse from pro-government Facebook users. Two editors' Facebook accounts were temporarily blocked and The Vietnamese Facebook page can no longer use the "instant article" tool to post stories. Nguyen Chi Tuyen, an activist better known by his online handle Anh Chi, says he has moved to Minds as a secure alternative, though he will continue using Facebook and Twitter. "It's more anonymous and a secretive platform," he said of Minds. He has previously had to hand over personal details to Facebook to verify his identity and now fears that information could be used against him.

'Scary' law

About 100,000 new active users have registered in Vietnam in less than a week, many posting on politics and current affairs, Minds founder and CEO Bill Ottman told AFP. "This new cybersecurity law is scaring a lot of people for good reason," he said from Connecticut. "It's certainly scary to think that you could not only be censored but have your private conversations given to a government that you don't know what they're going to use that for."

The surge of new users from Vietnam now accounts for nearly 10 percent of Minds' total user base of about 1.1 million. Users are not required to register with personal data and all chats are encrypted.

Vietnam's government last year announced a 10,000-strong cybersecurity army tasked with monitoring incendiary material online. In its unabashed defence of the new law, Vietnam has said it is aimed at protecting the regime and avoiding a "colour revolution", but refused to comment to AFP on Thursday.

Facebook told AFP it is reviewing the law and said it considers government requests to take down information in line with its Community Standards, and pushes back when possible. Google declined to comment on the new law when asked by AFP, but its latest Transparency Report showed that it had received 67 separate requests from the Vietnamese government to remove more than 6,500 items since 2009, the majority since early last year. Most were taken down, though Google does not provide precise data on content removal compliance.

Ottman says countries like Vietnam are fighting a losing battle trying to control online expression. "It's like burning books, it just causes more attention to be brought to those issues and it further radicalises those users because they're so upset that they're getting censored," he said.

Source
19. HOW TO DISCONNECT FROM SOCIAL MEDIA BUT STAY CONNECTED TO THE WORLD

Social media is terrible, and social media is amazing. It inundates us with panic-inducing news and rage-inducing hot takes; it also keeps us connected to our friends, professional circles, and news from around the world. But if you try to drink straight from the fire hose, you're going to drown, or get your head blasted pretty hard. The key is figuring out what social media is good for, for you, and then getting other things that you need from somewhere else.

I personally find Twitter terrible for news. Information is scattered and often incorrect, and it usually comes with a lot of panic: "THIS ISN'T NORMAL" and the like, as if I won't know things are bad unless I'm shouted at. When social media is our only news source, or source for updates from our friends, or for links to good essays to read, it becomes really hard to take a break. You can use Freedom to block Twitter from your phone until 10am (that's a bonus hack, by the way; I do that and it's great), but if Twitter is the only place you get news, you may spend your morning worrying about what breaking news you're missing out on, not to mention lacking articles to browse on the train in to work.

It's important that your social media feeds work for you. On Facebook, you can unfollow, unfriend, and snooze to get inflammatory news-sharers out of your feed. On Twitter, you can mute keywords and accounts. You can also use Tweetdeck to follow whittled-down lists instead of your entire feed; when you don't want to drown in the endless feed, but want to keep up with your actual friends or favorite cute animals, you can just do that.

Once you've broken your morning Twitter habit, or scrubbed your feed of everyone but your friends and cute animals, or whatever works for you, here's how to keep up with the world in a way that'll make you feel a little less batty.

RSS Feeds

Young 'uns, pull up a chair. Back before the endless scroll of the social media feed, one way we got our electronic news was via RSS readers, helpful tools that aggregated the feeds of our favorite sites and blogs, listing new articles so we could browse and determine what we wanted to read. (And, unlike social media feeds, an RSS reader's feed had a blessed end.) The best and most beautiful RSS reader is no longer with us (RIP Google Reader), but others still exist. Try Feedly or Inoreader. You'll have to spend some time importing the sites you want to follow, but once you do you'll have an easy list of headlines to browse whenever you like. You can make separate lists for national news, essays, blogs, or publications in your own field. You can keep up with as little or as much as you like. And it won't be interspersed with the million other things screaming for your attention on social media.

Push Alerts

I know this sounds counterintuitive, but I actually found that signing up for push alerts for breaking news made opening Twitter much less anxiety-inducing for me. Instead of scrolling my feed wondering what fresh hell I was about to encounter, I knew that I'd get a push alert on my phone each time a new fresh portal to hell opened up.

Newsletters

If Tinyletters are the new blogs, then why not get your news sent straight to your inbox, too? You can get the latest headlines from your newspaper of choice, or a weekly tour of an obscure field of interest curated by an expert in said field. Some that come highly recommended: Vox Sentences, The Washington Post's Daily 202, No Complaints, The Ann Friedman Weekly. (If you've got some you love, please recommend them in the comments!)

Good Old-Fashioned Newspapers

Go to their website and browse some headlines! Maybe pay for an online subscription to get behind the paywall (and support their work). I know it sounds nuts, but that's where the news is, and when you go beyond the headlines and read a few articles, without a feed's worth of other posts grappling for your eyes, you'll find that the news can be surprisingly informative. Who knew!

Talk to Friends (Online is Fine)

Of course, social media isn't only about news and reading material; it's also a way we stay connected to our friends. Social contact on social media can feel thin, but it's not insignificant, and if you don't fill the void, you'll have, well, a void. If you're pulling back from your feeds, take advantage of the other ways technology lets us chat with our friends: gchat, Slack, text messages, whatever it takes. (Please do not harangue me for being a millennial in the comments.)

SOURCE
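As a small follow-on to the RSS suggestion above: most readers, Feedly and Inoreader included, work from the same standard feed formats, and you can pull headlines yourself in a few lines of Python. This is a rough sketch that assumes the third-party feedparser package is installed (pip install feedparser); the feed URLs are placeholders you would swap for the sites you actually follow.

```python
# Minimal sketch of browsing headlines from RSS/Atom feeds, in the spirit of
# the RSS-reader suggestion above. Assumes `pip install feedparser`; the feed
# URLs below are placeholders -- swap in the sites you actually follow.
import feedparser

FEEDS = {
    "news": "https://example.com/rss.xml",     # placeholder URL
    "blogs": "https://example.org/atom.xml",   # placeholder URL
}

def latest_headlines(feed_url, limit=5):
    """Return (title, link) pairs for the newest entries in a feed."""
    parsed = feedparser.parse(feed_url)
    return [(entry.get("title", "untitled"), entry.get("link", ""))
            for entry in parsed.entries[:limit]]

if __name__ == "__main__":
    for name, url in FEEDS.items():
        print(f"== {name} ==")
        for title, link in latest_headlines(url):
            print(f"- {title}\n  {link}")
```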
20. Heavy GDPR-like fines proposed for firms that fail to tackle abuse

The culture secretary has pledged new laws to regulate social media firms after being snubbed by 10 out of 14 companies invited to consult on regulation.

Speaking on the Andrew Marr Show yesterday, Matt Hancock, secretary of state for culture, digital, media and sport, said the government does not have adequate power to regulate social media firms, and that self-policing to this point has not worked. He revealed that being ignored by several major players in the industry - who were invited to contribute on plans to tackle online abuse, to remove inappropriate content, and to introduce a levy - has given him greater motivation to introduce new legislation.

"The fact that only four companies turned up when I invited the 14 biggest in; it gave me a big impetus to drive this proposal to legislate through," Hancock said. "Before then, and until now, there has been this argument - work with the companies, do it on a voluntary basis, they'll do more that way because the lawyers won't be involved. And after all, these companies were set up to make the world a better place. The fact that these companies have social media platforms with over a million people on them, and they didn't turn up [is disappointing]. One of the problems that we've got is we engage with Facebook and Google and Twitter, and they get all of the press, they get all of the complaints in the public debate, but there's now actually a far greater number of social media platforms, like musical.ly and others, that didn't show up."

Announcing the proposed legislation, Hancock said: "Digital technology is overwhelmingly a force for good across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the internet through legislation, in a way that supports innovation."

The new laws, which the culture secretary said are likely two years away, will follow failed efforts to engage with tech giants voluntarily on new regulations, outlined in a policy paper last year. Whitehall's proposal, which had invited views from tech companies prior to feeding into legislation, said the 'levy' would be sought on a voluntary basis, with the document reading: "We may then seek to underpin this levy in legislation, to ensure the continued and reliable operation of the levy." It adds: "The levy will not be a new tax on social media."

But the government is now intent on using legislation to tackle these issues, with part of the Data Protection Bill currently going through Parliament dedicated to introducing fines for indiscretions such as allowing underage users on social media platforms.

Speaking on Sky News' politics show Ridge on Sunday, the UK's digital minister, Margot James, expanded on the government's position, saying: "The consultation that we conduct is quite likely to result in measures that we will outline into law that will oblige companies to take content down - and obviously there will have to be consequences for them to face if they don't comply."

Asked what these consequences would be, James added: "In the data protection legislation that is just finishing its passage through Parliament at the moment, there is the capacity for the Information Commissioner to fine companies up to 4% of their global annual turnover. We would envisage something similar in this area. Obviously that would be a cap - like a maximum - and there would be a scale of other deterrents en route to that. But we do understand that companies need to face consequences if they do not comply with laws applicable online just as they are offline."

DCMS and the Home Office will publish a white paper later this year with further details.

Antony Walker, deputy CEO of trade industry body TechUK, said: "Where we can move quickly with confidence on the effectiveness of the outcome we should do. But we must avoid 'quick fixes' that are unworkable and could end up being counter-productive. We need to get to a position where government and tech firms are 100% aligned on what needs to be done so that we can get on and implement solutions that we can all have confidence will work. There is still a lot of work to be done between now and the publication of a white paper, and as a sector we want to keep building on the serious and constructive engagement that has happened to date."

Source
21. A new study suggests that middle-aged people between the ages of 30 and 49 who spend time on social media are more likely than millennials to report mental health problems. The reason, the authors suggest, is that those over the age of 30 are more likely to dwell on the direction in which their life is going and whether or not they have achieved their personal goals.

"In their desire to validate accomplishments, many middle-aged adults may look to high school peers (i.e. those who roughly had the same starting line) as a point of comparison," wrote the study's authors, led by Dr. Bruce Hardy of Temple University, in the journal Computers in Human Behavior. The study, which was based on data collected from a social survey of nearly 750 people, went on to explain, "As most people present themselves hyper-positively online, social comparisons are unrealistic and may deteriorate self-worth and mental well-being." (Related: Researchers have ranked Instagram as the worst social media app for young people's mental health.)

Considering that millennials are some of the heaviest social media users on the Internet, one might expect young people to be the most likely to report mental health problems. However, according to the study: "Millennials have spent their entire lives with digital media. Because they and their social media 'friends' have grown together online, social comparison may produce less stark and shocking contrast between them and their peers compared to older adults who adopted social media after losing contact with many of their peers." (Related: A former Facebook executive has argued that social media is ripping society apart.)

How social media affects your health

The findings of this study contribute to the broader idea that social media usage, while beneficial in some ways, can actually be more harmful than most people realize. Such was the argument made in a recent article published by Sabrina Barr of The Independent, which outlined six ways that social media negatively affects your mental health.

Self-esteem

As you scroll down the Facebook news feed or look at picture after picture on Snapchat, it's often hard not to compare yourself to others. As such, many people define happiness as how they look relative to others. This can have a serious effect on self-esteem and self-worth.

Human connection

Today, there are just as many – if not more – relationships being formed on social media as in real life. As a result, people are becoming less and less skilled at communicating in person, and traditional human connection is slowly becoming obsolete.

Memory

If you have Facebook or Instagram, chances are you've gone back and looked at your old pictures and videos a time or two. However, it's important to remember that when we post on social media, we tend to post pictures or videos that make us look as happy and attractive as possible (photo filters, crafty editing, etc.). Thus, when we go back and look at these posts, it often distorts reality and alters our memory of how those events really occurred.

Sleep

Many of us are so addicted to social media that we can't stop scrolling even when we climb into bed for the night. This limits our ability to get enough rest, which in turn can have an impact on how we function throughout the day.

Attention span

The amount of information we now have access to as a result of social media actually limits our attention span and causes us to become easily distracted. Doctors recommend training yourself to exercise willpower by intentionally not checking your phone for five minutes at a time throughout the day.

Mental health

As Dr. Bruce Hardy of Temple University found in his study, social media usage can lead to mental health issues and general unhappiness, especially among middle-aged adults. Perhaps it is time to put down the smartphones, close the laptops, and get away from sites like Facebook and Twitter, even if it's only for a few hours each day.

< Here >
22. Facebook's marketing department is using algorithms to identify emotionally vulnerable and insecure youth as young as 14, The Australian reported today after reporters managed to get their hands on a 23-page report from Facebook's Australian office.

The document, dated this year and flagged as "Confidential: Internal Only," presents Facebook's advertising capabilities and highlights the social network's ability to detect, determine, and categorize user moods using sophisticated algorithms. The leaked file, authored by two of Facebook Australia's top execs, is a presentation that the company is willing to share with potential customers only under a non-disclosure agreement.

Facebook using algorithms to categorize emotional states

In it, Facebook reveals that by monitoring posts, photos, and interactions, it can approximate a person's emotional state into categories such as "silly," "overwhelmed," "anxious," "nervous," "defeated," "stressed," "stupid," "useless," or a "failure." Facebook claims that it can identify not only emotions and state of mind, but also how and when these change in real time and across periods of time.

While this was to be expected from a company as advanced as Facebook, the document reveals the social network won't shy away from using its algorithms against youth as young as 14, data which it then makes available to advertisers, so they can target teens who feel insecure or are emotionally vulnerable. The social network is using these points to lure advertisers to its network, suggesting it could help in targeting users, including young teens, in their most vulnerable states, when people tend to buy products to make themselves feel better. The document specifically mentions Facebook's ability to target over 6.4 million "high schoolers," "tertiary students," and "young Australians and New Zealander [...] in the workforce."

Facebook confirmed validity of leaked document

Contacted by reporters, Facebook admitted the document was real, apologized, and said it would start an investigation into the practice of targeting its younger userbase. Current privacy laws allow companies to collect data on users if it is anonymized and stripped of any personally identifiable information, such as names, precise addresses, or photos. Facebook said it respects privacy laws, but reporters said Facebook is in breach of the Australian Code for Advertising & Marketing Communications to Children, an advertising guideline which states that advertisers must get permission from the child's parent or guardian before collecting any data.

Facebook, which currently boasts a total monthly active userbase of over 1.85 billion, is the second-largest online advertiser behind Google. A recent report revealed that Google and Facebook are cannibalizing 99% of all the money in digital advertising, and have established a de facto duopoly. In 2014, news broke that Facebook had meddled with people's news feed algorithms to test whether it could influence their moods.

Source
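The article describes mood categorization only at a high level, and Facebook's actual models are not public. Purely to make the general idea concrete, here is a deliberately crude Python sketch that maps post text onto mood labels using a hand-made keyword lexicon; the labels echo those quoted above, but the lexicon and scoring are invented and bear no relation to Facebook's system.

```python
# Deliberately crude illustration of mapping post text to mood labels, to make
# the general idea concrete. This is NOT Facebook's method (which is not
# public); the lexicon, labels, and scoring are invented for this sketch.
from collections import Counter

MOOD_LEXICON = {
    "anxious":     {"worried", "anxious", "nervous", "scared"},
    "overwhelmed": {"overwhelmed", "exhausted", "swamped", "stressed"},
    "defeated":    {"useless", "failure", "hopeless", "defeated"},
    "silly":       {"lol", "haha", "silly", "funny"},
}

def classify_mood(post_text):
    """Count lexicon hits per mood and return the best match (or 'neutral')."""
    words = set(post_text.lower().split())
    scores = Counter({mood: len(words & vocab)
                      for mood, vocab in MOOD_LEXICON.items()})
    mood, hits = scores.most_common(1)[0]
    return mood if hits > 0 else "neutral"

if __name__ == "__main__":
    print(classify_mood("so stressed and exhausted this week"))  # overwhelmed
    print(classify_mood("haha that was silly"))                  # silly
    print(classify_mood("off to the shops"))                     # neutral
```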
23. A 20-year-old Thai man streamed the murder of his 11-month-old daughter on Facebook Live on Monday, in another appalling case of Facebook's live streaming service being used for exactly what Facebook never intended. According to Thai news media, the man, named Wuttisan Wongtalay, murdered his daughter on the rooftop of an abandoned hotel in the Thai town of Phuket.

Facebook took nearly a day to take down the videos

He streamed the murder in two video feeds started on Monday, at 16:50 and 16:57, local time (09:50 and 09:57 GMT). The videos show the father tying a rope around his daughter's neck and dropping her off the side of the hotel. The father took his own life shortly afterwards, but he did not stream his own death. Thai authorities said the suspect had a fight with his wife, and acted in desperation as he believed his wife didn't love him anymore and wanted to leave him.

The live stream was converted into a video and hosted on the father's Facebook profile, where it remained for almost a full day, as Facebook failed to act on numerous user reports. The video was finally taken down on Tuesday, after Thai police and ministry officials reached out to the company. Nonetheless, by that time the video had been copied and uploaded to several other platforms such as YouTube, DailyMotion, and others.

Live feature is turning into Facebook's biggest nightmare

Facebook's tardy response comes after the company failed to act quickly when the murder of an elderly man in Cleveland was streamed on the platform earlier this month. In March, a group of teenagers used Facebook Live to broadcast the sexual assault of a 15-year-old teen. Facebook Live was also used to stream the brutal beating of a Trump supporter in January. Also in January, three men from Uppsala, Sweden, streamed another sexual assault using Facebook's platform. The three men are now in custody. These are only the major incidents that took place this year. Multiple incidents took place in 2016, and the common factor was that Facebook's staff failed to act in due time, leaving videos online for hours despite reports from its users. Following the Cleveland incident, Facebook promised to improve its live video monitoring and review systems.

The recent wave of live stream incidents and the flood of fake news stories are currently Facebook's biggest problems. On a side note, Facebook appears to be losing the battle to establish itself as a source of reliable news. Big news agencies have already dropped the company's Instant Articles service, while others have accused the company of intentionally burying their articles. All this happens while governments in Europe are threatening the company with fines over the increasing number of fake news articles shared on its network. The next few months will be crucial for Facebook.

Source
24. Used An iPhone And Social Media Pre-2013? You May Be Due A Tiny Payout

Twitter, Instagram, and others are stumping up $5.3m to settle a privacy suit with implications for those who used social-media apps on an iPhone in 2012 or earlier. Given the millions who downloaded the social-media apps in question, it's likely the settlement will result in a very small payment for each individual.

Eight social-media firms, including Twitter and Instagram, have agreed to pay $5.3m to settle a lawsuit over their use of Apple's Find Friends feature in iOS. The main problem that complainants had with the accused firms was that their apps, which used Apple's Find Friends, didn't tell users that their contact lists would be uploaded to company servers. The lawsuit alleged the privacy incursions occurred between 2009 and 2012, the year the class action suit began.

Instagram, Foursquare, Kik, Gowalla, Foodspotting, Yelp, Twitter, and Path have agreed to pay into the settlement fund, which will be distributed to affected users via Amazon.com, according to VentureBeat.

Yelp had previously argued it was necessary to store user contact lists to enable the Find Friends feature, which consumers understood would occur in the context of using a mobile app. However, US District Judge Jon Tigar countered that the key question was whether Apple and app developers "violated community norms of privacy" by exceeding what people reasonably believe they consented to. "A 'reasonable' expectation of privacy is an objective entitlement founded on broadly based and widely accepted community norms," said Tigar.

If the judge approves the settlement, Apple and LinkedIn will be the only remaining defendants among the 18 firms originally accused of the privacy violation. Given the millions of people who downloaded these apps, it's likely the settlement will result in a very small payment for each individual. However, people who took part in the class action suit could receive up to $15,000 each.

Source
25. In the late 1990s, Kevin Mitnick introduced his version of human hacking to the digital world. Mitnick was then, and still is, an expert at social engineering, which is the "... psychological manipulation of people into performing actions or divulging confidential information." Over the years, Mitnick and his counterparts polished their skills, and very few doors - digital and otherwise - went unopened. However, the return on investment for cybercriminals was less than satisfactory; even though scamming a victim via social engineering is simpler than using a technical hack, the process is labor intensive.

Enter social media

Fast forward a few years to when social media came into the picture. "Social engineering, when coupled with the new and widespread use of social media, becomes more effective by exploiting the wealth of information found on social-networking sites," writes John J. Lenkart in his Naval Postgraduate School thesis (PDF). "This information allows for more selective targeting of individuals with access to critical information." In his research, Lenkart, a unit chief in the Federal Bureau of Investigation, determined that all social-engineering attacks have one thing in common: they rely on acquiring pertinent information about the target organization or individual. Figure A depicts the steps involved in a social-engineering attack.

Figure A

Lenkart next mentions that employing social-media outlets increases the effectiveness of social engineering by expanding the attack surface of the intended victim organization and its members. The way social media comes into play is shown in Figure B, circled in red.

Figure B

Lenkart then writes that data mining social-media outlets clearly enhances social-engineering techniques by making it possible to identify the sphere of influence or inner trust circle of a targeted individual or organization.

Collective-attention threats

James Caverlee, associate professor at Texas A&M University, along with Kyumin Lee, assistant professor at Utah State University, and former Texas A&M student Krishna Y. Kamath, now at Twitter, are also interested in social-media-enhanced social engineering, in particular threats involving spam. In their coauthored paper Combating Threats to Collective Attention in Social Media: An Evaluation (PDF), they write, "Breaking news, viral videos, and popular memes are all examples of the collective attention of huge numbers of users focusing on large-scale social systems. But this self-organization, leading to user attention quickly coalescing and then collectively focusing on the phenomenon, opens these systems to new threats like collective-attention spam."

Caverlee, Lee, and Kamath first point out that large-scale social systems such as web-based social networks, online social-media sites, and web-scale crowdsourcing systems have several positive features: large-scale growth in the size and content of the community; bottom-up discovery of "citizen-experts"; discovery of new resources beyond the scope of the system's designers; and new social-based information search and retrieval algorithms. However, the three add, "The relative openness and reliance on users coupled with the widespread interest and growth of these social systems carries risks and raises growing concerns over the quality of information in these systems."

In this TechRepublic article, fake news is discussed as a way for adversaries to get targeted victims to do something they would rather not. Collective-attention threats have the same effect and appear to be more successful because the content is considered trustworthy. Next, the researchers identify three types of collective-attention threats targeting social-media outlets: content pollution by social spammers; coordinated campaigns for strategic manipulation; and threats to collective attention.

A possible solution

This Texas A&M University press release notes that Caverlee and his team are building a threat-awareness application that will serve as an early warning system for users. The countermeasure should mitigate the effects of collective-attention threats. The early warning system consists of a framework for detecting and filtering social spammers and content polluters in social systems. The framework is built around what they call a social honeypot (Figure C) that will:

- harvest spam profiles from social-networking communities;
- develop robust statistical user models for distinguishing between social spammers and legitimate users (a rough sketch of this step follows the article); and
- filter out unknown (including zero-day) spammers based on these user models.

The framework will also consist of a method set and algorithms that detect coordinated campaigns in large-scale social systems by linking free-text posts with common "talking points" and extracting campaigns from large-scale social systems.

Figure C

Not to be taken lightly

Of concern to the Texas A&M researchers is the understanding that it only takes a few spammers using collective-attention threat methodology to disrupt the quality of information. FBI Unit Chief Lenkart, in his conclusion, warns, "The pervasiveness of social-networking media cannot be ignored when developing a security program to limit its impact on an organization's vulnerability to the insider threat."

By Michael Kassner

http://www.techrepublic.com/article/fbi-agent-explores-how-social-engineering-attacks-get-a-boost-from-social-media/
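As flagged above, here is a rough sketch of what the "statistical user model" step of the social-honeypot framework could look like in practice. It assumes scikit-learn is installed, the profile features and toy training data are invented, and it is in no way the Texas A&M team's actual implementation; it simply illustrates training a classifier on honeypot-harvested spam profiles versus known-legitimate ones and using it to score new accounts.

```python
# Rough sketch of the "statistical user model" idea from the social-honeypot
# framework described above: train a classifier on profiles harvested by the
# honeypot (labelled spammers) plus known-legitimate profiles, then score new
# accounts. NOT the researchers' implementation; features and data are invented.
from sklearn.linear_model import LogisticRegression

# Invented per-profile features: [posts_per_day, fraction_of_posts_with_links,
# followers_to_following_ratio, account_age_in_days]
X_train = [
    [120.0, 0.95, 0.01,   3],   # harvested by honeypot -> spammer
    [ 80.0, 0.90, 0.05,  10],   # spammer
    [  2.5, 0.10, 1.20, 900],   # legitimate user
    [  5.0, 0.20, 0.80, 400],   # legitimate user
]
y_train = [1, 1, 0, 0]          # 1 = spammer, 0 = legitimate

model = LogisticRegression().fit(X_train, y_train)

def looks_like_spammer(profile_features):
    """Score a previously unseen profile with the learned user model."""
    return bool(model.predict([profile_features])[0])

if __name__ == "__main__":
    print(looks_like_spammer([150.0, 0.99, 0.02, 1]))    # expected: True
    print(looks_like_spammer([  1.0, 0.05, 1.50, 700]))  # expected: False
```

In a real deployment the labelled spam profiles would come from the honeypot harvest rather than a hand-written list, and the feature set and model choice would be far richer; the point here is only the overall train-then-filter flow.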