Showing results for tags 'websites'.

Found 39 results

  1. WordPress team wants to forcibly auto-update older WordPress versions to newer releases. The developers behind the WordPress open-source content management system (CMS) are working on a plan to forcibly auto-update older versions of the CMS to more recent releases. The goal of this plan is to improve the security of the WordPress ecosystem, and the internet as a whole, since WordPress installations account for more than 34% of all internet websites. Officially supported versions include only the last six WordPress major releases, which currently are all the versions between v4.7 and v5.2. The plan is to slowly auto-update old WordPress sites, starting with v3.7, to the current minimum supported version, which is the v4.7 release. This will be done in multiple stages, as follows: 2% of all WP 3.7 sites will be auto-updated to WP 3.8. After a week, another 18% will be auto-updated to WP 3.8. After two weeks, the remaining 80% of WP 3.7 sites will be auto-updated to WP 3.8. The same steps are then repeated, migrating sites from WP 3.8 to WP 3.9, WP 3.9 to WP 4.0, and so on. The WordPress team said it plans to monitor this tiered forced auto-update process for errors and site breakage. If something goes massively wrong, the auto-update can be stopped altogether. If only a few individual sites break, then those sites will be rolled back to their previous versions and the owners will be notified via email. "The email should be a strongly-worded warning, letting them know that their site could not be upgraded to a secure version, and that they should manually update immediately. If they don't update, it's almost guaranteed that their site will be hacked eventually," said Ian Dunn, a member of the WordPress dev team. A first auto-update plan would have wreaked havoc on the internet. The tiered approach looks like a sensible solution, but an earlier proposal had the WordPress team forcibly update all old WordPress sites to version 4.7 at once.
This idea was quickly scrapped after an avalanche of negative feedback from WordPress site owners, who warned that millions of sites would have gone down with WSOD (white screen of death) errors caused by incompatibilities between themes, plugins, and the newer WordPress core version. The tiered forced auto-update is the result of that feedback, and one that takes possible site breakage into account. Furthermore, the WordPress team plans to allow site owners to opt out of this forced update process. The WordPress team plans to send emails to website administrators and show a stern warning in websites' dashboards before starting the auto-update process. These warnings will also include opt-out instructions, and will be shown/sent at least six weeks before a site is forcibly auto-updated. "They'll be warned about the security implications of opting-out," Dunn said. More than 3% of the internet runs outdated WordPress sites. The finer details of the auto-update process have not been finalized yet, but a source has told ZDNet that the WordPress security team hopes to auto-update all old sites within a year. Versions prior to v3.7 will not be auto-updated because v3.7 is the version in which the auto-update mechanism was introduced in the CMS. These older versions only support manual updates and can't be auto-updated. Versions prior to v3.7 account for under 1% of all WordPress installations, though, so this won't be a big issue. WordPress sites running versions from v3.7 to v4.7 account for 11.7% of all WordPress sites, which puts the total roughly in the tens of millions. That's about 3% of all internet sites currently running extremely old WordPress versions. WordPress 3.7 was released on October 23, 2013, while the current minimum "safe" version, v4.7, was released in December 2016. It was foreshadowed last year. While the plan to go with forced updates has shocked some members of the webdev community, it has not surprised ZDNet.
We knew it was coming because the WordPress security team hinted at it last year. In a talk at the DerbyCon 2018 security conference, WordPress Security Team lead Aaron Campbell said his team was working on "wiping older versions from existence on the internet." This is what he meant. The reason behind the WordPress dev team's desire to forcibly update all older CMS versions is manpower. For the past six years, WordPress developers have been backporting every single security patch to all versions going back to WordPress 3.7. While this was doable in the beginning, as the WordPress CMS moved forward it took up more and more time, because WordPress devs had to convert newer PHP code into code that's compatible with the older WordPress codebase. "That sucks for us as a security team," Campbell said about this process last year at DerbyCon. "But it's absolutely the best thing for our users. And because that's where we set the measure of success, that's what we do." By moving all users to WordPress 4.7 (and then 4.8, 4.9, etc.), developers are making their own lives easier while also keeping the internet as a whole more secure. WordPress is currently the most targeted CMS, mainly due to its large adoption and huge attack surface. Reducing the attack surface is the easiest way to combat malware botnets that take over WordPress sites and use them to host malware, push SEO spam, or launch DDoS attacks. Source
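The tiered rollout the article describes (2% of sites first, another 18% a week later, then the remaining 80%, repeated for each single-version hop from 3.7 up to 4.7) can be sketched as below. This is only an illustration of the arithmetic; the function names and hard-coded version list are mine, not WordPress's actual implementation.

```python
def rollout_batches(total_sites: int) -> list[int]:
    """Split a site population into the 2% / 18% / 80% waves."""
    first = total_sites * 2 // 100
    second = total_sites * 18 // 100
    third = total_sites - first - second  # remainder, roughly 80%
    return [first, second, third]

def upgrade_path(start: str, end: str) -> list[tuple[str, str]]:
    """List the single-step version hops, e.g. 3.7 -> 3.8 -> ... -> 4.7."""
    versions = ["3.7", "3.8", "3.9", "4.0", "4.1", "4.2",
                "4.3", "4.4", "4.5", "4.6", "4.7"]
    i, j = versions.index(start), versions.index(end)
    return list(zip(versions[i:j], versions[i + 1:j + 1]))
```

For example, `rollout_batches(1000)` yields the waves `[20, 180, 800]`, and `upgrade_path("3.7", "4.0")` lists the three hops a WP 3.7 site would go through on its way to 4.0.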
  2. Cloudflare has gone down around the world and vast numbers of websites have gone down with it. The company provides cloud computing services to millions of people, meaning that a lot of websites are not accessible. One of these is Down Detector, which tracks outages, meaning people can’t even see if the website they want to visit is working or not. The chat service Discord has also stopped working, as well as a number of other prominent pages. Matthew Prince, Cloudflare CEO, tweeted: ‘Aware of major @Cloudflare issues impacting us network wide. Team is working on getting to the bottom of what’s going on. Will continue to update.’ One person tweeted: ‘Cloudflare, possibly the largest internet/networking company in the world doesn’t have any kind of automated visibility on downtimes. Does that not concern anyone?’ ‘Half their network is down and their status page says “all systems OK!”’ they claimed. The outage appears to have started in the past hour. Unable to check Down Detector for updates, people took to Twitter in a quest for answers. ‘Sure is great that when Cloudflare goes down it takes half the Internet with it,’ one person wrote. Another roared: ‘Typical illustration of how bad the internet is centralized nowadays: Cloudflare is down, most websites & services being disrupted by this.’ The outage caused problems with the cryptocurrency website Coindesk, which tweeted: ‘Due to a Cloudflare outage, we’re getting bad data from our providers, which is showing incorrect crypto prices. Calm down everyone, Bitcoin is not $26.’ The outage appears to have been resolved now. Prince added: ‘Appear to have mitigated the issue causing the outage. Traffic restored. Working now to restore all services globally. More details to come as we have them.’ Source
  3. Criminals are using TLS certificates to convince users that fraudulent sites are worthy of their trust. One of the most common mechanisms used to secure web browser sessions, and to assure consumers that their transactions are secure, is also being used by criminals looking to gain victims' trust in phishing campaigns. The FBI has issued a public service announcement defining the problem and urging individuals to go beyond simply trusting any "https" URL. Browser publishers and website owners have waged successful campaigns to convince consumers to look for lock icons and the "https:" prefix as indicators that a website is encrypted and, therefore, secure. The problem, according to the FBI and security experts, is that many individuals incorrectly assume that an encrypted site is secure from every sort of security issue. Craig Young, computer security researcher for Tripwire’s VERT (Vulnerability and Exposure Research Team), recognizes the conflict between wanting consumers to feel secure and guarding against dangerous over-confidence. "Over the years, there has been a battle of words around how to communicate online security. Website security can be discussed at a number of levels with greatly different implications," he says. "On its own, however, the padlock does not actually confirm that the user is actually connected with a server from the business they expect," Young explains. "Unfortunately, there is still no solid solution for empowering the general public to discern phishing or scam sites with 100% effectiveness." In the FBI's PSA, the bureau points out that criminals are increasingly incorporating website certificates in phishing email messages impersonating known companies and individuals. The trustworthy-looking URLs take the victims to pages that seek sensitive and personal information.
"This isn’t new; cyber criminals have been orchestrating these kinds of phishing campaigns for several years," says Kevin Bocek, vice president of security strategy and threat intelligence at Venafi. He explains, "In 2017, security researchers uncovered over 15,000 certificates containing the word 'PayPal' that were being used in attacks. Since then it’s become clear that bad actors have an entire supply chain in place on the dark web to get trustworthy TLS certificates to use in all kinds of malicious attacks." Bocek says that researchers have found definitive evidence of TLS certificates for sale on the dark web, with prices for highly trustworthy certificates reaching more than a thousand dollars. He sees greater visibility and transparency as key assets in fighting the proliferation of these "trustworthy" certificates used in fraudulent ways. Other technologies may eventually provide additional weapons against the criminals. Young says, "In the long run, the best available solution to this problem is probably the use of newer standards like WebAuthN to prevent naïve users from inadvertently divulging site credentials to a phisher." The FBI's PSA doesn't recommend new technology, instead suggesting behavioral defenses against the phishing attacks. The Bureau recommends questioning the intent of email messages, confirming the authenticity of messages before divulging sensitive information, looking for misspellings or domain inconsistencies, and tempering the overall trust in a site simply because it displays a green lock icon. Source
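One way to see why the padlock alone proves nothing: a certificate can be perfectly valid for a lookalike hostname that merely embeds a brand name, the pattern behind the 15,000 'PayPal' certificates mentioned above. The sketch below is a naive heuristic illustrating that pattern only; the brand list is invented for the example and this is nowhere near a production phishing detector.

```python
# Hypothetical brand -> legitimate-domain table (illustrative only).
KNOWN_BRANDS = {"paypal": "paypal.com", "apple": "apple.com"}

def suspicious_brand_domain(hostname: str) -> bool:
    """Flag hostnames that contain a brand name but are not the brand's
    real domain or one of its subdomains."""
    host = hostname.lower().rstrip(".")
    for brand, real in KNOWN_BRANDS.items():
        if brand in host and not (host == real or host.endswith("." + real)):
            return True  # e.g. paypal.com.secure-login.example
    return False
```

A TLS certificate for `paypal.com.secure-login.example` would show a perfectly green padlock, yet the heuristic flags it, while `www.paypal.com` passes.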
  4. A new extortion scam campaign is underway that targets website owners, stating that if they do not make a payment, the attacker will ruin their site's reputation and get them blacklisted for spam. We all know, or should know, about the sextortion emails people are receiving, where the sender states they have hacked the recipient's computer and taped them doing things while on adult sites. Since then, further extortion scams were created that pretend to be from the CIA, bomb threats, and even from hitmen asking you to pay them to call off their hit. In this new variant, scammers are utilizing a website's contact form to send messages to site owners with a subject of "Abuse and lifetime blocking of the site - example.com. My requirements". The demands then state the sender will destroy the reputation of the site if a 0.3 bitcoin (approximately $2,400) payment is not made. If a payment is not made, the extortionist states that they will send millions of emails from your domain, leave nasty reviews about the recipient's site, and submit nasty messages to other people's contact forms pretending to be from your domain, all to ruin the site's reputation. The extortion email says: Hey. Soon your hosting account and your domain xxx.nl will be blocked forever, and you will receive tens of thousands of negative feedback from angry people.
Here is a list of what you get if you don’t follow my requirements:
+ abuse spamhouse for aggressive web spam
+ tens of thousands of negative reviews about you and your website from angry people for aggressive web and email spam
+ lifetime blocking of your hosting account for aggressive web and email spam
+ lifetime blocking of your domain for aggressive web and email spam
+ Thousands of angry complaints from angry people will come to your mail and messengers for sending you a lot of spam
+ complete destruction of your reputation and loss of clients forever
+ for a full recovery from the damage you need tens of thousands of dollars
Do you want this? If you do not want the above problems, then before June 1, 2019, you need to send me 0.3 BTC to my Bitcoin wallet: 19ckouUP2E22aJR5BPFdf7jP2oNXR3bezL
How do I do all this to get this result:
1. I will send 30 messages to 13 000 000 sites with contact forms with offensive messages with the address of your site, that is, in this situation, you and the spammer and insult people. And everyone will not care that it is not you.
2. I’ll send 300 messages to 9,000,000 email addresses and very intrusive advertisements for making money and offer a free iPhone with your website address xxx.nl and your contact details. And then send out abusive messages with the address of your site.
3. I will do aggressive spam on blogs, forums and other sites (in my database there are 35 978 370 sites and 315900 sites from which you will definitely get a huge amount of abuse) of your site xxx.nl. After such spam, the spamhouse will turn its attention on you and after several abuses your host will be forced to block your account for life. Your domain registrar will also block your domain permanently.
Receiving an email like this is scary, especially when someone threatens your website, which may be your livelihood.
With that said, it is important to understand that this attacker is sending these emails to many sites, that it is just a scam, and that they are not going to take the effort to ruin your site's reputation. If you receive one of these emails, simply mark it as spam or delete it. Source
  5. Over 80 government websites are down after TLS certificates expired and there's nobody on hand to renew them. More than 80 TLS certificates used by US government websites have expired so far without being renewed, leaving some websites inaccessible to the public. NASA, the US Department of Justice, and the Court of Appeals are just some of the US government agencies currently impacted, according to Netcraft. The blame falls on the current US federal government shutdown, caused by US President Donald Trump's refusal to sign any 2019 government budget bill that doesn't contain funding for the Mexico border wall he promised during his election campaign. This has resulted in hundreds of thousands of government workers being furloughed across all government agencies, including staff handling IT support and cybersecurity. As a result, government websites are dropping like flies, with no one on hand to renew TLS certificates. Websites with expired certificates whose admins followed proper procedures and implemented correctly-functioning HSTS (HTTP Strict Transport Security) policies are down for good, and users can't access these portals, not even to browse for basic information. Government websites with expired TLS certificates that didn't implement HSTS show an HTTPS error in users' browsers, but this error can be bypassed to access the site via HTTP. Nevertheless, visitors are warned not to log in or perform any sensitive operations on these sites, as traffic and authentication credentials aren't encrypted and could be intercepted by threat actors. Visiting and browsing content is fine, but users should also be aware that these websites are not being actively managed and there won't be employees on hand to process requests or update sites with the latest correct information. The current government shutdown has been a disaster on the cybersecurity front so far.
Experts from multiple cyber-security firms have warned that this would be the perfect time for hostile countries to carry out cyber-attacks against the US government, as agencies are understaffed and IT infrastructure is left largely unattended. According to Axios, the Department of Homeland Security's newly created Cybersecurity and Infrastructure Security Agency (CISA) has had 43 percent of its staff, roughly 1,500 employees, sent home. The National Institute of Standards and Technology, which puts together and manages many security standards, has kept only 49 of its normal 3,000 employees. But besides the losses in current personnel, government agencies have also missed an important opportunity to recruit new cyber-security talent this winter, according to CyberScoop. No representatives for the FTC, NIST, the State Department, or CISA were present at booths at an important cyber-related student recruiting event held in Washington this year. In the end, nothing good will come out of this shutdown. Whether it's a cyber-attack that goes undetected or cyber-security personnel leaving for the private sector, the ripple effects of this shutdown will haunt agencies for months or years to come. Source
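The expired-certificate problem described above is straightforward to monitor for with the Python standard library. A minimal sketch, assuming the `notAfter` string format that `ssl.getpeercert()` returns (e.g. "Jan 12 08:00:00 2019 GMT"); the hostnames you would feed it are up to you:

```python
import socket
import ssl
from datetime import datetime, timezone

NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"  # format used by ssl.getpeercert()

def days_until_expiry(not_after, now=None):
    """Days left on a certificate given its 'notAfter' string.
    Negative means the certificate has already expired."""
    expires = datetime.strptime(not_after, NOT_AFTER_FMT).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def cert_not_after(host, port=443):
    """Fetch a live certificate's notAfter field (requires network).
    Default verification stays on, so an already-expired cert raises
    ssl.SSLCertVerificationError, which is itself the signal you want."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

Running `days_until_expiry(cert_not_after("example.gov"))` periodically and alerting when the result drops below, say, 14 would have flagged every one of these sites weeks before they went dark.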
  6. The internet is an amazing place where you can find more than 1 billion websites. Along with some fantastic sites, there are some weird ones too. It's impossible for a person to visit every website, so we have gathered some of the strangest websites on the internet. Some of them are funny, some are really boring, and for a few you simply can't say why they exist. We haven't included any adult sites here, so you can click on all the links without hesitation. Enjoy the list!
    1. Iloveyoulikeafatladylovesapples: Feel the hunger of the fat lady until you let her eat enough apples. The website is completely useless, but you can still enjoy the graphics and background music.
    2. Thenicestplaceontheinter.net: A really sweet website that offers free hugs. Go get one.
    3. SciencevsMagic.net/Tes: You can mix the words amazing and weird to describe this one. It is also very hard on the eyes.
    4. Michaeljfoxnews: Feel the earthquake on your computer.
    5. Pointerpointer: I don't know where they found these pictures, but this is how you get to a specific point.
    6. Heeeeeeeey: Just click the link and get the heeey hooo party feeling.
    7. wwwdotcom: A serious tip for you.
    8. Rainymood: Rain makes everything better. So just sit back and enjoy the sound effects to lighten your mood.
    9. Isitchristmas: The name says it all. Maybe the website was designed for people suffering from short-term memory loss.
    10. Cat-bounce: And that's how humans play with the emotions of cats.
    11. 111111111111111111111111111111111111111111111111111111111111: Believe me, I have no idea what the exact purpose of this website is. But it seems the owner is not really a fan of Arnold Schwarzenegger.
    12. Heyyeyaaeyaaaeyaeyaa: Catchy music with special cartoon characters for our special readers.
    13. Thisman: This is the height of weirdness! The website says that hundreds of people dream about this face. No, I don't.
    14. Breakglasstosoundalarm: The thing you've wanted to do once in your life is here.
    15. Internetlivestats: I don't think this is live data, but you will get an idea of a few internet stats.
    16. Simonpanrucker: No words to explain this useless thing. Kindly decide for yourself how weird it is.
    17. Ilooklikebarackobama: You might want to reply to this website, "No you don't, not even a bit".
    18. Corgiorgy: The cute dog army.
    19. Haneke: If you like complicated things and pay close attention to details, you won't regret visiting this website.
    20. Fearthegaychicken: The question is what makes you think this chicken is gay. Is it the background color or the sound?
    21. Koalastothemax: Amazing creativity and fun with pixels.
    22. Procatinator: Cats' popularity is increasing day by day, and somehow this website is part of the reason.
    23. Youfellasleepwatchingadvd: If your mom doesn't allow you to watch TV, you could spend some time here.
    24. Essaytyper: This is the place where you become a professional typist in no time.
    25. Feedthehead: My advice is, don't just feed the head, play with the whole face.
    26. Nooooooooooooooo: If your boss gives you extra workload, you can reply with this link.
    27. Zoomquilt: The weirdness tends to infinity. Even a telescope can't see this far.
    28. Staggeringbeauty: Just shake the mouse and see the snake's reaction.
    29. Anasomnia: This is how dreams become nightmares.
    30. Eelslap: Slap him as many times as you want. He won't mind.
    Source
  7. TV and Sport events:
    http://www.rojadirecta.me/
    http://88.80.11.29/
    http://www.streamhunter.eu
    http://zonytvcom.info/
    http://www.livestation.com/ (News Channels)
    https://www.youtube.com/live/all
    http://www.justin.tv/
    http://www.ustream.tv
    http://aflam4you.tv/index.html (Arabic Channels + beIN Sports channels)
    http://www.kakibara.com/
    http://www.stream2watch.me/
    http://www.hahasport.com/
    http://www.firstrow1.eu/
    http://tvtoss.com/
    Movies:
    http://www.movie4k.to/
    Movie Subtitles:
    http://subscene.com/
    Football highlights:
    http://footyroom.com/
    Updated: Download Music:
    http://beemp3.com
    http://mp3lx.com/
    http://mp3skull.com/
    http://www.mp3toss.com
    Updated 2: Watch TV on Android:
    1. Download an IPTV app from the Play Store.
    2. Download TV playlists and add them to the app. Here is a good website for the playlists.
    NB: You can try another app called Kodi; it's available for Windows too, but I haven't tried it yet.
    Updated 3:
    http://www.streamgaroo.com/
  8. Open .git directories are a bigger cybersecurity problem than many might imagine, at least according to a Czech security researcher who discovered almost 400,000 web pages with an open .git directory, possibly exposing a wide variety of data. Vladimír Smitka began his .git directory odyssey in July when he began looking at Czech websites to find how many were improperly configured and allowed access to their .git folders, the directory that holds a site's file version history. Open .git directories are a particularly dangerous issue, he said, because they can contain a great deal of sensitive information. “Information about the website’s structure, and sometimes you can get very sensitive data such as database passwords, API keys, development IDE settings, and so on. However, this data shouldn’t be stored in the repository, but in previous scans of various security issues, I have found many developers that do not follow these best practices,” Smitka wrote. Smitka queried 230 million websites to discover the 390,000 allowing access to their .git directories. The vast majority of the websites with open directories had a .com TLD, with .net, .de, .org and .uk comprising most of the others. What tends to happen is developers leave the .git folder in a publicly accessible portion of their site, and when they go to verify whether or not the folder is protected, many are fooled when they use <web-site>/.git/ and receive an Error 403 message. Smitka noted that this might make it appear as if the folder is inaccessible, but in fact the error message is a false positive. “Actually, the 403 error is caused by the missing index.html or index.php and disabled autoindex functionality. However, access to the files is still possible,” he said, adding that the files can possibly even be viewable on Google. Instead he recommends using <web-site>/.git/HEAD to ensure the folder is secure.
During his scanning process he was able to find 290,000 email addresses in the directories, so he set about trying to warn as many people as possible about their websites' vulnerabilities. He boiled the initial list down to about 90,000 addresses by eliminating machine addresses and those associated with multiple domains. In the end, 18,000 were kicked back as undeliverable. “After sending the emails, I exchanged about 300 additional messages with affected parties to clarify the issue. I have received almost 2,000 thank-you emails, 30 false positives, 2 scammer/spammer accusations, and 1 threat to call the Canadian police,” Smitka said. The emails contained a link to a page Smitka created that explained the problem and contained a mitigation for it. Source
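The check Smitka recommends, requesting /.git/HEAD instead of /.git/ (which can return a misleading 403), can be sketched as below. This is an illustrative self-audit for your own sites under the stated assumption about what a HEAD file looks like; it is not Smitka's actual scanner.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def looks_like_git_head(body: str) -> bool:
    """A real .git/HEAD file is either a 'ref: refs/...' line or a bare
    40-character hex commit hash (a detached HEAD)."""
    body = body.strip()
    if body.startswith("ref: refs/"):
        return True
    return len(body) == 40 and all(c in "0123456789abcdef" for c in body)

def git_head_exposed(base_url: str) -> bool:
    """Fetch <site>/.git/HEAD and inspect it (requires network access)."""
    url = base_url.rstrip("/") + "/.git/HEAD"
    try:
        req = Request(url, headers={"User-Agent": "git-exposure-check"})
        with urlopen(req, timeout=5) as resp:
            body = resp.read(256).decode("utf-8", "replace")
    except (HTTPError, URLError):
        return False  # a 403/404 on HEAD itself means it isn't readable
    return looks_like_git_head(body)
```

Note the logic mirrors the article's point: a 403 on /.git/ proves nothing, but a readable /.git/HEAD containing `ref: refs/heads/master` proves the repository is exposed.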
  9. Despite the fact that the Drupal exploit was reported, and patched, in March 2018, some 115,000 websites are still vulnerable. An exploit found in the popular content management system (CMS) Drupal that makes it trivially easy for attackers to execute arbitrary code is still causing massive amounts of trouble three months after being discovered. As reported by researchers from Malwarebytes Labs, the attack known as Drupalgeddon 2 has infected over 900 websites with malware, primarily in the form of cryptominers that max out visitor CPUs in order to mine cryptocurrency. While many infected websites appear to be simply test domains set up and abandoned on Amazon Web Services, many legitimate sites are infected, including high-profile pages operated by the Arkansas state government and the University of Southern California. A full list of infected websites can be found here, and anyone running an outdated version of Drupal should check to be sure they aren't listed. Just because your website isn't listed doesn't mean its administrators are off the hook: Security researcher Troy Mursch said that the number of vulnerable websites is greater than 115,000, leaving plenty of internet real estate left to infect. The anatomy of a Drupal disaster. It makes sense that a remote code execution vulnerability would go unresolved for so long on a Drupal website, at least from the perspective of Jérôme Segura of Malwarebytes Labs. "Updating or upgrading Drupal may have side effects, such as broken templates or functionality, which is why you need to make a full back up and test the changes in the staging environment before moving to production," Segura said in a Malwarebytes Labs blog post. The frustration and extra work that come with a CMS upgrade are well known to anyone who works with one, so it makes sense that updates would be avoided until absolutely necessary. Unfortunately for those still running unpatched Drupal versions, this is one of those instances.
Outdated versions of Drupal seem commonplace; Malwarebytes Labs even reported that 30% of those infected by Drupalgeddon 2 were running some version of Drupal 7.3, which was last updated in 2015. As reported by TechRepublic when Drupalgeddon 2 was first revealed in March 2018, "The vulnerability relates to a conflict between how PHP handles arrays in parameters, and Drupal's use of the hash (#) at the beginning of array keys to signify special keys that typically result in further computation, leading to the ability to inject code arbitrarily." An attacker has no need to authenticate with Drupal to perform the exploit; they just have to visit a page with a maliciously crafted URL. This isn't the first time that a widespread exploit has been successful due to the failure of IT to install needed security updates: perhaps the most well-known incident with similar causes was the GoldenEye/Petya outbreak in 2017. That's just a single example, and it isn't the only one. Ransomware proliferates largely due to unpatched systems, and the US government even released a report saying that botnets are successful in part because they exploit known vulnerabilities. There's no excuse for this kind of attack: the vulnerability is known and its patch is available. Yes, installing it might be a headache and necessitate more work, but as has been stated before, taking the effort to patch now will save you from having to recover later. You can get the latest versions of Drupal here. The big takeaways for tech leaders: A three-month-old Drupal exploit has spread to over 900 websites, infecting them with cryptominers and other malware. A patch for the vulnerability has been available since March 2018. Drupal administrators need to update to the latest version now to avoid becoming a victim of this "trivially easy" exploit. Source
  10. Batu69

    Movie Downloading Websites

When it comes to entertainment, nothing can beat the fun provided by movies. There are millions of movies in different categories that you can choose from as per your interest. From sci-fi to comedy, from action to suspense, from horror to romance, everything is out there. You just need to find the right path to watch these movies. If your life is so busy and hectic that you don't get enough time to go out to movie halls and theaters, you can still have the latest movies on your device to watch whenever you want. Today, I am sharing the best free Hollywood movie downloading websites where you can download the latest and old movies in high definition. 1. My Download Tube: My Download Tube is a free movie downloading website that provides the latest Hollywood and Bollywood movies and lets you download them for free. If you don't want to download a movie but want to watch it online, My Download Tube grants that wish too and lets you watch without any registration or sign-up. My Download Tube also provides a section for games where you can browse games and download them for free. Or you can simply type a movie or game name in the search box to find exactly what you intend to watch. 2. YouTube Movies: YouTube Movies is one of the sites where you can find any video, any episode of your favorite TV series, movies, songs and a lot more. You can use its search box to find the link to download a full movie. All the movies provided here are good quality and full length. You can download YouTube movies by installing Internet Download Manager, which will automatically prompt you to download movies. 3. Gingle: Gingle is another amazing online portal to download the latest movies; and not just movies, you can also search for music, listen to streaming online radio stations, play online games, download wallpapers and much more.
If you are looking for anything specific, just request it on Gingle and the online portal will be delighted to add it. Gingle doesn't ask you for any registration or to create an account. You can easily find the favorite stuff you want to watch. 15 Best Free Movie Downloading Websites Of 2016
  11. LeeSmithG

    [Emoticons] Help.

Has anyone got a text document with the emoticons so I can copy and paste? I have searched and found nothing. You know :+) is :+d is :+( is So much appreciated.
  12. 2018 deadline to stop individuals from accessing the global web. Tightening controls come amid Xi's goal of "cyber-sovereignty". China’s government has told telecommunications carriers to block individuals’ access to virtual private networks by Feb. 1, people familiar with the matter said, thereby shutting a major window to the global internet. Beijing has ordered state-run telecommunications firms, which include China Mobile, China Unicom and China Telecom, to bar people from using VPNs, services that skirt censorship restrictions by routing web traffic abroad, the people said, asking not to be identified talking about private government directives. The clampdown will shutter one of the main ways in which people, both local and foreign, still manage to access the global, unfiltered web on a daily basis. China has one of the world’s most restrictive internet regimes, tightly policed by a coterie of government regulators intent on suppressing dissent to preserve social stability. In keeping with President Xi Jinping’s “cyber sovereignty” campaign, the government now appears to be cracking down on loopholes around the Great Firewall, a system that blocks information sources from Twitter and Facebook to news websites such as the New York Times and others. While VPNs are widely used by businesses and individuals to view banned websites, the technology operates in a legal gray area. The Ministry of Industry and Information Technology pledged in January to step up enforcement against unauthorized VPNs, and warned corporations to confine such services to internal use. At least one popular network operator said it had run afoul of the authorities: GreenVPN notified users it would halt service from July 1 after “receiving a notice from regulatory departments.” It didn’t elaborate on the notice.
It’s unclear how the new directive may affect multinationals operating within the country, which already have to contend with a Cybersecurity Law that imposes stringent requirements on the transfer of data and may give Beijing unprecedented access to their technology. Companies operating on Chinese soil will be able to employ leased lines to access the international web but must register their usage of such services for the record, the people familiar with the matter said. Shares in U.S.-listed 21Vianet Group Inc., a provider of networking and datacenter services to Chinese clients, slid as much as 4.1 percent before ending 2.4 percent lower. Westone Information Industry Inc., which helps to set up VPNs and secure networks, fell as much as 1.5 percent Tuesday. “This seems to impact individuals” most immediately, said Jake Parker, Beijing-based vice president of the US-China Business Council. “VPNs are incredibly important for companies trying to access global services outside of China,” he said. “In the past, any effort to cut off internal corporate VPNs has been enough to make a company think about closing or reducing operations in China. It’s that big a deal,” he added. China Mobile Ltd., the Hong Kong-listed arm of the country’s biggest carrier, declined to comment. Representatives for publicly traded China Telecom Corp. and China Unicom (Hong Kong) Ltd. couldn’t immediately comment. The ministry didn’t immediately reply to an email seeking comment. — With assistance by Steven Yang, and Christina Larson Article source
  13. A majority of the top 1 million websites earn an “F” letter grade when it comes to adopting defensive security technologies that protect visitors from XSS vulnerabilities, man-in-the-middle attacks, and cookie hijacking. The failing grades come from a comprehensive analysis published this week by the Mozilla Foundation using its Mozilla Observatory tool. According to a scan of the Alexa-ranked top 1 million websites, a paltry 0.013 percent of sites received an “A+” grade, compared to 93.45 percent earning an “F”. The Observatory tool, launched last year, tests websites and grades their defensive posture based on 13 security-related features, ranging from the use of encryption (HTTPS) and exposure to XSS attacks based on the use of X-XSS-Protection (XXSSP), to the use of Public Key Pinning, which prevents a site’s use of fraudulent certificates. The silver lining to the bad grades is that in the year since the Observatory tool began grading sites, security has improved. Compared to scans conducted between April 2016 and June 2017, the percentage of sites earning a “B” has jumped 142 percent and those earning a “C” have increased 90 percent. “It’s very hard if you’re just someone running a website to make it secure,” said April King, staff security engineer at Mozilla and developer of the Observatory tool. “There are so many different security standards. The documentation for those standards are scattered all over the place. There are not a lot of single resources that are telling you straight-up what you need to do.” King said she is encouraged at the pace of improvement when it comes to specific defensive tools. For example, the percentage of sites that support HTTPS has grown 36 percent in the past year. “The number might seem small, but it represents over 119,000 top websites,” she told Threatpost. 
Other security wins include a 125 percent increase in the number of sites that have adopted Content Security Policy (CSP), a browser feature that fends off Cross Site Scripting (XSS) and data injection attacks. Another win has been a 117 percent increase in adoption of Subresource Integrity (SRI), a verification feature that ensures when a browser fetches resources from third parties, such as a content delivery network, the content is not manipulated in transit. However, despite triple-digit growth in both CSP and SRI adoption, fewer than one percent of sites have adopted these security features. King concedes that achieving a secure website configuration, using all the available technologies developed in recent years by browser makers, is not easy. “I’m extremely optimistic. With tools that are free and easy to use, like Observatory, we can begin to see a common framework for building websites. This type of tool is pushing awareness back into the tool chain and making it very easy for people to implement,” King said. King likens Observatory to Qualys SSL Labs’ SSL Server Test, a free tool that analyzes the configuration of SSL web servers. Observatory goes way beyond checking a website’s TLS implementation and checks for 13 different web security mechanisms. The scoring system is based on a 0 to 100 point scheme. Scores don’t just check for the presence of any given technology, but the correct implementation as well. Observatory is a tough grader, King said, because it’s designed to be a teaching tool to help administrators across the industry “become aware of the myriad technologies that standard bodies and browser companies have designed and implemented to improve the safety of the internet’s citizens.” “The fact that so many new sites have started using these technologies recently is a strong sign that we are beginning to succeed in that mission,” she said. Article source
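Observatory's exact 13-point rubric isn't reproduced in the article, but its general approach (inspecting a response's headers for defensive mechanisms) can be sketched in a few lines. The header list below is an illustrative subset of the mechanisms named above, not Mozilla's actual scoring:

```python
# Illustrative sketch, NOT Mozilla Observatory's real rubric: check a set of
# HTTP response headers for a few of the defensive mechanisms the article names.
SECURITY_HEADERS = {
    "strict-transport-security": "HSTS: forces future visits over HTTPS",
    "content-security-policy": "CSP: mitigates XSS and data injection",
    "x-xss-protection": "XXSSP: legacy XSS filter hint",
    "x-content-type-options": "blocks MIME-type sniffing",
    "public-key-pins": "HPKP: rejects fraudulent certificates",
}

def grade_headers(headers):
    """Return (present, missing) lists for the defensive headers above."""
    lower = {k.lower() for k in headers}
    present = sorted(h for h in SECURITY_HEADERS if h in lower)
    missing = sorted(h for h in SECURITY_HEADERS if h not in lower)
    return present, missing

# Example response headers from a hypothetical site:
present, missing = grade_headers({
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=63072000",
    "Content-Security-Policy": "default-src 'self'",
})
```

A real grader, like Observatory, also validates each header's value; mere presence, as checked here, is only the first step.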
  14. We submit hundreds of blacklist review requests every day after cleaning our clients’ websites. Google’s Deceptive Content warning applies when Google detects dangerous code that attempts to trick users into revealing sensitive information. For the past couple of months we have noticed that the number of websites blacklisted with Deceptive Content warnings has increased for no apparent reason. The sites were clean, and there were no external resources loading on the website. Recently, we discovered a few cases where Google removed the Deceptive Content warning only after SSL was enabled. We conducted the following research in collaboration with Unmask Parasites. What is an SSL Certificate? Most websites use the familiar HTTP protocol. Those that install an SSL/TLS certificate can use HTTPS instead. SSL/TLS is a cryptographic protocol used to encrypt data while it travels across the internet between computers and servers. This includes downloads, uploads, submitting forms on web pages, and viewing website content. SSL doesn’t keep your website safe from hackers; rather, it protects your visitors’ data. To the average visitor, SSL is what’s behind the green padlock icon in the browser address bar. This icon signifies that communication is secure between the visitor and the web server, and any information sent or received is kept safe from prying eyes. Without SSL, an HTTP site can only transfer information “in the clear”. Therefore, bad actors can snoop on network traffic and steal sensitive user input such as passwords and credit card numbers. The problem is that many visitors don’t notice when SSL is missing on a website. Google Moves on HTTP/HTTPS We have seen Google pushing SSL as a best practice standard across the web. Not only are they rewarding sites that use HTTPS, it seems they are steadily cracking down on HTTP sites that should be using HTTPS. In 2014, Google confirmed HTTPS helps websites rank higher in their search engine results. 
In January 2017, they rolled out the Not Secure label in Chrome whenever a non-HTTPS website handled credit cards or passwords. Google also announced they would eventually apply the label to all HTTP pages in Chrome, and make the label more obvious: There has been a lot of talk about how to promote SSL and warn users when browsing HTTP sites. Studies show that users do not perceive the lack of a “secure” icon as a warning, but also become blind to warnings that occur too frequently. Our plan to label HTTP sites clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Source: Google Security Blog Perhaps the red triangle warning has not been as effective, and they could be working on even stronger labels through their SafeBrowsing diagnostics. Blocking Dangerous HTTP Input In a few recent cases, we had Google review a cleaned website twice over a couple of days, but the requests were denied. Once we enabled SSL, we asked again and they cleared it. Nothing else was changed. We dug further and uncovered a few more cases where this behavior had been replicated. Upon investigation, the websites contained login pages or password input fields that were not being delivered over HTTPS. This could mean that Google is expanding its definition of phishing and deception to include websites that cause users to enter sensitive information over HTTP. We don’t know what Google looks for exactly to make their determination, but it’s safe to assume they look for forms that take passwords by just looking for input type=”password” in the source code of the website when that specific page is not being served over HTTPS. Here’s an example from the Security Issues section of Google Search Console showing messages related to Harmful Content: We see that the WordPress admin area is blocked, as well as a password-protected page. Both URLs start with HTTP, indicating that SSL is missing. Both pages have some form of login requirement. 
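The guessed-at heuristic above can be made concrete. This is a speculative sketch of such a check using only Python's standard library; it is our guess at the kind of test a scanner might run, not Google's actual detection logic, and the URLs are examples:

```python
# Speculative sketch: flag a page that serves an <input type="password">
# field over plain HTTP. This is a guess at the heuristic, not Google's
# actual detection logic.
from html.parser import HTMLParser
from urllib.parse import urlparse

class PasswordFieldFinder(HTMLParser):
    """Sets .found when the page contains a password input."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type", "").lower() == "password":
            self.found = True

def insecure_login_page(url, html):
    """True if the page has a password field but is not served over HTTPS."""
    parser = PasswordFieldFinder()
    parser.feed(html)
    return parser.found and urlparse(url).scheme != "https"

# Example: a bare-bones WordPress-style login form.
page = '<form action="/wp-login.php"><input type="password" name="pwd"></form>'
```

The same form served over HTTPS would not be flagged, which matches the article's observation that enabling SSL cleared the warnings.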
Most of these sites were previously hacked, and these warnings remained after the cleanup had been completed. There were a few, however, where there was no previous compromise. In each case, enabling SSL did the trick. As the largest search engine in the world, Google has the power to reduce your traffic by 95% with their blacklist warnings. By blocking sites that should be using HTTPS, Google can protect its users and send a clear message to the webmaster. Domain Age a Factor There seems to be another similar factor among the affected websites. Most appear to be recently registered domains, and as such, they did not have time to build a reputation and authority with Google. This could be another factor that Google takes into account when assessing the danger level of a particular website. Some websites were not even a month old, had no malware, and were blacklisted until we enabled SSL. Google Ranking and Malware Detection One of the many factors involved in how Google rates a website is how long the site has been registered. Websites with WHOIS records dating back several years gain a certain level of authority. Google’s scanning engines also help limit our exposure to dangerous websites. Phishing attacks often use newly-registered domains until they are blacklisted. New sites need time to develop a reputation. An older website that never had any security incidents is less likely to have any false positive assessment, while a new website won’t have this trust. As soon as Google sees a public page asking for credentials that are not secured by HTTPS, they take a precautionary action against that domain. HTTP As a Blacklist Signal Google has been slowly cracking down on HTTP sites that transfer sensitive information and may be starting to label them as potential phishing sites when they have a poor reputation. While Google has not confirmed that SSL is a factor in reviewing blacklist warnings, it makes sense. 
Google can ultimately keep their users’ browsing experience as safe as possible, and educate webmasters effectively by blocking sites that don’t protect the transmission of passwords and credit card numbers. Password handling is a big security concern. Every day there are cases of mishandled passwords, so it’s understandable that Google is testing their power in changing the tides and keeping users safe. Conclusion Keeping the communication on your website secure is important if you transmit any sensitive user input. Enabling SSL on your website is a wise decision. Thankfully this has become an easier process in recent years, with many hosts encouraging and streamlining the adoption of SSL. Let’s Encrypt came out of beta over a year ago, and has grown to over 40 million active domains. If you have a relatively new website and want to ensure that Google does not blacklist you for accepting form data, be sure to get SSL enabled on your website. We offer a free Let’s Encrypt SSL certificate with all our firewall packages and are happy to help you get started. Article source
  15. When, in January 2017, Mozilla and Google made Firefox and Chrome flag HTTP login pages as insecure, the intent was to make phishing pages easier to recognize, as well as push more website owners towards deploying HTTPS. But while the latter aim was achieved, and the number of sites using HTTPS has increased noticeably, the move also had one unintended consequence: the number of phishing sites with HTTPS has increased, too. “While the majority of today’s phishing sites still use the unencrypted HTTP protocol, a threefold increase in HTTPS phishing sites over just a few months is quite significant,” noted Netcraft’s Paul Mutton. One explanation may be that fraudsters have begun setting up more phishing sites that use secure HTTPS connections. Another may be that they have simply continued compromising websites to set up the phishing pages, but as more legitimate sites began using HTTPS, more phishing pages ended up having HTTPS. Finally, it’s possible that fraudsters are intentionally compromising HTTPS sites so that their phishing login pages look more credible. Whatever the reason – and it might simply be a combination of them all – the change made some phishing attempts even more effective. And so the battle between attackers and defenders continues. Article source
  16. You already know that some applications offer portable or “soft” installations, but don’t you wish there was some place on the Internet where you could find any portable application? The good news is that, although few, such places do exist! Portable applications are incredibly useful. If you’re someone who is constantly carrying around a flash drive, you should always have a few of your favorite portable applications (or even a portable application suite) on it. I’ve found that portable applications are just as useful for a wide variety of uses — like when you’re setting up synchronized folders. I have a folder in my Dropbox dedicated to no-installation-required programs, and syncing it to any new desktop or laptop means that I immediately have several applications available at my fingertips. It’s rather helpful. Here’s where you can go to find these types of portable programs
  17. ALMOST 200,000 WEBSITES and connected systems remain vulnerable to the Heartbleed OpenSSL bug, more than two-and-a-half years after it was first uncovered. That's according to the Shodan Report 2017, based on scans conducted by the search engine that enables users to scour the internet for specific types of computers. The systems will be wide open to a range of exploits that have been around almost since the bug was first publicised. The US is far out in front with 42,032 systems still vulnerable, according to Shodan, followed by South Korea with 15,380, China with 14,116, Germany with 14,072 and France with 8,702. The UK has some 6,491 systems and servers vulnerable to Heartbleed connected to the internet. And the organisations hosting the most vulnerable systems include South Korea's SK Broadband and Amazon. Approximately 75,000 of the vulnerable connected systems are using expired SSL certificates and running ageing versions of Linux. Heartbleed is a security flaw in the open source OpenSSL cryptography library, widely used in implementations of the Transport Layer Security (TLS) protocol. The flaw was reported to the OpenSSL developers on 1 April 2014, publicly disclosed on 7 April 2014, with a fix released the same day. However, many organisations have been slow to patch their systems accordingly. Many may not even know that the software they're running uses the OpenSSL library. Other TLS implementations are not affected by the flaw. Indeed, just months after the flaw was discovered, Shodan found that 300,000 systems remained vulnerable. That was in June 2014. Shodan is a search engine that enables users to find specific types of devices connected to the internet using a variety of filters. Shodan collects data mostly on web servers (HTTP/HTTPS - port 80, 8080, 443, 8443), as well as FTP (port 21), SSH (port 22), Telnet (port 23), SNMP (port 161), SIP (port 5060), and real-time streaming protocol (RTSP, port 554). 
The latter can be used to access webcams and their video stream. It was launched in 2009 by computer programmer John Matherly who, in 2003, conceived the idea of a search engine that could search for devices linked to the Internet, as opposed to information. The name Shodan is a reference to a character from the System Shock computer games. Article source
  18. In a massive crackdown, police and law enforcement agencies across Europe have seized more than 4,500 website domains trading in counterfeit goods, often via social networks, officials said on Monday. The operation came as Europol, Europe's police agency, unveiled its newest campaign dubbed "Don't F***(AKE) Up" to stop scam websites selling fake brand names online. "The internet has become an essential channel for e-commerce. Its instant global reach and anonymity make it possible to sell nearly anything to anyone at any time," Europol said. "Counterfeiters know it and are increasingly exploiting the unlimited opportunities" the internet offers. But Europol warned that "despite these products looking like a bargain, they can pose serious risks to the health and safety of buyers." In the crackdown, agencies from 27 countries mostly in Europe but including from the US and Canada, joined forces to shut down over 4,500 websites. They were selling everything from "luxury goods, sportswear, spare parts, electronics, pharmaceuticals, toiletries and other fake products," Europol said in a statement, without saying how long the crackdown took. An annual operation run in collaboration with the US Immigration and Customs Enforcement and Homeland Security, there was "a significant increase in the number of seized domain names compared to last year," said Europol director Rob Wainwright. Spotting the fakes As part of the crackdown, Dutch anti-fraud police arrested 12 people across The Netherlands over the past two weeks as they searched homes and warehouses. Most of the raids were prompted by online sales of counterfeit goods on social networking sites such as Facebook and Instagram. "This is a relatively new phenomenon in the trade in counterfeit brand names," the Dutch Fiscal Information and Investigation Service (FIOD) said in a statement. 
More than 3,500 items of clothing and fake luxury goods were seized in Holland, including shoes, bags and perfumes purporting to be such brands as Nike, Adidas, and Kenzo, with a market value of tens of thousands of euros. Publishing a guide on how to spot fake websites and social media scams, Europol warned consumers had to be on their guard. "When shopping online, you are more likely to fall victim to counterfeiters," it said, as "without the physical product to look at and feel, it can be more difficult for you to spot the differences." It also warned that by using illicit websites online shoppers "are exposing your computer or mobile device to cyber-attacks like phishing or malware." Article source
  19. Website spreading Gatak-infected keygens (via Symantec)
Websites offering free keygens for various enterprise software applications are helping crooks spread the Gatak malware, which opens backdoors on infected computers and facilitates attacks on a company's internal network, or the theft of sensitive information. Gatak is a backdoor trojan that first appeared in 2012. Another name for this threat is Stegoloader, and its main distinctive feature is its ability to communicate with its C&C servers via steganography, which it relies on to stay hidden. Steganography is the technique of hiding data in plain sight. In the world of cyber-security, steganography is the practice of hiding malicious code, commands, or malware configuration data inside PNG or JPG images. The malware, in this case Gatak, connects to its online C&C server and requests new commands. Instead of the commands arriving in plain HTTP network requests, which all security software knows to be on the lookout for, the data is sent as an innocuous image, which looks like regular web traffic. The malware reads the image's hidden data and executes the command, all while the local antivirus thinks the user has downloaded an image off the Internet. 
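As an illustration of the general technique (not Gatak's actual encoding, which is not published at this level of detail), here is a minimal least-significant-bit scheme in Python, operating on a plain list of integers standing in for image channel values:

```python
# Minimal least-significant-bit steganography demo. Each byte of the secret
# is spread across the low bits of eight "pixel" channel values, so every
# value changes by at most 1 and the image looks visually unchanged.

def hide(pixels, secret):
    """Embed `secret` (bytes) into the low bits of `pixels` (ints 0-255)."""
    out = list(pixels)
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Recover `length` bytes from the low bits of `pixels`."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = list(range(64))            # stand-in for image channel values
stego = hide(cover, b"run:calc")   # hypothetical C&C command, 8 bytes
```

Because the carrier still parses as a valid image, a network monitor that only inspects content types sees nothing but ordinary image traffic, which is exactly the evasion described above.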
Keygens for enterprise software spreading Gatak
Security firm Symantec says it uncovered a malware distribution campaign that leverages a website offering free keygens for various applications such as:
SketchList3D - woodworking design software
Native Instruments Drumlab - sound engineering software
BobCAD-CAM - metalworking/manufacturing software
BarTender Enterprise Automation - label and barcode creation software
HDClone - hard disk cloning utility
Siemens SIMATIC STEP 7 - industrial automation software
CadSoft Eagle Professional - printed circuit board design software
PremiumSoft Navicat Premium - database administration software
Originlab Originpro - data analysis and graphing software
Manctl Skanect - 3D scanning software
Symantec System Recovery - backup and data recovery software
All of the above are specialized apps, deployed in enterprise environments. The group behind this campaign is specifically targeting users that use these applications at work, but without valid licenses, in the hopes of infecting valuable targets they could hack, steal data from, and possibly profit from on the underground. Keygens don't work, they just infect users with Gatak The keygens distributed via this website aren't even fully-working tools. They just produce a random string of characters, but their purpose is to trick the user into executing the keygen binary just once, enough to infect the victim. The hackers are picky about the companies they target: the security firm has seen second-stage attacks on only 62% of all infected computers. Attackers use Gatak to gather basic information about targets and, on those they deem valuable, deploy other malware at later stages. In some cases, the hackers also resort to lateral movement on the victim's network, with the attackers manually logging into the compromised PC. Attacks aren't sophisticated, and the hackers only take advantage of weak passwords inside the local network. 
Symantec says it didn't detect any zero-days or automated hacking tools employed when hackers attempted to infect other devices on the local network.
Gatak infections per industry vertical (via Symantec)
Telemetry data shows that 62% of all Gatak infections have been found on computers on enterprise networks. Most of these attacks have targeted the healthcare sector, but it doesn't appear that hackers specifically targeted this industry vertical, as other companies in other verticals were also hit. Attackers might have opted to focus more on healthcare institutions because these organizations usually store more in-depth user data they can steal, compared to the automotive industry, gambling, education, construction, or others. "In some cases, the attackers have infected computers with other malware, including various ransomware variants and the Shylock financial Trojan," Symantec notes in a report. "They may be used by the group when they believe their attack has been uncovered, in order to throw investigators off the scent." Article source
  20. SHA-1 is a hashing algorithm that has been used extensively since it was published in 1995; however, it is no longer considered secure. It was deemed vulnerable to attacks from well-funded adversaries back in 2005 and was replaced by SHA-2 and SHA-3, which are considerably more secure hashing functions. Many companies including Google, Mozilla, and Microsoft have already announced that they'll stop accepting SHA-1 TLS certificates by 2017. Now, Microsoft has detailed how numerous websites, users, and third-party applications will be affected once the company deprecates SHA-1 signed certificates starting February 4, 2017. Microsoft states that in an effort to further enhance security features on Edge and Internet Explorer 11, the two browsers will prevent sites using SHA-1 signed certificates from loading and will display an "invalid certificate" warning. While it isn't recommended, users will have the option to bypass the warning and access the potentially vulnerable website. The company has clarified that this will only impact websites with SHA-1 signed certificates that link to a Microsoft Trusted Root CA, while manually installed enterprise or self-signed SHA-1 certificates will remain unaffected. The Redmond giant states that developers who have installed the latest November 2016 Windows updates can test if their websites will be affected by the change. The detailed procedure can be viewed in the company's blog post here. Microsoft has clarified that third-party Windows applications utilizing the Windows cryptographic API set or older versions of Internet Explorer will not be affected by the changes. Similarly, the update will not prevent clients from using the SHA-1 certificate in client authentication. Regarding cross-signed certificates, Microsoft has explicitly confirmed that Windows will only check whether the thumbprint of the root certificate is in the Microsoft Trusted Root Certificate Program. 
The company has clarified that certificates "cross-signed with a Microsoft Trusted Root that chains to an enterprise/self-signed root" will not be affected by the changes next year. Source: Microsoft Article source
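For application code, moving between members of the SHA family is usually a one-line change; the hard part of the deprecation above is reissuing certificates, not rewriting software. Python's hashlib shows the algorithms side by side (the message is an arbitrary example):

```python
# SHA-1 vs its successors in Python's standard hashlib module.
import hashlib

msg = b"abc"
sha1   = hashlib.sha1(msg).hexdigest()      # 160-bit digest, deprecated
sha256 = hashlib.sha256(msg).hexdigest()    # SHA-2 family, current standard
sha3   = hashlib.sha3_256(msg).hexdigest()  # SHA-3, a different construction

# A 160-bit digest is 40 hex characters; the 256-bit digests are 64.
```

The shorter 160-bit output is part of why SHA-1 collisions became practical for well-funded attackers, which is the weakness driving the browser deadlines described above.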
  21. Over a third (35%) of the world’s websites are still using insecure SHA-1 certificates despite the major browser vendors saying they’ll no longer trust such sites from early next year, according to Venafi. The cybersecurity company analyzed data on over 11 million publicly visible IPv4 websites to find that many have failed to switch over to the more secure SHA-2 algorithm, despite the January deadline. With Microsoft, Mozilla and Google all claiming they won’t support SHA-1 sites, those still using the insecure certificates from the start of 2017 will find customers presented with browser warnings that the site is not to be trusted, which will force many elsewhere. In addition, browsers will not display the tell-tale green padlock on the address line for HTTPS transactions, while some might experience performance issues. There’s also a chance some sites will be completely blocked, said Venafi. SHA-2 was created in response to weaknesses in the first iteration – specifically collision attacks which allow cyber-criminals to forge certificates and perform man-in-the-middle attacks on TLS connections. However, migration to the new algorithm isn’t as simple as applying a patch, and with thousands of SHA-1 certificates in use across websites, servers, applications and databases, visibility is a challenge, warned Venafi vice-president of security strategy and threat intelligence, Kevin Bocek. “The deadline is long overdue: National Institute of Standards and Technology (NIST) has called for eliminating the use of SHA-1 because of known vulnerabilities since 2006,” he told Infosecurity. “Most organizations do not know exactly how many certificates they have or where they are being used, and even if they do, it is a time-consuming and disruptive process to update them all manually.” Bocek recommended organizations first work out where their SHA-1 certificates are and how they’re being used, before building a migration plan. 
“Here, you will need to work out where your priorities are, so that you can protect your crown jewels first – i.e. the sites and servers that hold sensitive data or process payments. This way the team can focus on migrating critical systems first to ensure they are better protected,” he explained. “The best way to do this is through automation. By automating discovery of digital certificates into a central repository companies can upgrade all certificates to SHA-2 at the click of a button, where possible. And importantly you can track and report on progress to your board, executive leadership, and auditors. This allows businesses to migrate without interrupting business services or upsetting customers.” Article source
  22. Fake Flash Player update sites have long been a favorite distribution method for adware and other unwanted programs. Today, a fake Flash update site was discovered by ExecuteMalware that is pushing the Locky ransomware. When someone visits the site they will be presented with a page that states that Flash Player is out of date and then automatically downloads an executable. If you look carefully at the URL in the browser's address bar you can see that the domain of fleshupdate.com does not seem to be spelled right.
Fake Flash Update Web Page
The executable automatically downloaded by this site is named FlashPlayer.exe and includes a flash player icon as seen below.
Flash Icon in Downloaded File
If you look at the properties of this file, though, things start to look strange.
Locky Installer Properties
Ultimately, if a user runs this program thinking that Flash will be updated they will be in for a big surprise. Instead of a flash player update, they will ultimately be shown a Locky ransom note when the ransomware has finished encrypting the victim's files.
Locky Ransom Note
The LockyDump information for the variant I tested is below. MalwareHunterTeam also saw a sample using an affiliate ID of 19, which as far as we know has not been previously seen.
Verbose: 0
The file is a PE EXE
affilID: 13
Seed: 9841
Delay: 30
Persist Svchost: 0
Persist Registry: 0
Ignore Russian Machines: 1
CallbackPath: /message.php
C2Servers: 85.143.212.23,185.82.217.29,107.181.174.34
RsaKeyID: 85D
RsaKeySizeBytes: 114
Key Alg: A400
Key: RSA1
Key Bits: 2048
Key Exponent: 10001
As you can see, it is not only attachments and exploit kits pushing ransomware. Everyone needs to be vigilant and careful when browsing the web. Furthermore, program updates should only be downloaded from their main product sites rather than 3rd party sites where you have no idea what you are installing. Article source
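The misspelled domain is the giveaway here, and it is the kind of look-alike that a simple edit-distance check can flag. A sketch of that idea (the brand list and one-edit threshold are made up for illustration, not how any particular product works):

```python
# Flag domains within one edit of a known brand name ("typosquats").
# The threshold and brand list below are illustrative assumptions.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(domain, brands, max_dist=1):
    """True if the domain's first label is close to, but not exactly, a brand."""
    name = domain.split(".")[0]
    return any(0 < edit_distance(name, b) <= max_dist for b in brands)
```

Against the article's example, "fleshupdate" sits exactly one substitution away from "flashupdate", which is precisely what makes it easy for a hurried visitor to miss.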
  23. Web giant tries to fill the protection gap created when malicious sites clean up their act just long enough to ditch the Safe Browsing warning. Google has added a new classification to its Safe Browsing initiative to better protect users from malicious websites trying to game the system. Google's Safe Browsing warns users when they are about to visit a website known to violate the web giant's policies on malware, unwanted software, phishing or social engineering. The warning appears until Google verifies that the site in question no longer poses a threat to users. But some sites are only cleaning up their act just long enough to shake the warning, and then returning to their harmful behavior. That gap in user protection led Google to create a new label to warn users of sites that engage in this pattern. "Starting today, Safe Browsing will begin to classify these types of sites as "Repeat Offenders," Google explained in a company blog post Tuesday. "Please note that websites that are hacked will not be classified as Repeat Offenders; only sites that purposefully post harmful content will be subject to the policy." Once classified as a "repeat offender," sites will not be allowed to request a review for 30 days. During that time, users will continue to see messages warning them of the risk involved in visiting the site. Article source
  24. Chrome is starting to flag more pages as insecure. Here are five things every webmaster should know about HTTPS. Google wants the connection between Chrome and your website to be more secure. And, if you're a webmaster, your upcoming deadline to increase security is January 2017. By that time, your site needs to serve pages with password or payment fields over an HTTPS connection. If you still serve those pages on an unencrypted connection—HTTP only, not HTTPS—Chrome will warn that the page is "Not secure." A quick visit to pages on your site will show you whether or not the site supports HTTPS. Open a page with Chrome and look at the URL bar. Click (or tap) on the lock (or info icon) to the left of the URL to view the connection security status. Then select "Details" for more info. A green lock and the "Your connection to this site is private" message indicates an HTTPS connection between Chrome and the page. The icon to the left of the web address of your website indicates whether or not the site supports a secure connection (HTTPS on left) or not (HTTP on right). In the long term, Google wants every page of your site to support HTTPS—not just the ones with payments or passwords. Google search already prefers to return results from pages with HTTPS over pages that lack a secure connection. To enable an HTTPS connection between your site and visitor browsers, you need to set up an SSL certificate for your website. Here are five things to know that may make the process easier.
1. Your web hosting provider might already serve your sites over a secured connection. For example, Automattic, which runs Wordpress.com, turned on SSL for their hosted customers in April of 2016. Customers didn't have to do anything at all—other than use Wordpress.com to host a site.
2. A few web hosting vendors make certificate setup free and easy. Other web hosting providers offer a secure connection as an option, for free. 
Squarespace and Dreamhost, for example, both let customers choose to enable secure sites. Configuration of certificates used to be much more difficult, but these vendors streamline the process to a few steps. Let's Encrypt, a project of the nonprofit Internet Security Research Group, provides the certificates for all three of the vendors just mentioned (Dreamhost, Squarespace, and WordPress.com). Many other vendors offer easy setup, too. Look at the community-maintained list of web hosting providers that support Let's Encrypt. More notably, Let's Encrypt certificate services are free. Yet, some web hosting vendors still charge significant fees for certificates. If the fee includes additional authentication or security services, it may provide value. (For most non-technical organizations, I suggest you choose—or switch to—a web hosting vendor that supports Let's Encrypt.) 3. If you're on shared hosting, you may need an upgrade. The certificates won't necessarily work in every hosting setup. In some cases, for example, a web hosting provider will only offer SSL with a dedicated server. That may mean a potential increase in hosting costs. In other cases, the certificate will work, but won't work with certain older browsers. For example, in the case of Dreamhost, you may choose to add a unique IP address to your hosting plan along with your Let's Encrypt certificate. Doing this allows the secure connection to work with certain versions of Internet Explorer on Windows XP, as well as some browsers on older Android devices (e.g., Android 2.4 and earlier). If you're on a shared hosting plan, you may need an upgrade to enable SSL or to support a secure connection for older browsers or devices. 4. Check your login and checkout processes. Many sites rely on third-party vendors for registration, e-commerce, mailing list sign-up, and/or event registration. 
While most trustworthy vendors already deliver these pages over HTTPS connections, verify that is the case. Make sure your vendors offer your visitors the same secure connection your site does. 5. After the switch, check your links. Verify that your site links work. Follow your web hosting provider's instructions to make sure that every request for an insecure page (HTTP) redirects automatically to one delivered over a secure connection (HTTPS). You may need to make some additional changes to your content management system. For example, at Dreamhost, you will need to make additional adjustments to WordPress settings. Gone HTTPS yet? At the time of this writing, we're just two months away from when Chrome begins to deliver more aggressive alerts to warn of insecure pages. Hopefully, you've already secured the necessary pages on your site. But, that's just the first step. For most websites, there's little downside to moving to HTTPS as soon as possible. Article source
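The link check in step 5 can be partially automated. The sketch below uses Python's standard `html.parser` module to list any plain `http://` URLs in `href` or `src` attributes of a page's HTML, the kind of references that cause mixed-content or "Not secure" warnings after a switch to HTTPS. It is a local sanity check on markup you supply, not a crawler, and the sample markup is invented for illustration.

```python
from html.parser import HTMLParser

class InsecureURLFinder(HTMLParser):
    """Collect plain-http (non-HTTPS) URLs from href/src attributes."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value and value.startswith("http://"):
                self.insecure.append(value)

def find_insecure_urls(html):
    finder = InsecureURLFinder()
    finder.feed(html)
    return finder.insecure

sample = (
    '<a href="https://example.test/about">About</a>'
    '<img src="http://example.test/logo.png">'
    '<script src="http://cdn.example.test/app.js"></script>'
)
print(find_insecure_urls(sample))
```

Running this against each template of your site after the switch gives a quick list of references that still need to be updated to HTTPS (or to protocol-relative/absolute HTTPS URLs).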
  25. Malicious websites promoting scams, distributing malware and collecting phished credentials pervade the web. As quickly as we block or blacklist them, criminals set up new domain names to support their activities. Now a research team including Princeton University computer science professor Nick Feamster and recently graduated Ph.D. student Shuang Hao has developed a technique to make it more difficult to register new domains for nefarious purposes. In a paper presented at the 2016 ACM Conference on Computer and Communications Security on Oct. 27, the researchers describe a system called PREDATOR that distinguishes between legitimate and malicious purchasers of new websites. In doing so, the system yields important insights into how those two groups behave differently online even before the malicious users have done anything obviously bad or harmful. These early signs of likely evil-doers help security professionals take preemptive measures, instead of waiting for a security threat to surface. "The intuition has always been that the way that malicious actors use online resources somehow differs fundamentally from the way legitimate actors use them," Feamster explained. "We were looking for those signals: what is it about a domain name that makes it automatically identifiable as a bad domain name?" Feamster, the acting director of Princeton's Center for Information Technology Policy, will be participating in the upcoming fourth Princeton-Fung Global Forum, which is focused on cybersecurity. The event will be held March 20-21, 2017, in Berlin. Once a website begins to be used for malicious purposes — when it's linked to in spam email campaigns, for instance, or when it installs malicious code on visitors' machines — then defenders can flag it as bad and start blocking it. But by then, the site has already been used for the very kinds of behavior that we want to prevent. 
PREDATOR, which stands for Proactive Recognition and Elimination of Domain Abuse at Time-Of-Registration, gets ahead of the curve. The researchers' techniques rely on the assumption that malicious users will exhibit registration behavior that differs from that of normal users, such as buying and registering lots of domains at once to take advantage of bulk discounts, so that they can quickly and cheaply adapt when their sites are noticed and blacklisted. Additionally, criminals will often register multiple sites using slight variations on names: changing words like "home" and "homes" or switching word orders in phrases. By identifying such patterns, Feamster and his collaborators were able to start sifting through the more than 80,000 new domains registered every day to preemptively identify which ones were most likely to be used for harm. Testing their results against known blacklisted websites, they found that PREDATOR detected 70 percent of malicious websites based solely on information known at the time those domains were first registered. The false positive rate of the PREDATOR system, or rate of legitimate sites that were incorrectly identified as malicious by the tool, was only 0.35 percent. Being able to detect malicious sites at the moment of registration, before they're being used, can have multiple security benefits, Feamster said. Those sites can be blocked sooner, making it difficult to use them to cause as much harm — or, indeed, any harm at all if the operators are not permitted to purchase them. "PREDATOR can achieve early detection, often days or weeks before existing blacklists, which generally cannot detect domain abuse until an attack is already underway," the authors write in their paper. "The key advantage is to respond promptly for defense and limit the window during which miscreants might profitably use a domain." 
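PREDATOR's actual feature set is not reproduced here, but the kind of lexical signal described above, near-duplicate names registered in the same batch, can be sketched with a simple token-overlap comparison. The tokenization rule and the 0.5 threshold below are illustrative choices of mine, not values from the paper.

```python
import re
from itertools import combinations

def tokens(domain):
    """Split a domain's registrable label into crude word tokens,
    treating hyphens, underscores and digits as separators."""
    label = domain.split(".")[0]
    return set(re.split(r"[-_0-9]+", label)) - {""}

def lookalike(a, b):
    """Flag domain pairs whose token sets largely overlap, e.g.
    word-order variants like 'cheap-homes' vs 'homes-cheap'."""
    ta, tb = tokens(a), tokens(b)
    overlap = len(ta & tb) / max(len(ta | tb), 1)
    return overlap >= 0.5

def suspicious_batch(domains):
    """Return pairs from one registration batch that look like variants."""
    return [(a, b) for a, b in combinations(domains, 2) if lookalike(a, b)]

batch = ["cheap-homes.example", "homes-cheap.example", "weather-report.example"]
print(suspicious_batch(batch))
```

A real system would combine signals like this with registrar, pricing, and timing features; token overlap alone would miss singular/plural tweaks ("home" vs "homes") and would need edit-distance or similar measures to catch them.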
Additionally, existing blocking tools, which rely on detecting malicious activity from websites and then blocking them, allow criminals to continue purchasing new websites. Cutting off the operators of malicious websites at the moment of registration prevents this perpetual cat-and-mouse dynamic. This more permanent form of protection against online threats is a rarity in the field of computer security, where adversaries often evade new lines of defense easily, the researchers said. For the PREDATOR system to help everyday internet users, it will have to be used by existing domain blacklist services, like Spamhaus, that maintain lists of blocked websites, or by registrars, like GoDaddy.com, that sell new domain names. "Part of what we envision is if a registrar is trying to make a decision about whether to register a domain name, then if PREDATOR suggests that domain name might be used for malicious ends, the registrar can at least wait and do more due diligence before it moves forward," Feamster said. Although the registrars still must manually review domain registration attempts, PREDATOR offers them an effective tool to predict potential abuse. "Prior to work like this, I don't think a registrar would have a very easy go-to method for even figuring out if the domains they registered would turn out to be malicious," Feamster said. In addition to Feamster, the authors include: Shuang Hao, now at the University of California-Santa Barbara; Alex Kantchelian and Vern Paxson, University of California-Berkeley; and Brad Miller, Google. The work was supported in part by the National Science Foundation and Google. Article source