Showing results for tags 'websites'.
Found 61 results

  1. Dangerous vulnerabilities are present in a large number of today's websites, and the percentage is only going to keep growing, according to a new report by Acunetix. The automated web application security software company released its annual Web Application Vulnerability Report 2016, based on 45,000 website and network scans performed on 5,700 websites over the past year. The results are worrying. More than half (55 percent) of websites have at least one high-severity vulnerability, a nine percent increase on last year's report, and more than four fifths (84 percent) have medium-severity vulnerabilities. There has been a small, "encouraging" reduction in SQL injection and cross-site scripting, but the company notes these are "just two of the top three": the third, vulnerable JavaScript libraries, has seen a significant increase of more than 100 percent. Among the perimeter network vulnerabilities, Secure Shell (SSH) related ones are considered the "most prominent". "Our research clearly shows high-severity web app flaws are on the rise and older vulnerabilities are still hanging around", says Chris Martin, General Manager at Acunetix. "Having a plan in place to prioritize these problems -- and actually start tackling them -- is critical. Using an automated vulnerability scanner such as Acunetix is the first step to protect your brand’s online real estate". The full report can be found at this link. Article source
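SQL injection, one of the report's top three categories, stems from mixing untrusted input directly into a query. A minimal sketch of the flaw and its fix, using Python's built-in sqlite3 and a made-up users table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"

# Vulnerable: untrusted input concatenated straight into the SQL string.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the injected OR clause matched every row
print(safe)        # []           -- no user is literally named that
```

The parameterized form is the standard remedy an automated scanner would point you toward.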
  2. We talked to Steven Burn (aka MysteryFCM), the lead of our Web Protection team and owner of hpHosts, and asked him about the strengths and possible improvements of the Malicious Website Protection module that comes with Malwarebytes Premium. To start off, let us explain what the module does and how you can use it. What does the Malicious Website Protection do? The Malicious Website Protection module allows for the identification and subsequent blocking of both malicious domains and IPs by intercepting DNS queries made by everything from your browser and security/conferencing software to those lovely little “we’ll clean up your system, honest” pieces of up-to-no-good software. Put simply, it covers pretty much every application that pulls in data from the internet, even one doing something as simple as checking for updates to itself (e.g. the operating system). Ordinarily, these queries would go from the application to your router and on to the ISP (or a third-party provider such as OpenDNS), depending, of course, on your setup. In this case however, the module, like a firewall, intercepts these queries to identify malicious traffic that could harm your system or steal your data. In layman's terms, it blocks traffic to and from domains and IP addresses that we consider dangerous or extremely annoying. Reasons could be:
     - hosting malware or PUPs
     - Tech Support Scammer sites
     - phishing scams
     - other kinds of scams
     - compromised sites
     - fraud
     - illegal pharma
    How to use In the highlighted area shown below, you can enable/disable Malicious Website Protection with the radio buttons. Even if you are a careful surfer, when you are using the Malicious Website Protection, you may see this type of notification from time to time: What do the items in such a notification mean? Domain: if available, this shows the domain that was requested. If no domain is mentioned, this usually means the IP address was provided directly by the Process. IP: This is the IP address that the domain resolved to, and that is being blocked.
Port: This is the port on the system that was used for the contact. Type: This shows the direction of the traffic. Process: This is the executable (program) that tried to make the contact. If this is not a browser or another program that displays advertisements, e.g. Skype, it is potentially worrying, especially if you do not recognize the filename. You could be dealing with a Trojan or adware. If it is a browser, it is important to note whether the Domain is the site you wanted to visit or not. There could be a redirect or a malicious element on the site you wanted to reach. Manage Web Exclusions If you are sure that the contacted domain is safe, or you want to visit a site despite the warning, the exclusion option allows you to do that without having to disable the protection entirely. In the notification you will see a link labeled Manage Web Exclusions. Clicking that link will take you to a screen which you can also reach by clicking Settings > Web Exclusions in the program. The screen offers you three types of exclusions:
     - by IP
     - by Domain
     - by Process
    If you really feel you need to use this option, we advise you to be as restrictive as possible: use “Add Domain” before you resort to “Add IP”, and try to avoid giving a Process free rein at all times, because some malware is capable of injecting malicious code into trusted processes. Reasons why your browser could be causing alarms If the Process is your browser, this does not necessarily mean that there is an infection. There could be something wrong with the site you are visiting or one of the advertisements it’s displaying. This happens to even the most reputable sites sometimes. Only if you don’t have a browser window active and you still see blocks attributing the contact to the browser process is there reason for concern. Questions for Steven Burn Q: If someone notices that his site is blocked and he feels this is unjust, what is the best way to proceed?
Contact us, either via support, the forums or indeed, email – though checking their site/server first would be advised. Author’s note: It is surprising how often people don’t realize that their site has been compromised or otherwise abused by threat actors. Q: Is it true that some sites are impossible to block with the software as it now is? And are there any plans to change that? This is indeed true. Some sites can’t easily be blocked without seriously disrupting customers visiting related, but not malicious, sites. In these cases (which are few) we work directly with the offending site’s host and even law enforcement to get it taken down as quickly as possible. Q: If I find them annoying, is there a way to turn off the notifications without disabling the protection? Not without turning off all notifications, that I am aware of. Author’s note: Disabling notifications can be done under “General Settings”, but it’s not recommended. Q: Is there a place online where I can find out why a certain Domain/IP is blocked? hpHosts, VirusTotal, Scumware, abuse.ch, amongst a plethora of others. We don’t currently provide specifics beyond what we’ve made public (e.g. hpHosts), as these are held on internal-only systems. Article source
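The interception described above boils down to a simple rule: let a DNS query through only if neither the requested domain nor the IP it resolves to is on a blocklist. A toy sketch, with invented blocklist entries and a stand-in resolver instead of real DNS:

```python
BLOCKED_DOMAINS = {"evil.example", "scam.example"}   # hypothetical entries
BLOCKED_IPS = {"203.0.113.66"}                       # a TEST-NET-3 address

def fake_resolve(domain):
    # Stand-in for a real DNS lookup.
    return {"evil.example": "203.0.113.66",
            "good.example": "198.51.100.7"}.get(domain)

def filter_query(domain):
    """Return the resolved IP, or None if the module would block the contact."""
    if domain in BLOCKED_DOMAINS:
        return None                      # blocked by domain
    ip = fake_resolve(domain)
    if ip in BLOCKED_IPS:
        return None                      # blocked by IP
    return ip

print(filter_query("evil.example"))   # None
print(filter_query("good.example"))   # 198.51.100.7
```

The real module also inspects direct-by-IP traffic, which is why the notification distinguishes Domain and IP fields; this sketch only covers the domain path.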
  3. The internet is an amazing place where you can find more than 1 billion websites. Along with some fantastic sites there are some weird ones too. It’s impossible for a person to visit every website, so we have gathered some strange websites from around the internet. Some of them are funny, some are really boring, and for a few you simply can’t answer why they exist. We haven’t included adult sites here, so you can click on all the links without any hesitation. Enjoy the list! 1. Iloveyoulikeafatladylovesapples: Feel the hunger of the fat lady until you let her eat enough apples. The website is completely useless, but you can still enjoy the graphics and background music. 2. Thenicestplaceontheinter.net: A really sweet website that offers free hugs. Go get one. 3. SciencevsMagic.net/Tes: You could mix the words amazing and weird to describe this one; it's also hard on the eyes. 4. Michaeljfoxnews: Feel the earthquake on your computer. 5. Pointerpointer: I don’t know where they found these pictures, but this is how you get to a specific point. 6. Heeeeeeeey: Just click the link and get the heeey hooo party feeling. 7. wwwdotcom: A serious tip for you. 8. Rainymood: Rain makes everything better, so just sit back and enjoy the sound effects to lighten your mood. 9. Isitchristmas: The name says it all. Maybe the website has been designed for people suffering from short-term memory loss. 10. Cat-bounce: And that’s how humans play with the emotions of cats. 11. 111111111111111111111111111111111111111111111111111111111111: Believe me, I have no idea what the exact purpose of this website is. But it seems the website owner is not really a fan of Arnold Schwarzenegger. 12. Heyyeyaaeyaaaeyaeyaa: Catchy music with special cartoon characters for our special readers. 13. Thisman: This is the height of weirdness! The website says that hundreds of people dream about this face. No, I don’t. 14. Breakglasstosoundalarm: The thing you've wanted to do once in your life is here. 15.
Internetlivestats: I don’t think this is live data, but you will get an idea of a few internet stats. 16. Simonpanrucker: No words to explain this useless thing; kindly decide for yourself how weird it is. 17. Ilooklikebarackobama: You might want to reply to this website, “No you don’t, not even a bit”. 18. Corgiorgy: The cute dog army. 19. Haneke: If you like complicated things and pay close attention to details, you won’t regret visiting this website. 20. Fearthegaychicken: The question is, what makes you think this chicken is gay? Is it the background color or the sound? 21. Koalastothemax: Amazing creativity and fun with pixels. 22. Procatinator: Cats' popularity is increasing day by day, and somehow this website is part of the reason. 23. Youfellasleepwatchingadvd: If your mom doesn’t allow you to watch TV, you could spend some time here. 24. Essaytyper: This is the place where you become a professional typist in no time. 25. Feedthehead: My advice is, don’t just feed the head, play with the whole face. 26. Nooooooooooooooo: If your boss gives you extra workload, you can reply with this link. 27. Zoomquilt: The weirdness tends to infinity. Even a telescope can’t look this far. 28. Staggeringbeauty: Just shake the mouse and see the snake’s reaction. 29. Anasomnia: This is how dreams become nightmares. 30. Eelslap: Slap as hard and as many times as you want. He won’t mind. Source
  4. The long and painful transition is getting there Accessing websites via IPv6 is not only comparable in speed to IPv4, but is actually faster when visiting one in five of the world's most popular sites, according to German researchers. In a new paper, Vaibhav Bajpai and Jürgen Schönwälder from the University of Bremen looked at the response times of the internet's 10,000 most-visited websites (according to Alexa) over both IPv4 and IPv6, and concluded not only that earlier delays have been removed from the newer internet protocol, but that a connection is sometimes faster. Although IPv4 connections remained faster the vast majority of the time, the researchers noted that in those cases IPv6 was rarely more than 1ms slower; i.e., the difference was negligible. That is a significant improvement from just a few years ago, when IPv6 connections were often so slow that browsers actually timed out, which itself added yet one more reason for people not to transition their networks and systems to the incompatible protocol. As the paper notes, much of the problem was attributed to two technologies that were intended to assist in shifting traffic from IPv4 to IPv6: the Teredo automatic tunneling technology and 6to4 relays. In both cases, the technologies added "noticeable latency" to connections. In 2013, Microsoft announced it would stop using Teredo on Windows and would kill off its Teredo servers the following year. In 2015, the 6to4 prefix was phased out. The researchers noted, using data from 2013 to 2016, that the result was a significant increase in speed over IPv6, with Teredo/6to4 now being used for only 0.01 per cent of traffic. Fail whale Other research has shown a huge drop in IPv6 failure rates, from 40 per cent in 2011 to 3.5 per cent in 2015. Still a significant amount, but no longer a barrier to adoption.
What is interesting to note is that some browsers actually favor the use of IPv6 over IPv4 and include a timer to decide whether to shift over to IPv4. Firefox and Opera used parallel TCP connections over both IPv4 and IPv6, while Apple used a 25ms timer in favor of IPv6 and Google used a 300ms timer in its Chrome browser. The researchers used the "Happy Eyeballs" (also known as Fast Fallback) algorithm, standardized by the Internet Engineering Task Force in 2012 (see RFC 6555), to gather their data. The algorithm has applications request both IPv4 and IPv6 connections and then connect using whichever comes back first. The timer can be used to give one a small handicap; by default Happy Eyeballs gives IPv6 a 300ms advantage. As to which websites were actually faster over IPv6: out of the most well known, Netflix leads the way, followed by Yahoo, with YouTube and Google behind them. Facebook, Wikipedia and Microsoft run at basically the same speed regardless of protocol. Despite the good news that the internet's future protocol is increasingly keeping up with its current ubiquitous one, there are still pockets of trouble: the researchers note that for one per cent of the 10,000 top websites, the IPv6 delay was still over 100ms. A presentation of the paper recorded at a recent conference is available online. Article source
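The Happy Eyeballs race described above can be sketched without real sockets: start the IPv6 attempt immediately, give IPv4 the configured handicap, and take whichever succeeds first. The connect callables below are hypothetical stand-ins for actual socket connections:

```python
import threading
import time

def happy_eyeballs(connect_v6, connect_v4, head_start=0.3):
    """RFC 6555-style race: the IPv4 attempt starts only after IPv6's head start.

    connect_v6 / connect_v4 are callables returning a connection (any value)
    or raising OSError; here they stand in for real socket code.
    """
    winner = []
    done = threading.Event()

    def attempt(family, connect, delay):
        time.sleep(delay)               # IPv4 waits out the head start
        try:
            conn = connect()
        except OSError:
            return                      # this family failed; stay silent
        if not done.is_set():
            winner.append((family, conn))
            done.set()

    for args in (("ipv6", connect_v6, 0.0), ("ipv4", connect_v4, head_start)):
        threading.Thread(target=attempt, args=args, daemon=True).start()
    done.wait(timeout=5.0)
    return winner[0] if winner else None

# IPv6 answers first, so it wins despite IPv4 being available too:
family, _ = happy_eyeballs(lambda: "v6-conn", lambda: "v4-conn")
print(family)  # ipv6
```

If the IPv6 callable raises, the IPv4 attempt wins after the 300ms handicap, which is exactly the fallback behaviour the researchers measured.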
  5. This week The Pirate Bay quietly celebrated its 13th anniversary. Where other giants have fallen in the past, the notorious Pirate ship has stayed afloat. Today we chat with the TPB-team to discuss their remarkable achievement. Hollywood hoped that it would never happen, but this week The Pirate Bay quietly turned thirteen years old. The site was founded in 2003 by Swedish pro-culture organization Piratbyrån (Piracy Bureau). The idea was to create the first public file-sharing network in Sweden, but the site soon turned into the global file-sharing icon it is today. Over the years there have been numerous attempts to shut the site down. Following pressure from the United States, Swedish authorities raided the site in 2006, only to see it come back stronger. The criminal convictions of the site’s founders didn’t kill the site either, nor did any of the subsequent attempts to take it offline. The Pirate Bay is still very much ‘alive’ today. That’s quite an achievement in itself, given all the other sites that have fallen over the years. Just last month KickassTorrents shut down, followed by Torrentz a few days ago. Many KickassTorrents and Torrentz users are now turning to TPB to get their daily dose of torrents. As a result, The Pirate Bay is now the most visited torrent site, once again. TorrentFreak spoke to several members of the TPB-crew. While they are not happy with the circumstances, they do say that the site has an important role to fulfil in the torrent community. “TPB is as important today as it was yesterday, and its role in being the galaxy’s most resilient torrent site will continue for the foreseeable future,” Spud17 says. “Sure, TPB has its flaws and glitches but it’s still the go-to site for all our media needs, and I can see TPB still being around in 20 or 30 years' time, even if the technology changes,” she adds.
Veteran TPB-crew member Xe agrees that TPB isn’t perfect but points to the site’s resilience as a crucial factor that’s particularly important today. “TPB ain’t perfect. There are plenty of things wrong with it, but it is simple, steadfast and true,” Xe tells TorrentFreak. “So it’s no real surprise that it is once more the destination of choice or that it has survived for so long in spite of the inevitable turnover of crew.” And resilient it is. Thirteen years after the site came online, The Pirate Bay is the “King of Torrents” once again. Finally, we close with a yearly overview of the top five torrent sites of the last decade. Notably, The Pirate Bay is the only site that appears in the list every year, which is perhaps the best illustration of the impact it had, and still has today.
    2007: 1. TorrentSpy, 2. Mininova, 3. The Pirate Bay, 4. isoHunt, 5. Demonoid
    2008: 1. Mininova, 2. isoHunt, 3. The Pirate Bay, 4. Torrentz, 5. BTJunkie
    2009: 1. The Pirate Bay, 2. Mininova, 3. isoHunt, 4. Torrentz, 5. Torrentreactor
    2010: 1. The Pirate Bay, 2. Torrentz, 3. isoHunt, 4. Mininova, 5. BTJunkie
    2011: 1. The Pirate Bay, 2. Torrentz, 3. isoHunt, 4. KickassTorrents, 5. BTJunkie
    2012: 1. The Pirate Bay, 2. Torrentz.com, 3. KickassTorrents, 4. isoHunt, 5. BTJunkie
    2013: 1. The Pirate Bay, 2. KickassTorrents, 3. Torrentz, 4. ExtraTorrent, 5. 1337X
    2014: 1. The Pirate Bay, 2. KickassTorrents, 3. Torrentz, 4. ExtraTorrent, 5. YIFY-Torrents
    2015: 1. KickassTorrents, 2. Torrentz.com, 3. ExtraTorrent, 4. The Pirate Bay, 5. YTS
    2016: 1. KickassTorrents, 2. The Pirate Bay, 3. ExtraTorrent, 4. Torrentz, 5. RARBG
    Today: 1. The Pirate Bay, 2. ExtraTorrent, 3. RARBG, 4. YTS.AG, 5. 1337X
    TorrentFreak
  6. Is it illegal to block ads? No. According to multiple court cases, the choice to filter your own HTTP requests is legal and ultimately up to you. It’s your computer (or your mobile device); you have the right to decide which content and scripts enter your system. The best way to understand this right is that adblockers are basically “selective downloaders”: they decide which content to download and view, and which content to not download and ignore. That simple choice has been protected multiple times in multiple court decisions. But while that means ‘not downloading ads’ is inherently legal, it does not mean all adblockers are operating legally. The problem is that “selective downloading” isn’t all adblockers are doing. The rabbit hole goes far deeper than just ‘not downloading advertising’… Adblockers and Anti-Circumvention laws Adblockers aren’t just blocking ads. Adblockers also employ sophisticated circumvention technologies that evade the defensive measures employed by publishers. This crucial feature marks an important legal line in the sand. “Anti-adblock” or “access control” technologies (like BlockAdblock) restrict access to the copyrighted content of websites so that readers can access content only in a manner the publisher approves of. Specifically, access control technologies like BlockAdblock restrict browsers equipped with adblock plugins from accessing a website’s content. Users who attempt to selectively download only the copyrighted content, without the accompanying advertising, may be barred from access. And here’s the rub: while blocking ads has been deemed legal in court, circumvention of access controls is expressly against the law in Europe, the United States and all signatory nations of the World Intellectual Property Organization’s Copyright Treaty.
In other words: you’re free to block ads, but as soon as an access-control technology enters the picture, you may not be within your rights to attempt to circumvent it by technological measures. And that’s exactly what so many adblockers attempt to do, with varying levels of success depending on the adblocker and the anti-adblock technology deployed in any given case. What laws are being broken? Anti-circumvention laws in the US, the EU and the countries that have signed the WIPO treaty are quite similar. Europe European national laws must reflect Article 6 of EU Directive 2001/29/EC: Member States shall provide adequate legal protection against the circumvention of any effective technological measures, which the person concerned carries out in the knowledge, or with reasonable grounds to know, that he or she is pursuing that objective. Member States shall provide adequate legal protection against the manufacture, import, distribution, sale, rental, advertisement for sale or rental, or possession for commercial purposes of devices, products or components or the provision of services which: (a) are promoted, advertised or marketed for the purpose of circumvention of, or (b) have only a limited commercially significant purpose or use other than to circumvent, or (c) are primarily designed, produced, adapted or performed for the purpose of enabling or facilitating the circumvention of, any effective technological measures. For the purposes of this Directive, the expression ‘technological measures’ means any technology, device or component that, in the normal course of its operation, is designed to prevent or restrict acts, in respect of works or other subject-matter, which are not authorised by the rightsholder… In other words, it’s not okay to circumvent technological measures which restrict access to a work. It’s also not okay to manufacture or distribute products whose purpose is to circumvent access controls.
And it’s up to the publisher / rights-holder to decide what the terms of said access are. United States Likewise, across the pond in the USA, the Digital Millennium Copyright Act (DMCA) includes several sections relevant to the circumvention of website access controls. Section 103 of the DMCA includes this very clear language: No person shall circumvent a technological measure that effectively controls access to a work protected under this title. And “technological measure” is defined as follows: (3)(B) a technological measure “effectively controls access to a work” if the measure, in the ordinary course of its operation, requires the application of information, or a process or a treatment, with the authority of the copyright owner, to gain access to the work. Clearly, any anti-adblock defense used to protect website content falls under the description of a “technological measure” requiring a process or treatment to gain access to the work (that “process or treatment” being the detection of no adblocker, in this particular case). For more information on how adblockers violate the DMCA, see the previous post: Is Adblock Plus violating the DMCA? It’s not adblocking that’s illegal; it’s the circumvention of ‘technological measures’ that is. Certainly, this doesn’t mean that blocking ads is illegal. But it does strongly suggest that the additional layer of technology employed by current-generation adblockers to circumvent the technological defenses of adblock-detection scripts is illegal. Anti-circumvention law was enacted for a purpose: to protect “remuneration schemes” and to “foster substantial investment in creativity and innovation”, to use the opening language of the EC Directive.
A harmonised legal framework on copyright and related rights … will foster substantial investment in creativity and innovation, including network infrastructure, and lead in turn to growth and increased competitiveness of European industry, both in the area of content provision and information technology and more generally across a wide range of industrial and cultural sectors. This will safeguard employment and encourage new job creation. Somewhere along the line that primary purpose appears to have been replaced with something less sustainable, and a whole lot less legal. Article source
  7. Mozilla plans to launch an update for the built-in password manager in Firefox that will make HTTP passwords work on HTTPS sites as well. If you currently use the built-in functionality to save passwords in Firefox, you may know that the manager distinguishes between the HTTP and HTTPS protocols. When you save a password for http://www.example.com/, it won't work on https://www.example.com/: when you visit the site using HTTPS later on, Firefox won't suggest the username and password previously saved over HTTP. One option was to save passwords for the HTTP and HTTPS sites separately; another was to open the password manager and copy the username and password manually whenever they were needed on the HTTPS version of a site. With more and more sites migrating to HTTPS, or at least providing users with an HTTPS option, it was time to evaluate the Firefox password manager's behavior in this regard. Firefox 49: HTTP passwords on HTTPS sites Mozilla decided to change the behavior in the following way starting with the release of Firefox 49: passwords saved for the HTTP protocol will work automatically when connected via HTTPS to the same site. In other words, if an HTTP password is stored in Firefox, it will be used for both the HTTP and HTTPS versions of the site once Firefox 49 is released. The other way around does not work, however: passwords saved explicitly for HTTPS won't be used when a user connects to the HTTP version of the site. The main reason for this is security: because HTTP does not use encryption, the password and username could easily be recorded by third parties. Check out the bug listing on Bugzilla if you are interested in the discussion that led to the change in Firefox 49. Closing Words Firefox users who use the password manager of the web browser may notice the change once their version of the browser is updated to version 49. It should make things a bit more comfortable for those users, especially if a lot of HTTP passwords are saved already.
With more and more sites migrating over to HTTPS, it is likely that this will be beneficial to users of the browser. (via Sören) Article source
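The new rule is deliberately asymmetric: an entry saved under http:// also satisfies an https:// request for the same host, but never the reverse. A small sketch of that lookup logic, with an invented saved-logins structure:

```python
def find_login(saved, scheme, host):
    """Return (user, password) for scheme://host, mimicking Firefox 49's rule.

    `saved` maps (scheme, host) -> (user, password). HTTPS lookups may fall
    back to an HTTP-saved entry; HTTP lookups never use HTTPS entries, since
    plain HTTP would expose the credentials on the wire.
    """
    if (scheme, host) in saved:
        return saved[(scheme, host)]
    if scheme == "https" and ("http", host) in saved:
        return saved[("http", host)]
    return None

logins = {("http", "www.example.com"): ("user", "hunter2")}
print(find_login(logins, "https", "www.example.com"))  # ('user', 'hunter2')
print(find_login(logins, "http", "other.example"))     # None
```

An exact-scheme match still wins when both entries exist, so separately saved HTTPS credentials are unaffected.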
  8. The coolest talk at this year's Black Hat must have been the one by Sean Devlin and Hanno Böck. The talk summarized a paper from earlier this year, in a very cool way: Sean walked on stage and announced that he didn't have his slides. He then said that it didn't matter, because he had a good idea of how to retrieve them. He proceeded to open his browser and navigate to a malicious webpage. Some JavaScript there seemed to send various requests to a website in his place, until some MITM attacker found what they came for. The page refreshed and the address bar now showed https://careers.mi5.gov.uk as well as a shiny green lock. But instead of the content we would have expected, the white title of their talk was blazing on a dark background. What happened is that a MITM attacker tampered with MI5's website and injected the slides, in HTML format, in there. They then went ahead and gave the whole presentation via that same MI5 webpage. How did it work? The idea is that repeating a nonce in AES-GCM is... BAD. Here's a diagram from Wikipedia. You can't see it, but the counter has a unique nonce prepended to it. It's supposed to change for every different message you're encrypting. AES-GCM is THE AEAD mode. We've been using it mostly because it's a nice all-in-one function that does encryption and authentication for you. So instead of shooting yourself in the foot trying to MAC then-and-or Encrypt, an AEAD mode does all of that for you securely. We're not too happy with it though, and we're looking for alternatives in the CAESAR competition (there is also AES-SIV). The presentation had an interesting slide with some opinions: "We conclude that common implementations of GCM are potentially vulnerable to authentication key recovery via cache timing attacks." (Emilia Käsper, Peter Schwabe, 2009) "AES-GCM so easily leads to timing side-channels that I'd like to put it into Room 101."
(Adam Langley, 2013) "The fragility of AES-GCM authentication algorithm" (Shay Gueron, Vlad Krasnov, 2013) "GCM is extremely fragile" (Kenny Paterson, 2015) One of the bad things is that if you ever repeat a nonce, and someone malicious sees it, that person will be able to recover the authentication key. It's the H in the AES-GCM diagram, and it is obtained by encrypting an all-zero block with the encryption key K. If you want to know how, check Antoine Joux's comment on AES-GCM. Now, with this attack we lose integrity/authentication as soon as a nonce repeats. This means we can modify the ciphertext in transit and no one will realize it. But modifying the ciphertext doesn't mean we can modify the plaintext, right? Wait for it... Since AES-GCM is basically counter mode (CTR mode) with a GMAC, the attacker can mount the same kind of bitflip attacks he can on CTR mode. This is pretty bad. In the context of TLS, you often (almost always) know what the website will send to a victim, and that's how you can modify the page with anything you want. Look, this is CTR mode, if you don't remember it. If you know the nonce and the plaintext, the whole thing fundamentally behaves like a stream cipher and you can XOR the keystream out of the ciphertext. After that, you can encrypt something else by XOR'ing that something else with the keystream. They found a pretty big number of vulnerable targets by just sending dozens of messages across the whole IPv4 space. My thoughts Now, here's how the TLS 1.2 specification describes the structure of the nonce + counter in AES-GCM: [salt (4) + nonce (8) + counter (4)]. The whole thing is one AES block (16 bytes), and the salt is a fixed 4-byte part derived from the key exchange happening during TLS' handshake. The only two purposes of the salt I could think of are:
    - protecting against multi-target attacks on AES
    - making the nonce smaller, which makes nonce-repeating attacks easier
    Pick the reason you prefer.
Now, if you picked the second reason, let's recap: the nonce is the part that should be different for every message you encrypt. Some implementations increment it like a counter; others generate it at random. This is interesting to us because the birthday paradox tells us that with random 8-byte nonces we'll have more than a 50% chance of seeing a nonce repeat after \(2^{32}\) messages. Isn't that pretty low? Article source
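The bitflip attack the talk relies on follows directly from CTR's stream-cipher structure: ciphertext = plaintext XOR keystream, so a known plaintext under a reused nonce hands the attacker the keystream. A toy demonstration, with a fixed byte string standing in for the AES-CTR keystream:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(32))            # stand-in for AES(K, nonce||counter) output
known_pt  = b"Send $100 to Alice"       # the attacker knows/guesses this page content
ct        = xor(known_pt, keystream)    # what the attacker observes on the wire

# Known plaintext + ciphertext under the reused nonce => the keystream:
recovered = xor(ct, known_pt)

# Any same-length forgery now "encrypts" correctly under that nonce
# (with H recovered, the GMAC tag can be fixed up as well):
forged_ct = xor(b"Send $999 to Chuck", recovered)
print(xor(forged_ct, keystream))  # b'Send $999 to Chuck'
```

This is exactly why integrity collapses with the nonce: without a valid tag check, the victim decrypts the forged ciphertext to attacker-chosen plaintext.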
  9. If I ask someone how their life is going, most people would say they are busy: busy making their lives more comfortable and wealthy. But do you think this is all you want from your life? Yes, no doubt earning money is very important, but having fun in life is equally important. You can’t act like a machine 24/7. Going out with your friends or family every weekend sounds good but isn't feasible at all; it is possible once or twice a month. So, what is another way to get rid of all the nonsense and headache of your schedule? Well, watching television is one of the easiest and best ways to have some fun, and almost 80% of you would prefer movies. Unfortunately, channels repeatedly telecast the same movies all the time, which becomes boring. So, is there any way to get the latest movies and TV shows without paying something extra? 15 Best Websites to Watch Movies Online Going to a movie hall sounds good, but you can’t go out every Friday or weekend to watch the latest release. Thankfully there are various good websites that allow you to watch movies online without downloading them, which means you can watch movies right on your smartphone, Smart TV, desktop, laptop or Mac. Want to know what the best websites to watch movies online are? Have a look! 1. Zmovie.tw Zmovie is one of the best movie streaming websites; it not only allows you to watch movies but also offers a wide range of TV shows. You can easily find newly released movies, upcoming movies and featured movies just by typing the name of the movie in the search box. All the movies provided by Zmovie are of good quality and worth watching. 2. MyDownloadTube.com My Download Tube is one of the best websites that not only allows you to watch free movies online but also lets you download them. All the movies provided on My Download Tube are of good quality. You can watch and download hundreds of the latest and most popular movies, TV shows and games for free. 3.
IWannaWatch IWannaWatch is one of the most popular websites for accessing online movies for free. The website provides different options for finding your favorite movie, either by browsing the latest movies or by typing the name of a movie in the search box. 4. TinklePad.is TinklePad is another cool free movie streaming website that gives you a nice platform to watch movies online without downloading them first. The best feature provided by this website is that you can find out which movies are worth watching by checking their ratings. You can also check out the most voted, most rated or most popular movies. Searching for movies by genre is quite simple: go to the Genres option and you will get a huge collection of movies of your choice. 5. Amazon.com Amazon has become one of the favorite platforms for instant streaming of TV shows and movies, letting you watch them in an easier and faster way. Amazon Prime provides a 30-day free trial to check out unlimited movies and TV shows. It includes a huge collection of streaming video and other streaming content that you can access anywhere and anytime with an easy signup. 6. Primewire.ag Primewire is one of the top-rated free movie streaming websites, providing detailed and updated information about every latest movie, video and TV show. So, if you want to know about the latest movies, Primewire will take care of you by providing the information you are looking for. You can join the forum if you want to learn about a movie before watching it. 7. FILMLINKS4U.to If you are a Bollywood fan, then FILMLINKS4U will provide you with every newly released movie. If you want to watch movies in different languages like Hindi, English, Tamil, Punjabi, Gujarati or Bengali, FILMLINKS4U is the right place for you. You can access this website either to watch a trailer or to watch a full movie online. 8. Putlocker.ac Putlocker is another good website that lets you watch movies online for free. 
Putlocker provides you with every possible way to search for the movies you are looking for, including the search tab. You can find any TV serial or movie by genre, such as history, mystery, crime, comedy, romance, sport, animation, biography, family, drama and more. 9. MovieTubeOnline.cc Movie Tube Online lets you enjoy free movies online without downloading. You can catch the latest movies and TV shows from the huge collection provided by Movie Tube Online. All you need to do is click on a movie and watch. 10. TopMoviesOnline.cc Top Movies Online lets you watch the best cinema movies without any hassle. All the movies provided come from an awesome collection of blockbusters that you surely won't want to miss at any cost. 11. LosMovies.is LosMovies is another good website, with a clean, simple and user-friendly interface and an excellent search bar. The website has a wide range of movies; all you need to do is sign up to watch new and popular movies online without downloading. 12. TubePlus.is Tube+ is a cool online movie streaming website that lets you watch movies and TV shows/episodes. Every movie comes with a short description and rating so that you can judge it before actually investing time in watching it. 13. Vumoo.ch Vumoo provides a nice, endless collection of HD movies. This movie streaming website is attractive, well organized and neatly structured, providing every possible piece of information about any movie. You can get a short description, rating, stars, creator and other related information for any movie. Vumoo also provides social networking links through which you can share a movie on any social media platform. 14. WatchMovies-Pro.com Watch Movies Pro delivers movies to be watched online without downloading, in HD and good quality. You can browse movies by top rated or most recommended, or search for any movie by name from the search bar. Before watching any movie on Watch Movies Pro, you need to sign up or join it. 15. 
FreeFullMovies.org Free Full Movies strives to bring you the best in the world of online movie streaming, with a huge range of movies. You can find the latest as well as older movies and view them in the comfort of your home. Article source
  10. However careful you are online, it's always possible to get caught out by a maliciously coded website or advert that leads to malware ending up on your machine. Online safety service SafeDNS is launching a new system for detecting malicious internet resources, which it claims blocks close to 100 percent of them for better online protection. Using continuous machine learning and user behavior analysis, the new SafeDNS system takes a step forward from static lists of categorized resources to dynamically created databases. The SafeDNS research team has produced a technology able to detect malicious internet resources with 98 percent precision. "This unparalleled technology developed by the company's research team takes SafeDNS to a different, much higher level -- on par with global leaders of the industry, as our ability to detect and filter out malware and botnets has significantly improved," says Dmitry Vostretsov, CEO of SafeDNS. "The technology gives SafeDNS a competitive edge as it detects malicious resources overlooked by the analogous systems of other vendors". It works by processing and analyzing data from the company's filtering service to pinpoint malicious resources. One of the most important attributes used is group activity: malicious resources tend to be requested by a group of users over a short period of time, such as a few hours. If a resource is legitimate, it is requested by occasional users rather than a fixed group of them; this pattern can be used to identify and blacklist malicious sites. Sites are ranked on a continuous basis and the information fed into the SafeDNS database, which drives its filtering system. Information provided by the new system is also available for use through the company's open API of categorized internet resources. More information on SafeDNS for home and business use is available on the company's website. Article source
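The "group activity" pattern the article describes can be sketched as a toy heuristic: a domain requested by many distinct users within a short burst looks suspicious, while a domain visited by occasional users over a long stretch looks legitimate. This is purely illustrative (SafeDNS has not published its algorithm); the function name, log format and thresholds are all invented for the example:

```python
from collections import defaultdict

def flag_suspicious_domains(query_log, min_distinct_users=20, max_spread_hours=6.0):
    """Flag domains whose requests cluster into a short burst from many
    distinct users -- the 'group activity' pattern from the article.
    query_log is an iterable of (timestamp_hours, user_id, domain) tuples.
    Thresholds are illustrative, not SafeDNS's actual parameters."""
    by_domain = defaultdict(list)
    for ts, user, domain in query_log:
        by_domain[domain].append((ts, user))

    suspicious = set()
    for domain, hits in by_domain.items():
        times = [ts for ts, _ in hits]
        users = {user for _, user in hits}
        spread = max(times) - min(times)
        # Many distinct users within a narrow time window -> burst pattern
        if len(users) >= min_distinct_users and spread <= max_spread_hours:
            suspicious.add(domain)
    return suspicious
```

A real system would score domains continuously and combine this signal with many other features, as the article notes; the point here is only that "many users, short window" is easy to express over a DNS query log.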
  11. KickassTorrents is dead; here are the top 5 torrent site alternatives to what was once the world's most popular torrent website. With the demise of KickassTorrents, the search for alternative torrent websites has already begun. Following the shutdown of KickassTorrents after its alleged founder Artem Vaulin was arrested in Poland, and an order by a federal court in Chicago that led to the seizure of all official KAT domain names including kickasstorrents.com, kastatic.com, thekat.tv, kat.cr, kickass.to, kat.ph and kickass.cr, the torrent community now needs to find the top alternatives to KickassTorrents, aka KAT. Since KickassTorrents has lost its position for good, Torrentz has taken its place at the top of the list of the most popular torrent websites of 2016. Below are the top 5 torrent site alternatives to KickassTorrents, based on various traffic reports and stats. The ranking is listed in order of the torrent websites' popularity on Alexa. Top 5 Most Popular Torrent Alternatives To KickassTorrents 1. Torrentz (Alexa rank-188) / Most popular alternative to KickassTorrents Torrentz has been the top BitTorrent meta-search engine for many years. Unlike other torrent websites, it does not host any torrent or magnet links; instead, it directs users to search results from other torrent websites. While Torrentz started this year ranked 3rd in the list of most popular torrent websites, it overtook The Pirate Bay and has now topped the chart after the demise of KickassTorrents. 2. The Pirate Bay (Alexa rank-329) / Most popular alternative to KickassTorrents The Pirate Bay started its journey back in 2003 and since then has faced several ups and downs, with its co-founders getting arrested and facing lawsuits worth millions of dollars. It made a comeback of sorts after the Swedish police raided its data center in Nacka and arrested one of the admins in 2014. 
The Pirate Bay is popularly known as HYDRA due to its various domains. It has switched domains from time to time to avoid the authorities. Currently, it is using its original .org domain. 3. ExtraTorrent (Alexa rank-449) / Most popular alternative to KickassTorrents ExtraTorrent is a well-known torrent website whose design and interface make it easy to search for content, with proper category-wise listings. ExtraTorrent has maintained its 3rd position in the list of the most popular torrent websites since 2015. The site is also home to the most popular torrent release groups, including the ETTV and ETRG release groups. 4. YTS aka YIFY (Alexa rank-690) / Most popular alternative to KickassTorrents yts.ag is not the original YTS or YIFY group. After the demise of the original YIFY/YTS, this website took its place and has since gained a lot of popularity with its unique site design. 5. RarBG (Alexa rank-768) / Most popular alternative to KickassTorrents Founded in 2008, RarBG gained popularity quickly and provides torrent files and magnet links to facilitate peer-to-peer file sharing. The site has been blocked in the UK, Saudi Arabia, Denmark, Portugal and several other countries, though it has still managed to reach 5th place among the most popular torrent websites. In addition to the above list of the most popular torrent website alternatives to KickassTorrents, a few newcomers have also tried to clone and take the place of the popular KickassTorrents. 
DXTorrent Kat.am Users are advised to block popups and not to provide their login credentials on any of the KickassTorrents clones. A member of the IsoHunt team has also launched an unofficial working mirror of KickassTorrents at kickasstorrents.website. A redditor has given his list of KickassTorrents alternatives, which is as follows: General purpose: https://kat.al/ https://zooqle.com https://www.torrentdownloads.me/ https://www.demonoid.pw/ http://www.btstorr.cc/ https://rarbg.to/top10 https://www.torlock.com/ https://extratorrent.cc/ https://www.torrentfunk.com/ https://www.limetorrents.cc/ https://1337x.to/ https://bit.no.com:43110/zerotorrent.bit TV show torrents: http://showrss.info/browse/all https://eztv.ag/ Remember, the above websites are also unverified by torrent users, and you should block popups while surfing them. Article source
  12. An adult website's private user details and browsing habits were found by white-hat security researchers. Security experts uncovered Pornhub's entire user database but didn't expose the dirty details, in favour of a $20,000 reward. The private details of visitors to Pornhub, the largest adult website in the world, could easily have been exposed after cybersecurity researchers discovered a glaring vulnerability in the site that revealed its entire user database and their browsing habits. Thankfully for those on that database, the discovery was made by white-hat hackers (who hack for good), who shared the information with Pornhub's developers in order to highlight the flaw and bolster security. In return, they were rewarded with a $20,000 bug bounty for their work. The team of computer experts, which included Ruslan Habalov, a computer science student, explained in a blog post that they found two use-after-free vulnerabilities in PHP's garbage collection algorithm. It said that by gaining remote code execution they would have been able to do anything from "dump the complete database of pornhub.com including all sensitive user information" to "track and observe user behaviour on the platform and leak the complete available source code of all sites hosted on the server." "We have taken the perspective of an advanced attacker with the full intent to get as deep as possible into the system, focusing on one main goal: gaining remote code execution capabilities. Thus, we left no stone unturned and attacked what Pornhub is built upon: PHP," said Habalov. Pornhub bug bounty Pornhub clearly has a vested interest in keeping its user base confidential, as well as those who upload videos to the adult site, which could expose performers' identities. It therefore runs a 'bug bounty', a reward programme that pays out up to $25,000 to anyone who spots a security fault in its system. The reported fault was hastily patched by the Pornhub team. 
It may seem counter-intuitive to invite experts to poke around its cybersecurity, but clearly the cash bounty was more appealing than the online panic that would have been caused if the data had been released. "As you can see, offering high bug bounties can motivate security researchers to find bugs in underlying software. This positively impacts other sites and unrelated services as well," said the white-hatters. Being one of the world's most visited websites, it is a constant target for malicious cyberattackers. One hacker claimed to have sold access to its servers for $1,000, although this turned out to be a hoax. Malware is another big problem: attackers try to get users to click on links that lead away to another site, which can install viruses to glean personal information, or ransomware, which will lock the whole computer unless a ransom is paid. Article source
  13. A Digital Citizens investigation has found that malware operators and content theft website owners are teaming up to target consumers – with an unexpected assist from U.S.-based tech firms. The research found that 1 in 3 content theft websites expose consumers to dangerous malware that can lead to serious issues such as ID theft, financial loss and ransomware. And disturbingly, U.S.-based tech firms – such as hosting companies that enable websites to remain up and running – play a vital role in enabling these websites to operate. Among content theft websites analyzed that are spreading malware, two tech firms – Hawk Host and CloudFlare – were the most often used by these rogue websites. The research, done in collaboration with RiskIQ, built on earlier research that found that malware operators and content theft website owners were collaborating to target and harm consumers. Ongoing research has found that 1 in 3 of these websites expose consumers to malware. And RiskIQ went undercover in December to scrutinize the DarkNet chat rooms where malware operators meet and negotiate prices for how to infect consumers. “Given that our research shows that 12 million Americans are exposed to malware through content theft websites, we are approaching a cyber epidemic that poses serious concerns about the long-term security of Americans’ computers,” said Tom Galvin, Executive Director of the Digital Citizens Alliance. “These rogue operators are using pirated movies and TV shows to lure consumers so they can infect their computers and steal their money, their identity or hold access to the computer for ransom,” said Galvin. “It’s time for government authorities – from the Federal Trade Commission to Congress to state attorneys general – to warn consumers about the risk content theft poses to their well-being.” Checking content theft websites for malware In this latest research, RiskIQ looked at hundreds of content theft websites and checked for malware. 
Here is what they found:
- Thirty percent of content theft websites exposed consumers to malware.
- The type of malware and technique was constantly changing. In some cases, rogue operators tricked consumers with a prompt to update a movie player or through an infected ad. In other cases, malware was downloaded simply by visiting a content theft website.
- RiskIQ has found that, based on its research, 12 million Americans are exposed on a monthly basis to malware from content theft websites.
- RiskIQ has found that consumers are 28 times more likely to be exposed to malware on content theft websites than on mainstream websites.
Once a machine is infected, hackers can access and steal personal and financial data. In some cases, malware enables hackers to install a Remote Access Trojan, allowing criminals to gain access to the video camera on a laptop and secretly tape the activities of unknowing people, usually young girls. In some cases, these videos are then resold online on DarkNet websites (and even, in some instances, are made available on the popular video website YouTube, which is owned by Google). Hawk Host and CloudFlare were the go-to tech firms for content theft websites spreading malware, according to the new research. Digital Citizens researchers reached out to both companies and received sharply different responses. Hawk Host reported that it conducted its own investigation, found that the websites violated its terms of service, and therefore suspended them. CloudFlare, in its response, said it leaves the removal of content to law enforcement. It added that in some instances, if it believes malware is being spread by a customer, it will warn site visitors. But Digital Citizens to date has seen no warning on the websites found to spread malware. Article source
  14. Researchers at Western Sydney University and King's College London have published a paper comparing various 'pirate' blocking mechanisms around the world. While all have shortcomings, the regime in the UK is highlighted as being open to potential future abuse. Faced with resilient websites and services that respond quickly to attacks by law enforcement, copyright holders around the world have sought more permanent ways of limiting the growth of pirate sites. Spreading rapidly through Europe to countries as far-flung as Australia, one of the most popular has become the website-blocking injunction. Initially obtained via complex and pioneering legal processes, in some countries these court orders are now fairly easily obtained. Researchers from Western Sydney University and King’s College London have published a paper examining regimes in three key regions, the EU, Australia and Singapore. They find that all display shortcomings which negatively affect rightsholders and in some cases even due process. “There are problems not only with the remedy itself but also in the manner in which the blocking injunction is implemented,” the researchers write. “The fact that multiple proceedings have to be filed in order to obtain a global level of enforcement and the possibility of blocking measures being circumvented are problems with the remedy itself.” In other words, copyright holder dreams of rendering content inaccessible everywhere will require injunctions in all countries against every major ISP. Even then, that won’t solve the problem of the workarounds (proxies, VPNs) that are constantly being made available. But while rightsholder problems are regularly documented, the researchers also draw attention to the manner in which injunctions are obtained, particularly in the UK where many hundreds of sites are blocked by ISPs. Their first concern focuses on how site operators are effectively excluded from proceedings. 
“Notably, neither of [the] domestic provisions under which the High Court typically exercises jurisdiction in granting blocking injunctions provide that the operators of the online locations sought to be blocked be made party to the application before the court,” the researchers write. “Thus, a common feature in the series of cases leading to blocking injunctions in the copyright context…is that only the ISPs who were called upon to block the target online locations were before court and not the operators of the online locations in issue.” While it is unlikely that many site operators would appear in court to contest an injunction, the researchers say it is a “cause for significant concern” that the legal mechanism effectively freezes them out of the process, especially since local ISPs no longer contest proceedings. “Most orders to-date have been granted after consideration of the applications on paper alone. Essentially, what this means is that the court is only possessed of the material submitted by the rights-holders, which go uncontested by the ISPs, leaving the interests of the operators of the target online locations completely unrepresented,” they write. But while noting the above, the researchers do make it clear that the court hears the merits of these cases. They also add that the sites being blocked are “patently infringing”. However, due to the way the legal process operates, it’s possible it could be abused. “What must be emphasised is that, in future, there may be instances where the operator of an online location has a plausible defence to a claim of IP infringement,” the researchers say. “In the circumstances where the court is only privy to the pleadings and documentary evidence submitted on behalf of a right-holder, the court’s discretion may become the subject of abuse, especially since the only other party before court (i.e. the ISP) shows no interest in protecting the operators of target online locations. 
Indeed, due to the manner in which the relevant EU directives are implemented, even site operators who can be identified and notified don’t have to be informed of proceedings under the law. “In the EU’s context at least….the implementation of the blocking injunction fall short of due process requirements,” the researchers note. “Although, in the UK….a safeguard was incorporated into blocking orders allowing the authors/owners of the content blocked to apply to court to have the injunction varied or set aside, the lack of a notice requirement under the law may render this safeguard, at best, useless.” Thus far there have been no signs that rightsholders have attempted to abuse the blocking process in the UK, but the almost complete lack of transparency after court orders are issued remains a cause for concern. Once an injunction is obtained, rightsholders are free to add additional domains to the UK’s blocklist, as and when they see fit. No ISP currently maintains a public list of the domains being blocked meaning that the whole process is shielded from public scrutiny. It’s believed that more than 1,000 sites might be blocked in this manner but few people have this information. A public list would go some way to inspiring confidence that the kind of abuse the researchers highlight doesn’t come to pass. Article source
  15. When talking about the attacks and threats users must face every day, people often highlight those that are more or less predictable, such as malicious archives sent as email attachments. Even though these threats are still very prevalent (e.g. in the different ransomware variants), cybercriminals also use many other attack vectors. Some of the most dangerous are those that involve scripts, as they are quite difficult for the average user to detect. How does a malicious script work? Malicious scripts are code fragments that, among other places, can be hidden in otherwise legitimate websites whose security has been compromised. They are perfect bait for victims, who tend not to be suspicious because they are visiting a trusted site. Cybercriminals can therefore execute malicious code on users' systems by exploiting one of the many vulnerabilities in the browser, in the operating system, in third-party applications or in the website itself that allowed them to place the exploits in the first place. If we look at recent examples, we can see that cybercriminals have been using well-known exploit kits for years to automate these infection processes. Their operation is relatively simple – they compromise the security of a legitimate website (or else create a malicious website and redirect users to it from other locations) and install one of the existing exploit kits. From then on, the detection and exploitation of vulnerabilities in the systems of users visiting that website can be automated. This can be seen in malvertising campaigns, where ads displayed on compromised websites have malicious code embedded in them. If accessed, they allow cybercriminals to gain control of a device and launch attacks unless it is protected by a quality computer security product. At this point, the malicious script (JavaScript, for example), which is usually obfuscated, is responsible for downloading and executing what is known as the payload. 
The latter is merely a piece of malicious code able to exploit these vulnerabilities and infect the user's system with the malware the cybercriminal has chosen. If the system is not protected, and all goes as planned by the criminals, all of this happens almost unnoticed by the user, and thus poses a considerable risk when surfing the web. The reason the execution of such code is accomplished automatically and without user intervention has much to do with the permissions granted during system configuration. Even today, the number of user accounts with administrator rights on Windows systems is overwhelming, and this is totally unnecessary in most everyday situations. This, together with the poor configuration of the security measures integrated into Windows itself, such as UAC, enables the vast majority of these malicious scripts to operate unimpeded on hundreds of thousands of computers every day. If users would only set this security feature to a medium/high level, many of these attacks could be avoided, provided users understand the importance of reading the alert windows displayed by the system and the security suite instead of making the mistake of closing them or, worse yet, clicking the "OK" button. How to protect yourself from malicious scripts To prevent these types of attacks, users must keep in mind that there is no 100% secure website on the internet, and consequently they need to take some measures to protect themselves. Updating the operating system and the applications most vulnerable to these attacks (mainly browsers, Flash Player and Java) is crucial to mitigating them. Nevertheless, sometimes this is not enough, and it is necessary to have a security solution able to detect these types of malicious scripts – not only those using JavaScript, but also those using PowerShell, etc. 
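Since the article notes that these scripts are "usually obfuscated", here is a toy illustration of one way a scanner might flag suspicious JavaScript: combine a check for dynamic-code constructs with a character-entropy measure, since packed or encoded payloads tend to have unusually dense character distributions. This is a sketch of my own, not how any real security product works, and the threshold is arbitrary:

```python
import math
import re

# Constructs commonly used to execute dynamically decoded code
SUSPICIOUS_CALLS = re.compile(r"\b(eval|unescape|atob|Function)\s*\(")

def shannon_entropy(text: str) -> float:
    """Bits per character of the text's character distribution."""
    if not text:
        return 0.0
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_obfuscated(script: str, entropy_threshold: float = 5.0) -> bool:
    """Toy heuristic: flag scripts that combine dynamic-code calls with a
    high-entropy body (typical of packed/encoded payloads). The threshold
    is illustrative, not tuned on real data."""
    return bool(SUSPICIOUS_CALLS.search(script)) and \
        shannon_entropy(script) > entropy_threshold
```

Real detection engines use far richer signals (emulation, reputation, behavioral analysis), but this shows why obfuscation is both a weapon for attackers and a signal for defenders.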
Conclusion We know that malicious scripts have been used by cybercriminals for years to spread all kinds of threats like Trojans, ransomware, and bots. However, at present there are adequate security measures available at least to mitigate the impact of these attacks. The only thing you need to do is set up the security measures that can protect you against these types of attacks and think before you click. Article source
  16. steven36

    The New Censorship

    How did Google become the internet’s censor and master manipulator, blocking access to millions of websites? Google, Inc., isn't just the world's biggest purveyor of information; it is also the world's biggest censor. The company maintains at least nine different blacklists that impact our lives, generally without input or authority from any outside advisory group, industry association or government agency. Google is not the only company suppressing content on the internet. Reddit has frequently been accused of banning postings on specific topics, and a recent report suggests that Facebook has been deleting conservative news stories from its newsfeed, a practice that might have a significant effect on public opinion – even on voting. Google, though, is currently the biggest bully on the block. When Google's employees or algorithms decide to block our access to information about a news item, political candidate or business, opinions and votes can shift, reputations can be ruined and businesses can crash and burn. Because online censorship is entirely unregulated at the moment, victims have little or no recourse when they have been harmed. Eventually, authorities will almost certainly have to step in, just as they did when credit bureaus were regulated in 1970. The alternative would be to allow a large corporation to wield an especially destructive kind of power that should be exercised with great restraint and should belong only to the public: the power to shame or exclude. If Google were just another mom-and-pop shop with a sign saying "we reserve the right to refuse service to anyone," that would be one thing. But as the golden gateway to all knowledge, Google has rapidly become an essential in people's lives – nearly as essential as air or water. We don't let public utilities make arbitrary and secretive decisions about denying people services; we shouldn't let Google do so either. Let's start with the most trivial blacklist and work our way up. 
I'll save the biggest and baddest – one the public knows virtually nothing about but that gives Google an almost obscene amount of power over our economic well-being – until last. 1. The autocomplete blacklist. This is a list of words and phrases that are excluded from the autocomplete feature in Google's search bar. The search bar instantly suggests multiple search options when you type words such as "democracy" or "watermelon," but it freezes when you type profanities, and, at times, it has frozen when people typed words like "torrent," "bisexual" and "penis." At this writing, it's freezing when I type "clitoris." The autocomplete blacklist can also be used to protect or discredit political candidates. As recently reported, at the moment autocomplete shows you "Ted" (for former GOP presidential candidate Ted Cruz) when you type "lying," but it will not show you "Hillary" when you type "crooked" – not even, on my computer, anyway, when you type "crooked hill." (The nicknames for Clinton and Cruz coined by Donald Trump, of course.) If you add the "a," so you've got "crooked hilla," you get the very odd suggestion "crooked Hillary Bernie." When you type "crooked" on Bing, "crooked Hillary" pops up instantly. Google's list of forbidden terms varies by region and individual, so "clitoris" might work for you. (Can you resist checking?) 2. The Google Maps blacklist. This list is a little more creepy, and if you are concerned about your privacy, it might be a good list to be on. The cameras of Google Earth and Google Maps have photographed your home for all to see. If you don't like that, "just move," Google's former CEO Eric Schmidt said. Google also maintains a list of properties it either blacks out or blurs out in its images. Some are probably military installations, some the residences of wealthy people, and some – well, who knows? Martian pre-invasion enclaves? Google doesn't say. 3. The YouTube blacklist. 
YouTube, which is owned by Google, allows users to flag inappropriate videos, at which point Google censors weigh in and sometimes remove them, but not, according to a recent report by Gizmodo, with any great consistency – except perhaps when it comes to politics. Consistent with the company's strong and open support for liberal political candidates, Google employees seem far more apt to ban politically conservative videos than liberal ones. In December 2015, singer Joyce Bartholomew sued YouTube for removing her openly pro-life music video, but I can find no instances of pro-choice music being removed. YouTube also sometimes acquiesces to the censorship demands of foreign governments. Most recently, in return for overturning a three-year ban on YouTube in Pakistan, it agreed to allow Pakistan's government to determine which videos it can and cannot post. 4. The Google account blacklist. A couple of years ago, Google consolidated a number of its products – Gmail, Google Docs, Google+, YouTube, Google Wallet and others – so you can access all of them through your one Google account. If you somehow violate Google's vague and intimidating terms of service agreement, you will join the ever-growing list of people who are shut out of their accounts, which means you'll lose access to all of these interconnected products. Because virtually no one has ever read this lengthy, legalistic agreement, however, people are shocked when they're shut out, in part because Google reserves the right to "stop providing Services to you … at any time." And because Google, one of the largest and richest companies in the world, has no customer service department, getting reinstated can be difficult. (Given, however, that all of these services gather personal information about you to sell to advertisers, losing one's Google account has been judged by some to be a blessing in disguise.) 5. The Google News blacklist. 
If a librarian were caught trashing all the liberal newspapers before people could read them, he or she might get in a heap o' trouble. What happens when most of the librarians in the world have been replaced by a single company? Google is now the largest news aggregator in the world, tracking tens of thousands of news sources in more than thirty languages and recently adding thousands of small, local news sources to its inventory. It also selectively bans news sources as it pleases. In 2006, Google was accused of excluding conservative news sources that generated stories critical of Islam, and the company has also been accused of banning individual columnists and competing companies from its news feed. In December 2014, facing a new law in Spain that would have charged Google for scraping content from Spanish news sources (which, after all, have to pay to prepare their news), Google suddenly withdrew its news service from Spain, which led to an immediate drop in traffic to Spanish news stories. That drop in traffic is the problem: When a large aggregator bans you from its service, fewer people find your news stories, which means opinions will shift away from those you support. Selective blacklisting of news sources is a powerful way of promoting a political, religious or moral agenda, with no one the wiser.

6. The Google AdWords blacklist. Now things get creepier. More than 70 percent of Google's $80 billion in annual revenue comes from its AdWords advertising service, which it implemented in 2000 by infringing on a similar system already patented by Overture Services. The way it works is simple: Businesses worldwide bid on the right to use certain keywords in short text ads that link to their websites (those text ads are the AdWords); when people click on the links, those businesses pay Google. These ads appear on Google.com and other Google websites and are also interwoven into the content of more than a million non-Google websites – Google's "Display Network." 
The problem here is that if a Google executive decides your business or industry doesn't meet its moral standards, it bans you from AdWords; these days, with Google's reach so large, that can quickly put you out of business. In 2011, Google blacklisted an Irish political group that defended sex workers but which did not provide them; after a protest, the company eventually backed down. In May 2016, Google blacklisted an entire industry – companies providing high-interest "payday" loans. As always, the company billed this dramatic move as an exercise in social responsibility, failing to note that it is a major investor in LendUp.com, which is in the same industry; if Google fails to blacklist LendUp (it's too early to tell), the industry ban might turn out to have been more of an anticompetitive move than one of conscience. That kind of hypocrisy has turned up before in AdWords activities. Whereas Google takes a moral stand, for example, in banning ads from companies promising quick weight loss, in 2011, Google forfeited a whopping $500 million to the U.S. Justice Department for having knowingly allowed Canadian drug companies to sell drugs illegally in the U.S. for years through the AdWords system, and several state attorneys general believe that Google has continued to engage in similar practices since 2011; investigations are ongoing. 7. The Google AdSense blacklist. If your website has been approved by AdWords, you are eligible to sign up for Google AdSense, a system in which Google places ads for various products and services on your website. When people click on those ads, Google pays you. If you are good at driving traffic to your website, you can make millions of dollars a year running AdSense ads – all without having any products or services of your own. Meanwhile, Google makes a net profit by charging the companies behind the ads for bringing them customers; this accounts for about 18 percent of Google's income. 
Here, too, there is scandal: In April 2014, in two posts on PasteBin.com, someone claiming to be a former Google employee working in its AdSense department alleged the department engaged in a regular practice of dumping AdSense customers just before Google was scheduled to pay them. To this day, no one knows whether the person behind the posts was legit, but one thing is clear: Since that time, real lawsuits filed by real companies have, according to WebProNews, been "piling up" against Google, alleging the companies were unaccountably dumped at the last minute by AdSense just before large payments were due, in some cases payments as high as $500,000.

8. The search engine blacklist. Google's ubiquitous search engine has indeed become the gateway to virtually all information, handling 90 percent of search in most countries. It dominates search because its index is so large: Google indexes more than 45 billion web pages; its next-biggest competitor, Microsoft's Bing, indexes a mere 14 billion, which helps to explain the poor quality of Bing's search results. Google's dominance in search is why businesses large and small live in constant "fear of Google," as Mathias Dopfner, CEO of Axel Springer, the largest publishing conglomerate in Europe, put it in an open letter to Eric Schmidt in 2014. According to Dopfner, when Google made one of its frequent adjustments to its search algorithm, one of his company's subsidiaries dropped dramatically in the search rankings and lost 70 percent of its traffic within a few days. Even worse than the vagaries of the adjustments, however, are the dire consequences that follow when Google employees somehow conclude you have violated their "guidelines": You either get banished to the rarely visited netherworld of search pages beyond the first page (90 percent of all clicks go to links on that first page) or completely removed from the index. In 2011, Google took a "manual action" of a "corrective" nature against retailer J.C. 
Penney – punishment for Penney's alleged use of a legal SEO technique called "link building" that many companies employ to try to boost their rankings in Google's search results. Penney was demoted 60 positions or more in the rankings. Search ranking manipulations of this sort don't just ruin businesses; they also affect people's opinions, attitudes, beliefs and behavior, as my research on the Search Engine Manipulation Effect has demonstrated. Fortunately, definitive information about Google's punishment programs is likely to turn up over the next year or two thanks to legal challenges the company is facing. In 2014, a Florida company called e-Ventures Worldwide filed a lawsuit against Google for "completely removing almost every website" associated with the company from its search rankings. When the company's lawyers tried to get internal documents relevant to Google's actions through typical litigation discovery procedures, Google refused to comply. In July 2015, a judge ruled that Google had to honor e-Ventures' discovery requests, and that case is now moving forward. More significantly, in April 2016, the Fifth Circuit Court of Appeals ruled that the attorney general of Mississippi – supported in his efforts by the attorneys general of 40 other states – has the right to proceed with broad discovery requests in his own investigations into Google's secretive and often arbitrary practices. This brings me, at last, to the biggest and potentially most dangerous of Google's blacklists – which Google calls its "quarantine" list.

9. The quarantine list. To get a sense of the scale of this list, I find it helpful to think about an old movie – the classic 1951 film "The Day the Earth Stood Still," which starred a huge metal robot named Gort. He had laser-weapon eyes, zapped terrified humans into oblivion and had the power to destroy the world. 
Klaatu, Gort's alien master, was trying to deliver an important message to earthlings, but they kept shooting him before he could. Finally, to get the world's attention, Klaatu demonstrated the enormous power of the alien races he represented by shutting down – at noon New York time – all of the electricity on earth for exactly 30 minutes. The earth stood still. Substitute "ogle" for "rt," and you get "Google," which is every bit as powerful as Gort but with a much better public relations department – so good, in fact, that you are probably unaware that on Jan. 31, 2009, Google blocked access to virtually the entire internet. And, as if not to be outdone by a 1951 science fiction movie, it did so for 40 minutes. Impossible, you say. Why would do-no-evil Google do such an apocalyptic thing, and, for that matter, how, technically, could a single company block access to more than 100 million websites? The answer has to do with the dark and murky world of website blacklists – ever-changing lists of websites that contain malicious software that might infect or damage people's computers. There are many such lists – even tools, such as blacklistalert.org, that scan multiple blacklists to see if your IP address is on any of them. Some lists are kind of mickey-mouse – repositories where people submit the names or IP addresses of suspect sites. Others, usually maintained by security companies that help protect other companies, are more high-tech, relying on "crawlers" – computer programs that continuously comb the internet. But the best and longest list of suspect websites is Google's, launched in May 2007. Because Google is crawling the web more extensively than anyone else, it is also in the best position to find malicious websites. In 2012, Google acknowledged that each and every day it adds about 9,500 new websites to its quarantine list and displays malware warnings on the answers it gives to between 12 and 14 million search queries. 
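Tools like blacklistalert.org work against DNS-based blacklists (DNSBLs), which follow a simple convention: reverse the IPv4 octets, append the list's zone, and see whether the resulting name resolves. A minimal sketch of that convention, assuming an IPv4 address (the Spamhaus ZEN zone is a real list; 192.0.2.1 is a reserved documentation address):

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the reversed-octet hostname used to query a DNS-based blacklist."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected an IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip: str, zone: str) -> bool:
    """An address is listed if the blacklist zone resolves the reversed name."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

print(dnsbl_query_name("192.0.2.1", "zen.spamhaus.org"))
# 1.2.0.192.zen.spamhaus.org
```

A multi-blacklist scanner of the kind the article mentions simply repeats this lookup across many zones.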
It won't reveal the exact number of websites on the list, but it is certainly in the millions on any given day. In 2011, Google blocked an entire subdomain, co.cc, which alone contained 11 million websites, justifying its action by claiming that most of the websites in that domain appeared to be "spammy." According to Matt Cutts, still the leader of Google's web spam team, the company "reserves the right" to take such action when it deems it necessary. (The right? Who gave Google that right?) And that's nothing: According to The Guardian, on Saturday, Jan. 31, 2009, at 2:40 pm GMT, Google blocked the entire internet for those impressive 40 minutes, supposedly, said the company, because of "human error" by its employees. It would have been 6:40 am in Mountain View, California, where Google is headquartered. Was this time chosen because it is one of the few hours of the week when all of the world's stock markets are closed? Could this have been another of the many pranks for which Google employees are so famous? In 2008, Google invited the public to submit applications to join the "first permanent human colony on Mars." Sorry, Marsophiles; it was just a prank. When Google's search engine shows you a search result for a site it has quarantined, you see warnings such as, "The site ahead contains malware" or "This site may harm your computer" on the search result. That's useful information if that website actually contains malware, either because the website was set up by bad guys or because a legitimate site was infected with malware by hackers. But Google's crawlers often make mistakes, blacklisting websites that have merely been "hijacked," which means the website itself isn't dangerous but merely that accessing it through the search engine will forward you to a malicious site. My own website, http://drrobertepstein.com, was hijacked in this way in early 2012. 
Accessing the website directly wasn't dangerous, but trying to access it through the Google search engine forwarded users to a malicious website in Nigeria. When this happens, Google not only warns you about the infected website on its search engine (which makes sense), it also blocks you from accessing the website directly through multiple browsers – even non-Google browsers. (Hmm. Now that's odd. I'll get back to that point shortly.) The mistakes are just one problem. The bigger problem is that even though it takes only a fraction of a second for a crawler to list you, after your site has been cleaned up Google's crawlers sometimes take days or even weeks to delist you – long enough to threaten the existence of some businesses. This is quite bizarre considering how rapidly automated online systems operate these days. Within seconds after you pay for a plane ticket online, your seat is booked, your credit card is charged, your receipt is displayed and a confirmation email shows up in your inbox – a complex series of events involving multiple computers controlled by at least three or four separate companies. But when you inform Google's automated blacklist system that your website is now clean, you are simply advised to check back occasionally to see if any action has been taken. To get delisted after your website has been repaired, you either have to struggle with the company's online Webmaster tools, which are far from friendly, or you have to hire a security expert to do so – typically for a fee ranging between $1,000 and $10,000. No expert, however, can speed up the mysterious delisting process; the best he or she can do is set it in motion. So far, all I've told you is that Google's crawlers scan the internet, sometimes find what appear to be suspect websites and put those websites on a quarantine list. That information is then conveyed to users through the search engine. 
So far so good, except of course for the mistakes and the delisting problem; one might even say that Google is performing a public service, which is how some people who are familiar with the quarantine list defend it. But I also mentioned that Google somehow blocks people from accessing websites directly through multiple browsers. How on earth could it do that? How could Google block you when you are trying to access a website using Safari, an Apple product, or Firefox, a browser maintained by Mozilla, the self-proclaimed "nonprofit defender of the free and open internet"? The key here is browsers. No browser maker wants to send you to a malicious website, and because Google has the best blacklist, major browsers such as Safari and Firefox – and Chrome, of course, Google's own browser, as well as browsers that load through Android, Google's mobile operating system – check Google's quarantine list before they send you to a website. (In November 2014, Mozilla announced it will no longer list Google as its default search engine, but it also disclosed that it will continue to rely on Google's quarantine list to screen users' search requests.) If the site has been quarantined by Google, you see one of those big, scary images that say things like "Get me out of here!" or "Reported attack site!" At this point, given the default security settings on most browsers, most people will find it impossible to visit the site – but who would want to? If the site is not on Google's quarantine list, you are sent on your way. OK, that explains how Google blocks you even when you're using a non-Google browser, but why do they block you? In other words, how does blocking you feed the ravenous advertising machine – the sine qua non of Google's existence? Have you figured it out yet? The scam is as simple as it is brilliant: When a browser queries Google's quarantine list, it has just shared information with Google. 
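For what it's worth, in the current version of Google's Safe Browsing API (v4), the browser's query does not send the raw URL: the client canonicalizes the URL into host/path expressions, hashes each with SHA-256, and compares a short prefix of the hash against a locally cached copy of the list, contacting Google for full hashes only when a prefix matches. A minimal sketch of the prefix step, with the canonicalization simplified for illustration:

```python
import hashlib

def hash_prefix(url_expression: str, prefix_bytes: int = 4) -> bytes:
    """SHA-256 the canonical URL expression and keep only the leading
    bytes, as Safe Browsing clients do before consulting the local list."""
    return hashlib.sha256(url_expression.encode("utf-8")).digest()[:prefix_bytes]

# A real client derives several host/path expressions from a single URL;
# these are illustrative examples of that expansion:
expressions = ["example.com/", "example.com/path", "www.example.com/"]
print([hash_prefix(e).hex() for e in expressions])
```

Only when a local prefix matches does the client ask Google for the full hashes behind that prefix, which is the point at which information about the lookup reaches Google's servers.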
With Chrome and Android, you are always giving up information to Google, but you are also doing so even if you are using non-Google browsers. That is where the money is – more information about search activity kindly provided by competing browser companies. How much information is shared will depend on the particular deal the browser company has with Google. In a maximum information deal, Google will learn the identity of the user; in a minimum information deal, Google will still learn which websites people want to visit – valuable data when one is in the business of ranking websites. Google can also charge fees for access to its quarantine list, of course, but that's not where the real gold is. Chrome, Android, Firefox and Safari currently carry about 92 percent of all browser traffic in the U.S. – 74 percent worldwide – and these numbers are increasing. As of this writing, that means Google is regularly collecting information through its quarantine list from more than 2.5 billion people. Given the recent pact between Microsoft and Google, in coming months we might learn that Microsoft – both to save money and to improve its services – has also started using Google's quarantine list in place of its own much smaller list; this would further increase the volume of information Google is receiving. To put this another way, Google has grown, and is still growing, on the backs of some of its competitors, with end users oblivious to Google's antics – as usual. It is yet another example of what I have called "Google's Dance" – the remarkable way in which Google puts a false and friendly public face on activities that serve only one purpose for the company: increasing profit. On the surface, Google's quarantine list is yet another way Google helps us, free of charge, breeze through our day safe and well-informed. Beneath the surface, that list is yet another way Google accumulates more information about us to sell to advertisers. 
You may disagree, but in my view Google's blacklisting practices put the company into the role of thuggish internet cop – a role that was never authorized by any government, nonprofit organization or industry association. It is as if the biggest bully in town suddenly put on a badge and started patrolling, shuttering businesses as it pleased, while also secretly peeping into windows, taking photos and selling them to the highest bidder. Consider: Heading into the holiday season in late 2013, an online handbag business suffered a 50 percent drop in business because of blacklisting. In 2009, it took an eco-friendly pest control company 60 days to leap the hurdles required to remove Google's warnings, long enough to nearly go broke. And sometimes the blacklisting process appears to be personal: In May 2013, the highly opinionated PC Magazine columnist John Dvorak wondered "When Did Google Become the Internet Police?" after both his website and podcast site were blacklisted. He also ran into the delisting problem: "It's funny," he wrote, "how the site can be blacklisted in a millisecond by an analysis but I have to wait forever to get cleared by the same analysis doing the same scan. Why is that?" Could Google really be arrogant enough to mess with a prominent journalist? According to CNN, in 2005 Google "blacklisted all CNET reporters for a year after the popular technology news website published personal information about one of Google's founders" – Eric Schmidt – "in a story about growing privacy concerns." The company declined to comment on CNN's story. Google's mysterious and self-serving practice of blacklisting is one of many reasons Google should be regulated, just as phone companies and credit bureaus are. The E.U.'s recent antitrust actions against Google, the recently leaked FTC staff report about Google's biased search rankings, President Obama's call for regulating internet service providers – all have merit, but they overlook another danger. 
No one company, which is accountable to its shareholders but not to the general public, should have the power to instantly put another company out of business or block access to any website in the world. How frequently Google acts irresponsibly is beside the point; it has the ability to do so, which means that in a matter of seconds any of Google's 37,000 employees with the right passwords or skills could laser a business or political candidate into oblivion or even freeze much of the world's economy. Some degree of censorship and blacklisting is probably necessary; I am not disputing that. But the suppression of information on the internet needs to be managed by, or at least subject to the regulations of, responsible public officials, with every aspect of their operations transparent to all. Updated on June 23, 2016: Readers have called my attention to a 10th Google blacklist, which the company applies to its shopping service. In 2012, the shopping service banned the sale of weapons-related items, including some items that could still be sold through AdWords. Google's shopping blacklisting policy, while reasonably banning the sale of counterfeit and copyrighted goods, also includes a catch-all category: Google can ban the sale of any product or service its employees deem to be "offensive or inappropriate." No means of recourse is stated. The Source.
  17. Pretty much every top website – in retail, financial services, consumer services, OTA (Online Trust Alliance) membership, news and media, and top US government agencies – is vulnerable to advanced bots, new research says. Bot detection and mitigation company Distil Networks analyzed 1,000 top websites in these verticals and how they perform against crude, simple, evasive and advanced bots. All of the verticals performed quite well against crude bots (75 percent in consumer services, 70 percent in government, 65 percent in financial services, 64 percent in news and media, 78 percent in retail and 67 percent among OTA members), but against advanced bots the best detection rate found was just one percent. Bots are used, as Distil says, for competitive data mining, online fraud, account hijacking, data theft, vulnerability scans, spam, man-in-the-middle attacks, etc. "Bots, especially Advanced Persistent Bots (APBs) are evolving in sophistication because of their polymorphic nature, and quick deployment to access sensitive information and reap monetary benefits. Our 2016 Bad Bot Landscape Report found over 88 percent of all bad bot traffic last year was made by APBs, bots that mimic human behavior", says Rami Essaid, CEO and co-founder of Distil Networks. "OTA’s Trust Audit continues to set the bar for best practices, including evaluation of bot risk. We support OTA’s efforts to promote best practices in the industry and are troubled to find that most companies are failing to keep their defenses up to the sophistication level of today’s advanced and evasive bots. This is concerning, as bots can easily paralyze website infrastructure, pirate entire online directories, and destroy a company’s competitive advantage". 
Detection rates by vertical

Vertical             Crude        Simple       Evasive       Advanced
Consumer Services    75 percent   18 percent   4 percent     1 percent
Government           70 percent   7 percent    0 percent     0 percent
Financial Services   65 percent   12 percent   0 percent     0 percent
News and Media       64 percent   7 percent    0.09 percent  0.09 percent
Retailers            78 percent   11 percent   1.6 percent   0.08 percent
OTA Members          67 percent   13 percent   1 percent     1 percent

Article source
  18. About 7% of the 5,000 most popular websites block visitors using an adblocker, according to a report (PDF) written by researchers from several universities. Researchers from the University of Cambridge, Stony Brook, Berkeley, Queen Mary and the International Computer Science Institute analyzed how many websites currently employ anti-adblock technology. Adblock software continues to increase in popularity as users feel it protects their privacy, enhances their internet experience and protects them against malware. Website publishers are concerned about ad blockers, as most of them rely on advertising revenues. As a result, it is no longer possible to visit some sites when using an adblocker. Users visiting such a site receive a warning asking them to disable their adblock software or to whitelist the site before they can continue. The researchers found that 6.7% of the top 5,000 websites, as ranked by Alexa, use this kind of anti-adblock technology. In many cases the sites use anti-adblock scripts currently available from 12 different vendors. Some websites have also developed their own in-house anti-adblock technology. The anti-adblock software isn’t watertight, however: 6 of the 12 anti-adblock scripts have already been circumvented by popular adblocking tools like AdBlock Plus, Ghostery and Privacy Badger, which means those scripts can no longer reliably detect users with adblock software installed. “It is hard to say how many levels deeper the ad blocking arms race might go. While anti-ad blocking may provide temporary relief to publishers, it is essentially band-aid solution to mask a deeper issue, the disequilibrium between ads (and, particularly, their behavioural tracking back-end) and information. Any long term solution must address the reasons that brought users to adblockers in the first place”, the researchers conclude. Article source
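One way to measure anti-adblock deployment, as studies like this one do, is to match page source against known fingerprints of the vendors' scripts. A rough sketch under that assumption; the vendor names and patterns below are illustrative placeholders, not the 12 vendors identified in the report:

```python
import re

# Illustrative fingerprints; a real study would use the actual vendor script names.
ANTI_ADBLOCK_PATTERNS = {
    "vendor-a": re.compile(r"advertisement\.js", re.I),  # common bait filename
    "vendor-b": re.compile(r"blockadblock", re.I),       # a known open-source script
}

def detect_anti_adblock(html: str) -> list:
    """Return the vendors whose fingerprint appears in the page source."""
    return sorted(name for name, pattern in ANTI_ADBLOCK_PATTERNS.items()
                  if pattern.search(html))

page = '<script src="/js/BlockAdBlock.min.js"></script>'
print(detect_anti_adblock(page))  # ['vendor-b']
```

Crawling the top 5,000 sites and applying a matcher like this per page is essentially the measurement methodology described; in-house anti-adblock code, which has no shared fingerprint, requires manual inspection instead.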
  19. As copyright holders try to make copyrighted content harder to find, many send infringement reports to Google. According to the company's Transparency Report the top ten targets are focused on file-hosting, site unblocking, and music downloads. Google has received a staggering 70 million complaints about them - in the last year alone. As 2016 nears its midway point the rhetoric over DMCA takedowns is more fiery than ever. Aware that a favorable change in the law might be possible sometime in the future, copyright holders are pushing the Copyright Office in the United States to consider a ‘takedown, staydown’ system. This proposal, should it ever become enshrined in law, would enable copyright holders to issue a DMCA notice to a site for a particular piece of content with the expectation that it will never appear again on that same platform. Opposition to such a regime is notable but it’s not difficult to see why copyright holders are so keen to have it implemented. In the meantime they are stuck with the existing system and their efforts are clearly illustrated in Google’s Transparency Report. During the past month alone 7,015 copyright holders and 3,176 reporting organizations sent requests for 87 million URLs to be removed from Google’s indexes. It’s a huge amount by any standard. What is interesting is how a relatively small number of domains account for a disproportionate number of takedowns. For instance last month two sites – file-hosting site 4shared and MP3 site GoEar – accounted for close to 11 million takedowns. That means that it took complaints against another 77,855 domains to make up the remaining 76 million URLs. When looking at the figures for the last year a similar picture emerges, with a small number of domains attracting disproportionate levels of complaints. Interestingly, those thinking that The Pirate Bay or KickassTorrents would get the top slots will be disappointed. 
Those sites pale into insignificance when compared to the front runners. Also of interest is the kind of site being targeted. In first position is 4shared, a file-hosting site ranked 434th in the world for traffic by Alexa. While that represents huge amounts of traffic, it’s the site’s popularity in Brazil that is causing it to receive a disproportionate number of notices. 4shared is ranked the 53rd most popular domain there, something that hasn’t gone unnoticed by local anti-piracy outfit APDIF who since 2013 have filed 17 million complaints in response. MP3 indexing site FlowXD takes second place with an ‘impressive’ 8.2 million takedowns. Again APDIF has been sending the lion’s share of the notices, presumably due to the site’s popularity in Brazil and elsewhere in South America. In a close third with almost 7.7 million takedowns in the past twelve months is Chomikuj, a Cyprus-incorporated file-hosting site that is the 34th most-visited site in Poland. Overall, the UK’s BPI has shown the most interest in the site, having sent more than 4.5 million notices since 2011. More recently, however, a much wider spread of copyright holders have targeted the site. Perhaps unsurprisingly a website unblocking service is also up there with the front runners. Unblocksit.es has had almost 7 million complaints filed with Google, mostly by anti-piracy outfit Rivendell who boast being the “world Leader on Google’s transparency report for removal illegal links.” In fifth and sixth place respectively sit file-hosting giants Rapidgator and Uploaded with around 6.5 million complaints each. A wide range of copyright holders focus on both sites with an emphasis on the music sector. Rapidgator and Uploaded have experienced a decline in traffic since the start of the year but neither are showing signs of going away any time soon. From there, all remaining sites in the top 10 are dedicated to offering free music. None seem particularly popular with English-speaking users. 
GoEar gets quite a lot of Spanish eyes and for some reason Gooveo is rocking it in Guatemala. Viciomp3 and Esamusica both appear to be defunct although copyright holders are continuing to send complaints to Google. Article source
  20. A fresh Angler exploit kit campaign is targeting Sexting Forum and 18 other sites. According to Cyphort Labs, the initiative uses the bootstrapcdn.org redirector and sends users to malicious payloads hosted on .co.uk websites. “This is not malvertising, instead the websites are compromised directly (likely via FTP password theft) and redirect using an embedded SCRIPT tag,” explained Cyphort’s Nick Bilogorskiy, in a blog. The drive-by exploits are affecting a wide variety of sites, including a Smith & Wesson discussion forum, an “Army Recognition” site, and a leading credit union in Houston (JSC FCU has been around for 50 years and has grown to serve 123,000+ members and 2,000+ Community Business Partners (CBPs) throughout the greater Houston area). There’s also a site that offers bloggers visitor stats and the like, and UltraVNC.com, the online presence of one of the most popular remote desktop programs for remote administration, similar to TeamViewer, pcAnywhere or LogMeIn. The sites unfortunately have a wide reach. In the case of the UltraVNC website, many technical users go there to download the VNC client to troubleshoot their friends’, family’s or clients’ PCs. “In computing, virtual network computing (VNC) is a graphical desktop sharing system that transmits the keyboard and mouse events from one computer to another, relaying the graphical screen updates back in the other direction, over a network,” said Bilogorskiy. “With over a billion copies, VNC is a de facto standard for remote control. VNC has been used widely in hundreds of different products and applications, from helpdesks to virtualization.” “As the website seems to be controlled by the attackers, it is possible that VNC software has been replaced by a trojan as well,” Bilogorskiy warned. The campaign started on May 9, about a week ago, and is ongoing. It’s also just the latest in a series of drive-bys. 
“It is of interest to note that the use of .co.uk domains by malicious actors increased by ~150% year-over-year in 2016,” he said. “We believe that rather than registering new .co.uk domains, attackers likely compromised the co.uk registrars customers' accounts to add additional subdomain DNS pointers. Example: specialist-foods.co.uk is a legitimate commercial website, zunickender.specialist-foods.co.uk is a hacker subdomain pointing to Angler EK.” Article source
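A compromise of this shape – an injected SCRIPT tag whose source sits on an attacker-added subdomain of an otherwise legitimate domain – can be screened for roughly as follows. The allowlist is an assumption (a site operator would supply the subdomains actually provisioned); the hostnames come from the example in the article:

```python
import re
from urllib.parse import urlparse

BASE_DOMAIN = "specialist-foods.co.uk"
# Subdomains the operator actually provisioned (assumed for illustration).
KNOWN_HOSTS = {"specialist-foods.co.uk", "www.specialist-foods.co.uk"}

SCRIPT_SRC = re.compile(r'<script[^>]+src="([^"]+)"', re.I)

def suspicious_script_hosts(html: str) -> list:
    """Flag script sources served from unexpected subdomains of the base domain."""
    flagged = []
    for src in SCRIPT_SRC.findall(html):
        host = urlparse(src).hostname or ""
        if host.endswith("." + BASE_DOMAIN) and host not in KNOWN_HOSTS:
            flagged.append(host)
    return flagged

page = '<script src="http://zunickender.specialist-foods.co.uk/x.js"></script>'
print(suspicious_script_hosts(page))  # ['zunickender.specialist-foods.co.uk']
```

Because the attackers add DNS pointers rather than new registrations, auditing the zone for unrecognized subdomain records catches the same compromise from the registrar side.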
  21. A senior company executive said the attack was "one of the largest and most sophisticated," affecting millions across the US, Europe, and Asia. Many popular websites went down Monday after a domain name server (DNS) provider was hammered with a huge web attack, disrupting access to high-profile websites for millions of users in the US and Europe. NS1, a DNS provider and networking company, was repeatedly hit throughout Monday by unnamed attackers, but recovered towards the end of the working day. "We had performance degradation in several markets with the US and Europe seeing the greatest impact," said Jonathan Lewis, vice-president of product at NS1, in an emailed statement. Lewis declined to say who was behind the attack, but described it as a "complex and evolving attack spanning a number of techniques." The New York-based company says it serves high-traffic websites such as Yelp and the stick-figure webcomic XKCD. Imgur, a customer of NS1, acknowledged in a tweet that its European users were impacted by the outage. OneLogin, a secure identity management company, also said its users were experiencing issues during the day. Many users were unable to access their sites and services on Monday. The attack started at about 10:45am in New York, according to the company's status page. The company said the "evolved" attack, a distributed denial-of-service (DDoS) attack, affected almost every region around the globe -- including Asia and the Americas. By mid-afternoon, the company was able to stabilize its systems after several configuration changes to mitigate the attack, describing it as a "defensive posture." But the attack persisted throughout the day, with further disruption hitting networks and end users into the late evening in Europe. 
Lewis said the attack was "one of the largest and most sophisticated we have ever observed," with "many tens of millions of packets hammering our network every second, complex migration of traffic across the network, and a series of precise strategies for targeting our systems." NS1 did not give specific figures for the size of the attack. However, as we've noted in previous coverage, industry sources are aware of attacks approaching 600 Gbps that have been detected and privately reported. Attacks that big are rare and understood to be difficult to carry out, but they aren't impossible. As far as we know, no group or malicious actor has publicly taken credit for the attack. Article source
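The "tens of millions of packets every second" figure gives a sense of how volumetric attacks like this are detected. A minimal sketch of a sliding-window packets-per-second monitor follows; the window size and threshold are illustrative assumptions, not NS1's actual defenses, which the article does not describe:

```python
from collections import deque

class RateMonitor:
    """Sliding-window packets-per-second monitor.

    Alerts when the packet count observed over the last `window_secs`
    seconds exceeds `pps_threshold`. Both parameters are hypothetical.
    """
    def __init__(self, window_secs=1.0, pps_threshold=10_000_000):
        self.window = window_secs
        self.threshold = pps_threshold
        self.samples = deque()   # (timestamp, packet_count) pairs
        self.total = 0

    def record(self, now, packets):
        self.samples.append((now, packets))
        self.total += packets
        # Drop samples that have aged out of the window.
        while self.samples and now - self.samples[0][0] > self.window:
            _, old = self.samples.popleft()
            self.total -= old

    def under_attack(self):
        return self.total / self.window > self.threshold

mon = RateMonitor()
mon.record(0.0, 2_000_000)    # normal traffic level
print(mon.under_attack())     # → False
mon.record(0.5, 40_000_000)   # "tens of millions of packets" arriving
print(mon.under_attack())     # → True
```

Real mitigation goes well beyond detection -- traffic scrubbing, anycast re-routing, and upstream filtering -- but a rate monitor like this is typically the trigger for those defensive posture changes.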
  22. Has Google Admitted that It Is a Source of Malware? — Yes indeed! Google.com has become a potent source of malware infections. This hasn't been claimed by any security expert or security firm, but by Google itself. "Some pages on Google.com contain deceptive content right now" -- this was the status we encountered when we checked the Safe Browsing Site Status in Google's transparency report about two days ago. Digging into the site's safety details, we came across four warnings. One read: "some pages on this website redirect visitors to dangerous websites that install malware on visitors' computers." Another: "Attackers on this site might try to trick you to download software or steal your information (for example passwords, messages, or credit card information)." It is quite interesting that Google had flagged itself as partially dangerous. This status was posted two days ago. Later on, Google modified the status, replacing "dangerous" with "not dangerous," and the warning about possible phishing URLs was also deleted. The next morning (yesterday), the status again flagged the site as partially dangerous. Google may have fixed some of the phishing-link issues and changed the status accordingly, only for other problems of a similar nature to sprout, prompting Google to switch back to its old status. It is understandable that malicious links circulate on the world's most used search engine: its index is massive, and some links leading to malware are perhaps inevitable. But one wonders how many of these malicious links there actually are -- what is their actual percentage? Only Google knows. At the time of publishing this article, Google was marked as a safe site; Tumblr, however, was marked as a potentially unsafe site to visit. Article source
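The Safe Browsing status the author checked by hand on the transparency report page can also be queried programmatically via Google's Safe Browsing Lookup API (v4). A minimal sketch of the request body follows -- the client name and URL are placeholders, and actually sending the request requires network access and a valid API key:

```python
import json

API_ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def build_lookup_request(urls, client_id="example-checker"):
    """Build the JSON body for a Safe Browsing v4 threatMatches:find call."""
    return {
        "client": {"clientId": client_id, "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_lookup_request(["http://example.com/"])
print(json.dumps(body, indent=2))
# POST this body to API_ENDPOINT + "?key=<YOUR_API_KEY>";
# an empty {} response means no threat match was found.
```

This is how a site operator could monitor their own domains for the kind of "partially dangerous" flag described above, rather than checking the transparency report manually.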
  23. G Data detects 2,098,062 malware strains in H2 2015

German security firm G DATA published yesterday its PC Malware Report, an analysis of the threat landscape for the second half of 2015. According to the report, the company's security products detected 2,098,062 new malware variants in the second half of last year, bringing the 2015 total to more than 5.14 million -- down from 2014's total of nearly 6 million malware variants.

Gambling sites are the most likely to host malicious content

The most popular malware variant during the second half of 2015 was an adware program called Script.Adware.DealPly.G, seen in 22.9 percent of all malware detections. Most of this malware was distributed via spam email, but also via so-called "evil" websites. Based on the evil websites' server locations, 57 percent were hosted in the US. This should come as no surprise, since the US also harbors more than half of the world's data centers. Based on the evil websites' domain of activity, G DATA experts saw a clear-cut trend of using gambling sites to spread malware. These types of sites were the source of 18.7 percent of all attacks, followed by blogs at 12.9 percent, and technology and telecommunications sites at 10.8 percent.

Dridex becomes a behemoth in H2 2015

Out of all the malware detected in 2015, banking trojans, even if not the predominant threat, were among the most dangerous. In the second half of 2015, Dridex massively expanded its operations, taking up a huge piece of the market, with only Gozi and Vawtrak barely managing to keep their shares intact. Analyzing banking trojans as a whole, G DATA looked at their targets, meaning the banks into whose websites banking trojans inject malicious code to steal users' login credentials. 
During the second half of 2015, the most targeted bank was Spain's Santander Group, with an attack probability of 45 percent, followed by three UK banks -- Lloyds, RBS, and Barclays -- all with a probability of around 35 percent. G DATA's 20-page report provides a more in-depth analysis of the whole threat landscape and is available for download from the company's website. Article source
  24. Search giant announces new notification alert system for informing webmasters that their websites have been hacked

Google announced yesterday a new notification and remediation system for dealing with hijacked websites that have been compromised to spread malware or scam users. The new webmaster notification system was refined during joint research with the University of California, Berkeley, which was also presented at last week's 25th International World Wide Web Conference. Google says the study analyzed 760,935 hijacking incidents from July 2014 to June 2015, as identified by the company's Safe Browsing and Search Quality features. The company used these security incidents to test and compare ways of notifying webmasters that their sites had been hacked.

Contacting webmasters via email yielded the best remediation rates

Google says that when webmasters had added their domains to Google's Search Console and the company had the owner's email address on hand, webmasters cleaned out compromised websites in 75 percent of cases when contacted directly by email. When the webmaster's email was not on hand, relying solely on Safe Browsing alerts (browser-based warnings) yielded a much lower remediation rate of only 54 percent. When Google relied on search results warnings, adding the "This site may harm your computer" notification next to each search listing, only 43 percent of the compromised websites were cleaned. Google achieved the best results when it also included remediation tips in its emails, which cut clean-up time by 62 percent, usually to within three days.

One in eight websites gets reinfected in the first month

Despite these good results, Google's researchers noted that 12 percent of the cleaned websites ended up being compromised again less than 30 days after being declared clean. 
"To improve this process moving forward, we highlighted three paths: increasing the webmaster coverage of notifications, providing precise infection details, and equipping site operators with recovery tools or alerting webmasters to potential threats before they escalate to security breaches," Google noted. Moving forward, Google plans to improve the communications and notifications sent to webmasters, primarily by adding early warnings for outdated software and by urging webmasters to add additional authentication systems when necessary. Article source
  25. Forum hosting platform avoids disaster at the last minute after security researchers stumble upon a secret hacking plan

Security researchers from SurfWatch Labs have shut down a secret plan to hack and infect hundreds, possibly thousands, of forums and websites hosted on the infrastructure of Invision Power Services, the maker of the IP.Board forum platform, now known as the IPS Community Suite. The plan belonged to a malware coder known as AlphaLeon, who, at the start of March this year, started selling a new trojan called Thanatos. Advertised as a rentable MaaS (Malware-as-a-Service) platform, Thanatos had to run on a very large number of infected hosts to be attractive to its customers. In the infosec community, this structure is called a botnet, and the bigger it is, the easier it is to carry out all sorts of cyber-attacks.

AlphaLeon breached Invision Power Services servers

In order to grow the Thanatos botnet, AlphaLeon needed a way to deliver the trojan to as many users as possible. For this, he devised a plan and later carried it out. His idea consisted of finding and exploiting a vulnerability in the infrastructure of Invision Power Services (IPS), which offers its IPS Community Suite as a hosted platform running on AWS (Amazon Web Services) servers. After establishing a foothold on IPS' servers, AlphaLeon intended to access the websites of IPS' customers and place an exploit kit on their pages. The exploit kit would automatically infect site visitors with the Thanatos trojan by leveraging vulnerabilities in visitors' (outdated) browsers and browser plugins. IPS customers include large companies such as Evernote, the NHL, the Warner Music Group, Bethesda Softworks, and LiveNation. Besides classic IP.Board forums, IPS also allows customers to set up fully working sites, even e-commerce stores. 
AlphaLeon: And I would have gotten away with it too, if it weren't for you meddling kids

His plan was cut short when SurfWatch Labs security experts got wind of his intentions while scanning the Dark Web. Researchers contacted IPS, which was unaware of the hacker's breach; the company then discovered the entry point and shut down his access. The incident happened at the start of April, and IPS is still investigating the breach. According to the most recent Thanatos ads on the Dark Web, the trojan, which at the beginning of March was only a potent banking trojan, has since received new add-on modules. These modules allow customers of the Thanatos botnet to launch DDoS attacks, deliver ransomware, access a victim's webcam, steal Bitcoin, send spam, or steal login credentials for various gaming platforms. Our initial article on Thanatos also includes screenshots of the botnet's administration panel. Article source