Showing results for tags 'https'.

Found 31 results

  1. What is this?

I've been writing about Google's efforts to deprecate HTTP, the protocol of the web. This is a summary of why I am opposed to it. DW

Their pitch

Advocates of deprecating HTTP make three main points:

  • Something bad could happen to my pages in transit from a server to the user's web browser.
  • It's not hard to convert to HTTPS and it doesn't cost a lot.
  • Google is going to warn people about my site being "not secure." So if I don't want people to be scared away, I should just do what they want me to do.

Why this is bad

The web is an open platform, not a corporate platform. It is defined by its stability. 25-plus years and it's still going strong. Google is a guest on the web, as we all are. Guests don't make the rules.

Why this is bad, practically speaking

A lot of the web consists of archives. Files put in places that no one maintains. They just work. There's no one there to do the work that Google wants all sites to do. And some people have large numbers of domains and sub-domains hosted on all kinds of software Google never thought about. Places where the work required to convert wouldn't be justified by the possible benefit. The reason there's so much diversity is that the web is an open thing; it was never owned.

The web is a miracle

Google has spent a lot of effort to convince you that HTTP is not good. Let me have the floor for a moment to tell you why HTTP is the best thing ever. Its simplicity is what made the web work. It created an explosion of new applications. It may be hard to believe that there was a time when Amazon, Netflix, Facebook, Gmail, Twitter etc didn't exist. That was because the networking standards prior to the web were complicated and not well documented. The explosion happened because the web is simple. Where earlier protocols were hard to build on, the web is easy. I don't think the explosion is over. I want to make it easier and easier for people to run their own web servers. Google is doing what the programming priesthood always does: building the barrier to entry higher, making things more complicated, giving themselves an exclusive. This means only super nerds will be able to put up sites. And we will lose a lot of sites that were quickly posted on a whim, over the 25 years the web has existed, by people that didn't fully understand what they were doing. That's also the glory of the web. Fumbling around in the dark actually gets you somewhere. In worlds created by corporate programmers, it's often impossible to find your way around, by design. The web is a social agreement not to break things. It's served us for 25 years. I don't want to give it up because a bunch of nerds at Google think they know best. The web is like the Grand Canyon. It's a big natural thing, a resource, an inspiration, and like the canyon it deserves our protection. It's a place of experimentation and learning. It's also useful for big corporate websites like Google. All views of the web are important, especially ones that big companies don't understand or respect. It's how progress happens in technology. Keeping the web simple is as important as net neutrality.

They believe they have the power

Google makes a popular browser and is a tech industry leader. They can, they believe, encircle the web, and at first warn users as they access HTTP content. Very likely they will do more, requiring the user to consent to open a page, and then blocking the pages outright.
It's dishonest

Many of the sites they will label as "not secure" don't ask the user for any information. Of course users won't understand that. Many will take the warning seriously and hit the Back button, having no idea why they're doing it. Of course Google knows this. It's the kind of nasty political tactic we expect from corrupt political leaders, not leading tech companies.

Sleight of hand

They tell us to worry about man-in-the-middle attacks that might modify content, but fail to mention that they can do it in the browser, even if you use a "secure" protocol. They are the one entity you must trust above all. No way around it.

They cite the wrong stats

When they say some percentage of web traffic is HTTPS, that doesn't measure the scope of the problem. A lot of HTTP-served sites get very few hits, yet still have valuable ideas and must be preserved.

It will destroy the web's history

If Google succeeds, it will make a lot of the web's history inaccessible. People put stuff on the web precisely so it would be preserved over time. That's why it's important that no one has the power to change what the web is. It's like a massive book burning, at a much bigger scale than ever done before.

If HTTPS is such a great idea...

Why force people to do it? This suggests that the main benefit is for Google, not for people who own the content. If it were such a pressing problem we'd do it because we want to, not because we're being forced to.

The web isn't safe

Finally, what is the value in being safe? Twitter and Facebook are like AOL in the old pre-web days. They are run by companies who are committed to providing a safe experience. They make tradeoffs for that. Limited linking. No styles. Character limits. Blocking, muting, reporting, norms. Etc etc. Think of them as Disney-like experiences. The web is not safe. That is correct. We don't want every place to be safe, so people can be wild and experiment and try out new ideas. It's why the web has been the proving ground for so much incredible stuff over its history. Lots of things aren't safe. Crossing the street. Bike riding in Manhattan. Falling in love. We do them anyway. You can't be safe all the time. Life itself isn't safe. If Google succeeds in making the web controlled and bland, we'll just have to reinvent the web outside of Google's sphere. Let's save some time, and create the new web out of the web itself.

PS: Of course we want parts of the web to be safe. Banking websites, for example. But my blog archive from 2001? Really there's no need for special provisions there.

Update: A threatening email from Google

On June 20, 2018, I got an email from Google, as the owner of scripting.com. According to the email I should "migrate to HTTPS to avoid triggering the new warning on your site and to help protect users' data." It's a blog. I don't ask for any user data. Google's "not secure" message means this: "Google tried to take control of the open web and this site said no."

Last update: Tuesday June 26, 2018; 2:11 PM GMT+0200.

Source
  2. In conjunction with the Cybersecurity Awareness Month celebration in October last year, Google shared some statistical data about how HTTPS usage in Chrome increased across different platforms. Since its inception, Chrome's traffic encryption effort has made such significant progress that Google now wants to remove the green "Secure" label on HTTPS websites beginning in September, once the search giant launches Chrome 69. The goal of the upcoming change to the browser is to give users the idea that the internet is safe by default by eliminating Chrome's positive security indicators. Here's how the address bar should look after the new scheme kicks off in September:

Emily Schechter, Product Manager for Chrome Security at Google, also announced in a blog post that starting in October the company will mark all HTTP pages with a red "not secure" warning in the event that a user enters data on an HTTP page. The new label will be part of Chrome 70, which is set for release that month. That's in line with the search giant's plan, announced last February, to show a grey "not secure" warning in the address bar for HTTP pages starting in July, when Chrome 68 is set to roll out.

Source details < Click here >
  3. Let's Encrypt – an SSL/TLS certificate authority run by the non-profit Internet Security Research Group (ISRG) to programmatically provide websites with free certs for their HTTPS websites – on Thursday said it is discontinuing TLS-SNI validation because it's insecure in the context of many shared hosting providers. TLS-SNI is one of three ways Let's Encrypt's Automatic Certificate Management Environment (ACME) protocol validates requests for TLS certificates, which enable secure connections when browsing the web, along with the confidence-inspiring display of a lock icon. The other two validation methods, HTTP-01 and DNS-01, are not implicated in this issue.

The problem is that TLS-SNI-01 and its planned successor TLS-SNI-02 can be abused under specific circumstances to allow an attacker to obtain HTTPS certificates for websites that he or she does not own. Such a person could, for example, find an orphaned domain name pointed at a hosting service, and use the domain – with an unauthorized certificate to make fake pages appear more credible – without actually owning the domain. For example, a company might have investors.techcorp.com set up and pointed at a cloud-based web host to serve content, but not investor.techcorp.com. An attacker could potentially create an account on said cloud provider, and add an HTTPS server for investor.techcorp.com to that account, allowing the miscreant to masquerade as that business – and with a Let's Encrypt HTTPS cert, too, via TLS-SNI-01, to make it look totally legit. It sounds bonkers but we're told some cloud providers allow this to happen. And that's why Let's Encrypt ditched its TLS-SNI-01 validation process.

Ownership

It turns out that many hosting providers do not validate domain ownership. When such providers also host multiple users on the same IP address, as happens on AWS CloudFront and on Heroku, it becomes possible to obtain a Let's Encrypt certificate for someone else's website via the TLS-SNI-01 mechanism. On Tuesday, Frans Rosén, a security researcher for Detectify, identified and reported the issue to Let's Encrypt, and the organization suspended certificate issuance using TLS-SNI-01 validation, pending resolution of the problem. In his account of his proof-of-concept exploit, Rosén recommended three mitigations: disabling TLS-SNI-01; blacklisting .acme.invalid in certificate challenges, which is required to get a cert via TLS-SNI-01; and looking to other forms of validation, because TLS-SNI-01 and 02 are broken given current cloud infrastructure practices.

AWS CloudFront and Heroku have since tweaked their operations based on Rosén's recommendations, but the problem extends to other hosting providers that serve multiple users from a single IP address without domain ownership validation. Late Thursday, after temporarily re-enabling the validation method for certain large hosting providers that aren't vulnerable, Let's Encrypt decided it would permanently disable TLS-SNI-01 and TLS-SNI-02 for new accounts. Those who previously validated using TLS-SNI-01 will be allowed to renew using the same mechanism for a limited time. "We have arrived at the conclusion that we cannot generally re-enable TLS-SNI validation," said ISRG executive director Josh Aas in a forum post. "There are simply too many vulnerable shared hosting and infrastructure services that violate the assumptions behind TLS-SNI validation." Aas stressed that Let's Encrypt will discontinue using the TLS-SNI-01 and TLS-SNI-02 validation methods.

Article
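The attack hinges on orphaned DNS records that still point at shared hosting. As a purely illustrative sketch (not from the article), the following Python script uses the third-party dnspython package to list where candidate records point, so an operator can spot entries aimed at infrastructure they no longer control; the techcorp.com names are the article's hypothetical examples.

    # Illustrative sketch: list where your DNS records point so orphaned
    # entries aimed at shared hosting stand out. Requires the third-party
    # "dnspython" package; the names below are hypothetical examples.
    import dns.resolver

    CANDIDATES = ["investors.techcorp.com", "investor.techcorp.com"]

    for name in CANDIDATES:
        try:
            targets = [str(r.target) for r in dns.resolver.resolve(name, "CNAME")]
        except dns.resolver.NXDOMAIN:
            print(f"{name}: does not resolve")
            continue
        except dns.resolver.NoAnswer:
            # No CNAME; fall back to A records.
            try:
                targets = [r.address for r in dns.resolver.resolve(name, "A")]
            except Exception as exc:
                print(f"{name}: no usable record ({exc})")
                continue
        print(f"{name} -> {targets}  (confirm you still control this endpoint)")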
  4. We have a little problem on the web right now and I can only see this becoming a larger concern as time goes by. More and more sites are obtaining certificates, vitally important documents that we need to deploy HTTPS, but we have no way of protecting ourselves when things go wrong.

Certificates

We're currently seeing a bit of a gold rush for certificates on the web as more and more sites deploy HTTPS. Beyond the obvious security and privacy benefits of HTTPS, there are quite a few reasons you might want to consider moving to a secure connection that I outline in my article Still think you don't need HTTPS?. Commonly referred to as 'SSL certificates' or 'HTTPS certificates', the wider internet is obtaining them at a rate we've never seen before in the history of the web. Every day I crawl the top 1 million sites on the web and analyse various aspects of their security, and every 6 months I publish a report. You can see the reports here, but the main result to focus on right now is the adoption of HTTPS. Not only are we continuing to deploy HTTPS, the rate at which we're doing so is increasing too. This is what real progress looks like. The process of obtaining a certificate has become more and more simple over time and now, thanks to the amazing Let's Encrypt, it's also free to get them. Put simply, we send a Certificate Signing Request (CSR) to the Certificate Authority (CA) and the CA will challenge us to prove our ownership of the domain. This is usually done by setting a DNS TXT record or hosting a challenge code somewhere on a random path on our domain. Once this challenge has been satisfied, the CA will issue the certificate and we can then use it to present to the browser and get our green padlock and HTTPS in the address bar. I have a few tutorials to help you out with this process, including how to get started, smart renewal and dual certificates. But this is all great, right? What's the problem here? The problem is when things don't go according to plan and you have a bad day.

We've been hacked

Nobody ever wants to hear those words but the sad reality is that we do, more often than any of us would like. Hackers can go after any number of things when they gain access to our servers, and often one of the things they can access is our private key. The certificates we use for HTTPS are public documents, we send them to anyone that connects to our site, but the thing that stops other people using our certificate is that they don't have our private key. When a browser establishes a secure connection to a site, it checks that the server has the private key for the certificate it's trying to use; this is why no one but us can use our certificate. If an attacker gets our private key, though, that changes things. Now that an attacker has managed to obtain our private key, they can use our certificate to prove that they are us. Let's say that again. There is now somebody on the internet that can prove they are you, when they are not you. This is a real problem and before you think 'this will never happen to me', do you remember Heartbleed? This tiny bug in the OpenSSL library allowed an attacker to steal your private key, and you didn't even have to do anything wrong for it to happen. On top of this there are countless ways that private keys are exposed by accident or negligence. The simple truth is that we can lose our private key, and when this happens, we need a way to stop an attacker from using our certificate. We need to revoke the certificate.
Revocation

In a compromise scenario we revoke our certificate so that an attacker can't abuse it. Once a certificate is marked as revoked, the browser will know not to trust it, even though it's otherwise valid. The owner has requested revocation and no client should accept it. Once we know we've had a compromise, we contact the CA and ask that they revoke our certificate. We need to prove ownership of the certificate in question, and once we do that, the CA will mark the certificate as revoked. Now that the certificate is revoked, we need a way of communicating this to any client that might require the information. Right now the browser doesn't know and of course, that's a problem. There are two mechanisms that we can use to make this information available: a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP).

Certificate Revocation Lists

A CRL is a really simple concept and is quite literally just a list of all certificates that a CA has marked as revoked. A client can contact the CRL server and download a copy of the list. Armed with a copy of the list, the browser can check to see if the certificate it has been provided is on that list. If the certificate is on the list, the browser now knows the certificate is bad and it shouldn't be trusted; it will throw an error and abandon the connection. If the certificate isn't on the list, then everything is fine and the browser can continue the connection. The problem with a CRL is that it contains a lot of revoked certificates from the particular CA. Without getting into too much detail, CRLs are broken down per intermediate certificate a CA has, and the CA can fragment the lists into smaller chunks, but the point I want to make remains the same: the CRL is typically not an insignificant size. The other problem is that if the client doesn't have a fresh copy of the CRL, it has to fetch one during the initial connection to the site, which can make things look much slower than they actually are. This doesn't sound particularly great, so how about we take a look at OCSP?

Online Certificate Status Protocol

OCSP provides a much nicer solution to the problem and has a significant advantage over the CRL approach. With OCSP we ask the CA for the status of a single, particular certificate. This means all the CA has to do is respond with a good/revoked answer, which is considerably smaller than a CRL. Great stuff! It is true that OCSP offered a significant performance advantage over fetching a CRL, but that performance advantage did come with a cost (don't you hate it when that happens?). The cost was a pretty significant one too: it was your privacy... When we think about what an OCSP request is, the request for the status of a very particular, single certificate, you may start to realise that you're leaking some information. When you send an OCSP request, you're basically asking the CA this: Is the certificate for pornhub.com valid? So, not exactly an ideal scenario. You're now advertising your browsing history to some third party that you didn't even know about, all in the name of HTTPS, which set out to give us more security and privacy. Kind of ironic, huh? But wait, there's something else.

Hard fail

I talked about the CRL and OCSP responses above, the two mechanisms a browser can use to check if a certificate is revoked. Upon receiving the certificate, the browser will reach out to one of these services and perform the necessary query to ultimately ascertain the status of the certificate.
What if your CA is having a bad day and the infrastructure is offline? The browser has only two choices here. It can refuse to accept the certificate because it can't check the revocation status, or it can take a risk and accept the certificate without knowing the revocation status. Both of these options come with their advantages and disadvantages. If the browser refuses to accept the certificate, then every time your CA has a bad day and their infrastructure goes offline, your site goes offline too. If the browser continues and accepts the certificate, then it risks using a certificate that could have been stolen and exposes the user to the associated risks. It's a tough call, but right now, today, neither of these actually happens...

Soft fail

What actually happens today is that a browser will do what we call a soft fail revocation check. That is, the browser will try to do a revocation check, but if the response doesn't come back, or doesn't come back in a short period of time, the browser will simply forget about it. Even worse is that Chrome doesn't do revocation checks at all. Yes, you did read that right: Chrome doesn't even try to check the revocation status of certificates that it encounters. What you might find even more odd is that I completely agree with their approach, and I'm quite happy to report that right now it looks like Firefox will be joining the party very soon too. Let me explain. The problem we had with hard fail was obvious: the CA has a bad day and so do we. That's how we arrived at soft fail. The browser will now try to do a revocation check but will ultimately abandon the check if it takes too long or it appears the CA is offline. Wait, what was that last part? The revocation check is abandoned if "it appears the CA is offline". I wonder if an attacker could simulate that? If you have an attacker performing a MiTM attack, all they need to do is simply block the revocation request and make it look like the CA is offline. The browser will then soft fail the check and continue on to happily use the revoked certificate. Every single time you browse and encounter this certificate whilst not under attack, you will pay the cost of performing the revocation check to find out the certificate is not revoked. The one time you're under attack, the one time you really need revocation to work, the attacker will simply block the connection and the browser will soft fail through the failure. Adam Langley at Google came up with the best description for what revocation is: it's a seatbelt that snaps in a car crash, and he's right. You get in your car every day and you put your seatbelt on, and it makes you feel all warm and fuzzy that you're safe. Then, one day, things don't quite go to plan, you're involved in a crash, and out of the windscreen you go. The one time you needed it, it let you down.

Fixing the problem

Right now, at this very moment in time, the truth is that there is no reliable way to fix this problem; revocation is broken. There are a couple of things worth bringing up though, and we may be able to look to a future where we have a reliable revocation checking mechanism.

Proprietary mechanisms

If a site is compromised and an attacker gets hold of the private key, they can impersonate that site and cause a fair amount of harm. That's not great, but it could be worse. What if a CA was compromised and an attacker got access to the private key for an intermediate certificate?
That would be a disaster, because the attacker could then impersonate pretty much any site they like by signing their own certificates. Rather than doing online checks for revocation of intermediate certificates, Chrome and Firefox both have their own mechanisms that work in the same way. Chrome calls theirs CRLSets and Firefox calls theirs OneCRL, and they curate lists of revoked certificates by combining available CRLs and selecting certificates from them to be included. So, we have high-value certificates like intermediates covered, but what about you and I?

OCSP Must-Staple

To explain what OCSP Must-Staple is, we first need a quick background on OCSP Stapling. I'm not going to go into too much info, you can get that in my blog on OCSP Stapling, but here is the TL;DR. OCSP Stapling saves the browser having to perform an OCSP request by providing the OCSP response along with the certificate. It's called OCSP Stapling because the idea is that the server would 'staple' the OCSP response to the certificate and provide both together. At first glance this seems a little odd, because the server is almost 'self-certifying' its own certificate as not being revoked, but it all checks out. The OCSP response is only valid for a short period and is signed by the CA in the same way that the certificate is signed. So, in the same way the browser can verify the certificate definitely came from the CA, it can also verify that the OCSP response came from the CA too. This solves the massive privacy concern with OCSP and also removes a burden on the client from having to perform this external request. Winner! But not so much actually, sorry. OCSP Stapling is great and we should all support it on our sites, but do we honestly think an attacker is going to enable OCSP Stapling? No, I didn't think so; of course they aren't going to. What we need is a way to force the server to OCSP Staple, and this is what OCSP Must-Staple is for. When requesting our certificate from the CA, we ask them to set the OCSP Must-Staple flag in the certificate. This flag instructs the browser that the certificate must be served with an OCSP Staple or it has to be rejected. Setting the flag is easy. Now that we have a certificate with this flag set, we as the host must ensure that we OCSP Staple or the browser will not accept our certificate. In the event of a compromise and an attacker obtaining our key, they must also supply an OCSP Staple when they use our certificate too. If they don't include an OCSP Staple, the browser will reject the certificate, and if they do include an OCSP Staple, then the OCSP response will say that the certificate is revoked and the browser will reject it. Tada!

OCSP Expect-Staple

Whilst Must-Staple sounds like a great solution to the problem of revocation, it isn't quite there just yet. One of the biggest problems that I see is that, as a site operator, I don't actually know how reliably I OCSP staple and whether the client is happy with the stapled response. Without OCSP Must-Staple enabled this isn't really a problem, but if we do enable OCSP Must-Staple and then we don't OCSP Staple properly or reliably, our site will start to break. To try and get some feedback about how we're doing in terms of OCSP Stapling, we can enable a feature called OCSP Expect-Staple. I've written about this before and you can get all of the details in the blog OCSP Expect-Staple, but I will give the TL;DR here. You request an addition to the HSTS preload list that asks the browser to send you a report if it isn't happy with the OCSP Staple.
You can collect the reports yourself or use my service, report-uri.io, to do it for you, and you can learn exactly how often you would hit problems if you turned on OCSP Must-Staple. Because getting an addition to the HSTS preload list isn't as straightforward as I'd like, I also wrote a spec to define a new security header called Expect-Staple to deliver the same functionality but with less effort involved. The idea being that you can now set this header and enable reporting to get the feedback we desperately need before enabling Must-Staple. Enabling the header would be simple, just like all of the other security headers:

Expect-Staple: max-age=31536000; report-uri="https://scotthelme.report-uri.io/r/d/staple"; includeSubDomains; preload

Rogue certificates

One of the other things that we have to consider whilst we're on the topic of revocation is rogue certificates. If somebody manages to compromise a CA or otherwise obtains a certificate that they aren't supposed to have, how are we supposed to know? If I were to breach a CA right now and obtain a certificate for your site without telling you, you wouldn't ever learn about it unless it was widely reported. You could even have an insider threat, and someone in your organisation could obtain certificates without going through the proper internal channels and do with them as they please. We need a way to have 100% transparency, and we will very soon: Certificate Transparency.

Certificate Transparency

CT is a new requirement that will be mandatory from early next year and will require that all certificates are logged in a public log if the browser is to trust them. You can read the article for more details on CT, but what will generally happen is that a CA will log all certificates it issues in a CT log. These logs are totally public and anyone can search them, so the idea is that if a certificate is issued for your site, you will know about it. For example, here you can see all certificates issued for my domain and you can search for your own. You can also use CertSpotter from sslmate to do the same, and I use the Facebook Certificate Transparency Monitoring tool, which will send you an email each time a new certificate is issued for your domain/s. CT is a fantastic idea and I can't wait for it to become mandatory, but there is one thing to note: CT is only the first step. Knowing about these certificates is great, but we still have all of the above-mentioned problems with revoking them. That said, we can only tackle one problem at once, and having the best revocation mechanisms in the world is no good if we don't know about the certificates we need to revoke. CT gives us that much at least.

Certificate Authority Authorisation

Stopping a certificate being issued is much easier than trying to revoke it, and this is exactly what Certificate Authority Authorisation allows us to start doing. Again, there are further details in the linked article, but the short version is that we can now authorise only specific CAs to issue certificates for us, instead of the current situation where we can't indicate any preference at all. It's as simple as creating a DNS record:

scotthelme.co.uk. IN CAA 0 issue "letsencrypt.org"

Whilst CAA isn't a particularly strong mechanism and it won't help in all mis-issuance scenarios, there are some where it can help us, and we should assert our preference by creating a CAA record.

Conclusion

As it currently stands there is a real problem: we can't revoke certificates if someone obtains our private key.
Just imagine how that will play out the next time Heartbleed comes along! One thing that you can do to try and limit the impact of a compromise is to reduce the validity period of the certificates you obtain. Instead of three years, go for one year or even less. Let's Encrypt only issues certificates that are valid for ninety days! With a reduced lifetime on your certificate, you have less of a problem if you're compromised, because an attacker has less time to abuse the certificate before it expires. Beyond this, there's very little we can do. To demonstrate the issue and just how real this is, try to visit the new subdomain I set up on my site, revoked.scotthelme.co.uk. As you can probably guess, this subdomain is using a revoked certificate, and there's a good chance that it will load just fine. If it doesn't, if you do get a warning about the certificate having been revoked, then your browser is still doing an OCSP check and you just told the CA you were going to my site. To prove that this soft fail check is pointless, you can add a hosts entry for ocsp.int-x3.letsencrypt.org to resolve to 127.0.0.1, or block it via some other means, and then try the same thing again. This time the page will load just fine because the revocation check will fail and the browser will continue loading the page. Some use that is... The final point I want to close on is a question, and it's this: Should we fix revocation? That's a story for another day though!

Article source
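For readers who want to see the OCSP exchange described in this article in concrete terms, here is a minimal sketch (not from the article) using the third-party cryptography and requests packages. It reads a site certificate and its issuer from local PEM files (hypothetical paths), finds the responder URL in the certificate's AIA extension, and asks the CA whether that single certificate is revoked.

    # Minimal sketch: ask a CA over OCSP whether one certificate is revoked.
    # Requires the third-party "cryptography" and "requests" packages;
    # "site.pem" and "issuer.pem" are hypothetical file paths.
    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.hashes import SHA1
    from cryptography.x509 import ocsp
    from cryptography.x509.oid import AuthorityInformationAccessOID

    cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
    issuer = x509.load_pem_x509_certificate(open("issuer.pem", "rb").read())

    # The responder URL ships inside the certificate (AIA extension).
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    ocsp_url = next(
        desc.access_location.value
        for desc in aia.value
        if desc.access_method == AuthorityInformationAccessOID.OCSP
    )

    # Build a request for this single certificate and POST it in DER form.
    req = ocsp.OCSPRequestBuilder().add_certificate(cert, issuer, SHA1()).build()
    resp = requests.post(
        ocsp_url,
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"},
    )
    answer = ocsp.load_der_ocsp_response(resp.content)
    if answer.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
        print(answer.certificate_status)  # OCSPCertStatus.GOOD / REVOKED / UNKNOWN
    else:
        print("Responder error:", answer.response_status)

Note that this is exactly the privacy leak the article describes: the POST tells the CA which site's certificate you are checking.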
  5. We submit hundreds of blacklist review requests every day after cleaning our clients' websites. Google's Deceptive Content warning applies when Google detects dangerous code that attempts to trick users into revealing sensitive information. For the past couple of months we have noticed that the number of websites blacklisted with Deceptive Content warnings has increased for no apparent reason. The sites were clean, and there were no external resources loading on the websites. Recently, we discovered a few cases where Google removed the Deceptive Content warning only after SSL was enabled. We conducted the following research in collaboration with Unmask Parasites.

What is an SSL Certificate?

Most websites use the familiar HTTP protocol. Those that install an SSL/TLS certificate can use HTTPS instead. SSL/TLS is a cryptographic protocol used to encrypt data while it travels across the internet between computers and servers. This includes downloads, uploads, submitting forms on web pages, and viewing website content. SSL doesn't keep your website safe from hackers; rather, it protects your visitors' data. To the average visitor, SSL is what's behind the green padlock icon in the browser address bar. This icon signifies that communication is secure between the visitor and the web server, and any information sent or received is kept safe from prying eyes. Without SSL, an HTTP site can only transfer information "in the clear". Therefore, bad actors can snoop on network traffic and steal sensitive user input such as passwords and credit card numbers. The problem is that many visitors don't notice when SSL is missing on a website.

Google Moves on HTTP/HTTPS

We have seen Google pushing SSL as a best practice standard across the web. Not only are they rewarding sites that use HTTPS, it seems they are steadily cracking down on HTTP sites that should be using HTTPS. In 2014, Google confirmed HTTPS helps websites rank higher in their search engine results. In January 2017, they rolled out the Not Secure label in Chrome whenever a non-HTTPS website handled credit cards or passwords. Google also announced they would eventually apply the label to all HTTP pages in Chrome, and make the label more obvious: "There has been a lot of talk about how to promote SSL and warn users when browsing HTTP sites. Studies show that users do not perceive the lack of a 'secure' icon as a warning, but also become blind to warnings that occur too frequently. Our plan to label HTTP sites clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria." (Source: Google Security Blog) Perhaps the red triangle warning has not been as effective, and they could be working on even stronger labels through their SafeBrowsing diagnostics.

Blocking Dangerous HTTP Input

In a few recent cases, we had Google review a cleaned website twice over a couple of days, but the requests were denied. Once we enabled SSL, we asked again and they cleared it. Nothing else was changed. We dug further and uncovered a few more cases where this behavior had been replicated. Upon investigation, the websites contained login pages or password input fields that were not being delivered over HTTPS. This could mean that Google is expanding its definition of phishing and deception to include websites that cause users to enter sensitive information over HTTP.
We don't know what Google looks for exactly to make their determination, but it's safe to assume they look for forms that take passwords, simply by looking for input type="password" in the source code of the website when that specific page is not being served over HTTPS. Here's an example from the Security Issues section of Google Search Console showing messages related to Harmful Content: we see that the WordPress admin area is blocked, as well as a password-protected page. Both URLs start with HTTP, indicating that SSL is missing. Both pages have some form of login requirement. Most of these sites were previously hacked, and these warnings remained after the cleanup had been completed. There were a few, however, where there was no previous compromise. In each case, enabling SSL did the trick. As the largest search engine in the world, Google has the power to reduce your traffic by 95% with their blacklist warnings. By blocking sites that should be using HTTPS, Google can protect its users and send a clear message to the webmaster.

Domain Age a Factor

There seems to be another similar factor among the affected websites. Most appear to be recently registered domains, and as such, they did not have time to build a reputation and authority with Google. This could be another factor that Google takes into account when assessing the danger level of a particular website. Some websites were not even a month old, had no malware, and were blacklisted until we enabled SSL.

Google Ranking and Malware Detection

One of the many factors involved in how Google rates a website is how long the site has been registered. Websites with WHOIS records dating back several years gain a certain level of authority. Google's scanning engines also help limit our exposure to dangerous websites. Phishing attacks often use newly-registered domains until they are blacklisted. New sites need time to develop a reputation. An older website that never had any security incidents is less likely to have any false positive assessment, while a new website won't have this trust. As soon as Google sees a public page asking for credentials that are not secured by HTTPS, they take a precautionary action against that domain.

HTTP As a Blacklist Signal

Google has been slowly cracking down on HTTP sites that transfer sensitive information and may be starting to label them as potential phishing sites when they have a poor reputation. While Google has not confirmed that SSL is a factor in reviewing blacklist warnings, it makes sense. Google can ultimately keep their users' browsing experience as safe as possible, and educate webmasters effectively, by blocking sites that don't protect the transmission of passwords and credit card numbers. Password handling is a big security concern. Every day there are cases of mishandled passwords, so it's understandable that Google is testing their power in changing the tides and keeping users safe.

Conclusion

Keeping the communication on your website secure is important if you transmit any sensitive user input. Enabling SSL on your website is a wise decision. Thankfully this has become an easier process in recent years, with many hosts encouraging and streamlining the adoption of SSL. Let's Encrypt came out of beta over a year ago, and has grown to over 40 million active domains. If you have a relatively new website and want to ensure that Google does not blacklist you for accepting form data, be sure to get SSL enabled on your website.
We offer a free Let's Encrypt SSL certificate with all our firewall packages and are happy to help you get started.

Article source
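As an illustration of the heuristic this article speculates about (an assumption on our part, not a documented Google mechanism), here is a minimal standard-library Python sketch that fetches a page over plain HTTP and flags any password input field; the URL is a placeholder.

    # Minimal sketch of the speculated heuristic: fetch a page over plain HTTP
    # and flag any password input. Standard library only; URL is a placeholder.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class PasswordFieldFinder(HTMLParser):
        def __init__(self):
            super().__init__()
            self.found = False

        def handle_starttag(self, tag, attrs):
            # Matches <input type="password" ...> regardless of case.
            if tag == "input" and dict(attrs).get("type", "").lower() == "password":
                self.found = True

    url = "http://example.com/login"  # placeholder
    html = urlopen(url).read().decode("utf-8", errors="replace")
    finder = PasswordFieldFinder()
    finder.feed(html)
    if url.startswith("http://") and finder.found:
        print("Password field served over plain HTTP -- consider enabling HTTPS.")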
  6. When, in January 2017, Mozilla and Google made Firefox and Chrome flag HTTP login pages as insecure, the intent was to make phishing pages easier to recognize, as well as to push more website owners towards deploying HTTPS. But while the latter aim was achieved, and the number of websites making use of HTTPS has increased noticeably, the move also had one unintended consequence: the number of phishing sites with HTTPS has increased, too. "While the majority of today's phishing sites still use the unencrypted HTTP protocol, a threefold increase in HTTPS phishing sites over just a few months is quite significant," noted Netcraft's Paul Mutton. One explanation may be that fraudsters have begun setting up more phishing sites that use secure HTTPS connections. Another may be that they have simply continued compromising websites to set up the phishing pages, but as more legitimate sites began using HTTPS, more phishing pages ended up having HTTPS. Finally, it's possible that fraudsters are intentionally compromising HTTPS sites so that their phishing login pages look more credible. Whatever the reason – and it might simply be a combination of them all – the change made some phishing attempts even more effective. And so the battle between attackers and defenders continues.

Article source
  7. Mozilla plans to implement a change in Firefox 55 that restricts plugins -- read Adobe Flash -- to run on HTTP or HTTPS only. Adobe Flash is the only NPAPI plugin that is still supported by release versions of the Firefox web browser. Previously supported plugins such as Silverlight or Java are no longer supported, and won't be picked up by the web browser anymore. Flash is the only plugin left standing in Firefox. It is also still available for Google Chrome, Chromium-based browsers, and Microsoft Edge, but the technology used to implement Flash is different in those web browsers. Adobe Flash causes stability and security issues regularly in browsers that support it. If you check the latest Firefox crash reports, for instance, you will notice that many top crashes are plugin-related. Security is another hot topic, as Flash is targeted quite often thanks to new security issues coming to light on a regular basis. Mozilla's plan to run Flash only on HTTP or HTTPS sites blocks execution of Flash on any non-HTTP, non-HTTPS protocol. This includes, among others, FTP and FILE. Flash content will be blocked completely in these instances. This means that users won't get a "click to play" option or something similar; resources will simply be blocked from being loaded and executed by the Firefox web browser. Mozilla provides an explanation for the decision on the Firefox Site Compatibility website: "Firefox 55 and later will prevent Flash content from being loaded from file, ftp or any other URL schemes except http and https. This change aims to improve security, because a different same-origin policy is applied to the file protocol, and loading Flash content from other minor protocols is usually not well-tested." Mozilla is also looking into extending the block to data: URIs. The change should not affect too many Firefox users and developers, but it will surely impact some. Mozilla implemented a new preference in Firefox that allows users to bypass the new restriction:

  • Type about:config in the browser's address bar and hit the Enter key.
  • Confirm that you will be careful if the warning prompt appears.
  • Search for the preference plugins.http_https_only.
  • Double-click on it. A value of True enables the blocking of Flash content on non-HTTP/HTTPS pages, while a value of False restores the previous handling of Flash so that it runs on any protocol.

Mozilla suggests, however, that developers set up a local web server instead for Flash testing if that is the main use case. (via Sören)

Article source
  8. Firefox warns users about unencrypted pages

We suppose it was only a matter of time before someone had a complaint about the notifications browsers display when a website accepts logins over unencrypted HTTP pages. In fact, Mozilla has received a complaint about this very "issue." Folks over at Ars Technica spotted the complaint over on Mozilla's Bugzilla bug-reporting service. "Your notice of insecure password and/or log-on automatically appearing on the log-in for my website, Oil and Gas International, is not wanted and was put there without our permission. Please remove it immediately. we have our own security system, and it has never been breached in more than 15 years. Your notice is causing concern by our subscribers and is detrimental to our business," the message signed by dgeorge reads. Of course, they seem to be late to the party, since this type of warning has been showing for a few months now and became standard earlier this year for both Firefox and Chrome.

The benefits of HTTPS

Thankfully, someone from Mozilla came forward and cleared things up for dear ol' dgeorge, telling him that when a site requests a user's password over HTTP, the transmission is done in the clear. "As such, anybody listening on the network would be able to record those passwords. This puts not just users at risk when using your site, but also puts them at risk on any other website that they might share a password with yours," they explain. In the end, it's been proven time and time again that providing email and passwords over HTTP is no longer safe. For years now, there's been a push for HTTPS, and web admins have been given plenty of time to make the change, both for their sake and their users' sake. Now, Chrome will display a "Not Secure" notification next to the address bar, while Firefox takes things a step further, displaying below the user name and password fields: "this connection is not secure. Logins entered here could be compromised."

Source
  9. Systems Affected

All systems behind a hypertext transfer protocol secure (HTTPS) interception product are potentially affected.

Overview

Many organizations use HTTPS interception products for several purposes, including detecting malware that uses HTTPS connections to malicious servers. The CERT Coordination Center (CERT/CC) explored the tradeoffs of using HTTPS interception in a blog post called The Risks of SSL Inspection. Organizations that have performed a risk assessment and determined that HTTPS inspection is a requirement should ensure their HTTPS inspection products are performing correct transport layer security (TLS) certificate validation. Products that do not properly ensure secure TLS communications and do not convey error messages to the user may further weaken the end-to-end protections that HTTPS aims to provide.

Description

TLS and its predecessor, Secure Sockets Layer (SSL), are important Internet protocols that encrypt communications over the Internet between the client and server. These protocols (and protocols that make use of TLS and SSL, such as HTTPS) use certificates to establish an identity chain showing that the connection is with a legitimate server verified by a trusted third-party certificate authority. HTTPS inspection works by intercepting the HTTPS network traffic and performing a man-in-the-middle (MiTM) attack on the connection. In MiTM attacks, sensitive client data can be transmitted to a malicious party spoofing the intended server. In order to perform HTTPS inspection without presenting client warnings, administrators must install trusted certificates on client devices. Browsers and other client applications use this certificate to validate encrypted connections created by the HTTPS inspection product. In addition to the problem of not being able to verify a web server's certificate, the protocols and ciphers that an HTTPS inspection product negotiates with web servers may also be invisible to a client. The problem with this architecture is that the client systems have no way of independently validating the HTTPS connection. The client can only verify the connection between itself and the HTTPS interception product. Clients must rely on the HTTPS validation performed by the HTTPS interception product.

A recent report, The Security Impact of HTTPS Interception, highlighted several security concerns with HTTPS inspection products and outlined survey results of these issues. Many HTTPS inspection products do not properly verify the certificate chain of the server before re-encrypting and forwarding client data, allowing the possibility of a MiTM attack. Furthermore, certificate-chain verification errors are infrequently forwarded to the client, leading a client to believe that operations were performed as intended with the correct server. This report provided a method to allow servers to detect clients that are having their traffic manipulated by HTTPS inspection products. The website badssl.com is a resource where clients can verify whether their HTTPS inspection products are properly verifying certificate chains. Clients can also use this site to verify whether their HTTPS inspection products are enabling connections to websites that a browser or other client would otherwise reject. For example, an HTTPS inspection product may allow deprecated protocol versions or weak ciphers to be used between itself and a web server.
Because client systems may connect to the HTTPS inspection product using strong cryptography, the user will be unaware of any weakness on the other side of the HTTPS inspection.

Impact

Because the HTTPS inspection product manages the protocols, ciphers, and certificate chain, the product must perform the necessary HTTPS validations. Failure to perform proper validation or adequately convey the validation status increases the probability that the client will fall victim to MiTM attacks by malicious third parties.

Solution

Organizations using an HTTPS inspection product should verify that their product properly validates certificate chains and passes any warnings or errors to the client. A partial list of products that may be affected is available at The Risks of SSL Inspection. Organizations may use badssl.com as a method of determining if their preferred HTTPS inspection product properly validates certificates and prevents connections to sites using weak cryptography. At a minimum, if any of the tests in the Certificate section of badssl.com prevent a client with direct Internet access from connecting, those same clients should also refuse the connection when connected to the Internet by way of an HTTPS inspection product. In general, organizations considering the use of HTTPS inspection should carefully consider the pros and cons of such products before implementing. Organizations should also take other steps to secure end-to-end communications, as presented in US-CERT Alert TA15-120A.

Article source
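Here is a minimal sketch of the badssl.com self-check the alert recommends, using the third-party Python requests package: a client whose certificate validation is intact should refuse every endpoint below, so any request that succeeds from behind an HTTPS inspection product points to broken validation somewhere on the path.

    # Minimal sketch of the badssl.com self-check. A properly validating client
    # should reject all of these deliberately broken endpoints. Requires the
    # third-party "requests" package.
    import requests

    BAD_ENDPOINTS = [
        "https://expired.badssl.com/",
        "https://self-signed.badssl.com/",
        "https://untrusted-root.badssl.com/",
        "https://wrong.host.badssl.com/",
    ]

    for url in BAD_ENDPOINTS:
        try:
            requests.get(url, timeout=10)
            print(f"{url}: ACCEPTED -- validation is broken somewhere on the path")
        except requests.exceptions.SSLError:
            print(f"{url}: rejected, as it should be")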
  10. Firefox 51 to Show Better Warnings When Logging in via Insecure HTTP Pages

Mozilla's security team previewed today a new set of indicators that will be added to Firefox 51, set for launch on Monday, January 23, and subsequent Firefox versions. The biggest update to the Firefox UI is the addition of a new indicator for HTTP pages with password fields. Starting with Firefox 51, whenever users land on a login or registration page hosted over HTTP, Mozilla will show a grey lock with a bright red line across it. This UI change is meant to alert users that their credentials might be intercepted due to the nature of the underlying non-secure HTTP connection. Firefox was the first browser to warn users when entering credentials on HTTP pages, but previously this warning appeared only when users clicked on the "?" icon shown to the left of the address bar.

[Screenshot: previous "login via HTTP" warnings in earlier Firefox versions]

This change was added with the release of Firefox 44 and was soon copied by browsers such as Chrome, which now show similar warnings. The difference is that Chrome's warnings are a little bit more visible.

[Screenshot: Chrome's current "login form on HTTP" warning system]

Starting with Firefox 51, Mozilla will boost the visibility of this warning, so users see it right away, without having to click on a generic button in the URL bar. Furthermore, Mozilla has officially confirmed a feature Bleeping Computer wrote about last November, which is in-page warnings for password fields. These are warnings that appear right under the password field when users are attempting to log in via a non-secure (HTTP) page. This feature is currently scheduled for Firefox 52, set for release on March 6.

[Screenshot: in-page warning for password fields on HTTP pages]

But Mozilla engineers aren't done. In a future Firefox version, Mozilla plans to show the same grey lock with a red line for all non-secure (HTTP) pages. Mozilla engineers were evasive regarding the exact Firefox version when this major UI change will land, as they seem to be waiting for HTTPS adoption to grow beyond the current 50% before showing bright red warnings on half of all Internet pages.

Source
  11. Over a third (35%) of the world's websites are still using insecure SHA-1 certificates despite the major browser vendors saying they'll no longer trust such sites from early next year, according to Venafi. The cybersecurity company analyzed data on over 11 million publicly visible IPv4 websites to find that many have failed to switch over to the more secure SHA-2 algorithm, despite the January deadline. With Microsoft, Mozilla and Google all claiming they won't support SHA-1 sites, those still using the insecure certificates from the start of 2017 will find customers presented with browser warnings that the site is not to be trusted, which will force many elsewhere. In addition, browsers will not display the tell-tale green padlock on the address line for HTTPS transactions, while some might experience performance issues. There's also a chance some sites will be completely blocked, said Venafi.

SHA-2 was created in response to weaknesses in the first iteration – specifically, collision attacks which allow cyber-criminals to forge certificates and perform man-in-the-middle attacks on TLS connections. However, migration to the new algorithm isn't as simple as applying a patch, and with thousands of SHA-1 certificates in use across websites, servers, applications and databases, visibility is a challenge, warned Venafi vice-president of security strategy and threat intelligence, Kevin Bocek. "The deadline is long overdue: the National Institute of Standards and Technology (NIST) has called for eliminating the use of SHA-1 because of known vulnerabilities since 2006," he told Infosecurity. "Most organizations do not know exactly how many certificates they have or where they are being used, and even if they do, it is a time-consuming and disruptive process to update them all manually."

Bocek recommended organizations first work out where their SHA-1 certificates are and how they're being used, before building a migration plan. "Here, you will need to work out where your priorities are, so that you can protect your crown jewels first – i.e. the sites and servers that hold sensitive data or process payments. This way the team can focus on migrating critical systems first to ensure they are better protected," he explained. "The best way to do this is through automation. By automating discovery of digital certificates into a central repository, companies can upgrade all certificates to SHA-2 at the click of a button, where possible. And importantly you can track and report on progress to your board, executive leadership, and auditors. This allows businesses to migrate without interrupting business services or upsetting customers."

Article source
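As a sketch of the discovery step Bocek recommends (standard library plus the third-party cryptography package; the host list is a placeholder inventory, not from the article), the following Python script connects to each host and reports the hash algorithm used to sign the certificate it serves.

    # Sketch of the discovery step: report the signature hash of the
    # certificate each host serves. The host list is a placeholder inventory.
    import ssl
    from cryptography import x509

    HOSTS = ["example.com", "example.org"]  # placeholder inventory

    for host in HOSTS:
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        algo = cert.signature_hash_algorithm.name
        note = "  <-- still SHA-1, migrate to SHA-2" if algo == "sha1" else ""
        print(f"{host}: certificate signed with {algo}{note}")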
  12. Chrome desktop users are spending 75 percent of their time on encrypted HTTPS sites, Google Transparency Report's new HTTPS usage tracker shows. Google has revealed the percentage of pages loaded over HTTPS in Chrome. New figures from Google show that nearly two-thirds of pages loaded on Chrome OS devices are HTTPS sites, followed closely by Mac, Linux, and Windows. HTTPS connections are encrypted, protecting traffic between the browser and web server against man-in-the-middle attacks. They are also seen as a key way to frustrate snooping by ISPs and governments. "A web with ubiquitous HTTPS is not the distant future. It's happening now, with secure browsing becoming standard for users of Chrome," says Chrome's security team. Ahead of changes to how Chrome warns users when connecting to a site over HTTP, Google has beefed up its Transparency Report with an HTTPS usage tracker. The section shows the percentage of pages loaded over HTTPS and the percentage of browsing time spent on HTTPS websites, as well as a breakdown for 10 countries.

Chrome 56's stable release in January ushers in the first phase of new descriptions for HTTP and HTTPS sites. Chrome currently shows a neutral mark before an HTTP site's name in the address bar, but in Chrome 56 it will label them explicitly as 'Not secure'. This practice will start with HTTP pages that collect passwords or credit cards, but will eventually roll out to all HTTP sites. Other components of Google's HTTPS drive include tracking the top 100 sites that have switched to HTTPS, using HTTPS as a ranking signal for search, and its Certificate Transparency project, which monitors certificate authorities issuing the digital certificates that enable HTTPS. Separately, the Let's Encrypt non-profit certificate authority, which provides digital certificates for free, has issued well over a million certificates to websites since its launch last November. The share of webpages loaded by Firefox using HTTPS has climbed from 40 percent when Let's Encrypt launched last November to just under 50 percent today. Google's HTTPS tracker shows that worldwide the percentage of pages loaded over HTTPS in Chrome on all platforms has surpassed 50 percent, up from 40 percent in mid-2015. On Chrome OS the figure is 67 percent. Time spent on HTTPS sites currently ranges between 69 percent and 85 percent, depending on operating system. However, mobile HTTPS page loads are significantly lower, with Chrome on Android climbing from 29 percent in mid-2015 to 42 percent today.

While HTTPS usage is on the rise, there's still a long way to go before it really is ubiquitous. Google figures show that just 34 of the world's top 100 sites have enabled HTTPS by default. Some sites, including Microsoft's search engine Bing and Apple's website, for example, support HTTPS but don't use it by default. Google argues that switching to HTTPS has produced a "negligible effect" on search traffic for many sites. While most tech sites have enabled HTTPS, only a handful of the world's top news publishers have made the switch. However, it's hoped that all website operators will be drawn to more powerful features of the web that aren't available to HTTP sites on Chrome, such as offline support, credit-card autofill, and HTML5 geolocation support. Besides that, HTTP/2 means HTTPS pages can now load faster than HTTP, though browser support is incomplete. Given Chrome has one billion users, access to those features is likely to be an important incentive for web developers to make the switch.

Article source
  13. We have extended the original research and can now use information from public keys (HTTPS, TLS, SSH, SSL) to audit cyber security management and compliance with internal standards.

This post is about our application of research I blogged about earlier – Investigating The Origins of RSA Public Keys. You can also visit: https://enigmabridge.com/https.html .

The main purpose of https – ‘s’ denoting ‘secure’ – is to create a trusted connection for sending sensitive data between your browser and a web service. This is achieved by providing a secure digital ID of the web service (a public key certificate). Until now, it has been widely accepted that such a digital ID didn’t contain any sensitive information that would endanger the security of the web service. No one expected that it could leak internal information about security management – information about methods, tools, and processes that was supposed to be completely hidden from users as well as attackers.

The worrying discovery, made by Enigma Bridge co-founder Petr Svenda PhD, won the best paper award at the USENIX Security Symposium. It shows that sensitive information behind “https”, “tls” and other protocols can be extracted with sophisticated analysis, using only information that every web service presents to anyone accessing it. Svenda and his team applied novel techniques to analyse millions of https keys and revealed how the keys were generated.

“I am puzzled with peeps are not all over this – enormous implications.”, tweeted Daniel Bilar, Information Security Specialist at Visa.

“It is striking that despite 30 years of cryptographic research, no-one has noticed this problem before. It has been hiding in plain sight all along,” commented Ross Anderson, Professor of Security Engineering, after Svenda’s presentation at the University of Cambridge.

[Figure: Analysis of keys from CA certificates]
[Figure: Analysis of keys from HTTPS certificates provided by a CDN company]

We have progressed the scanning methods so that, using only publicly available information, we can pinpoint how organisations, including blue-chip companies, government departments and other operators of critical infrastructure, manage their encryption keys, and identify potential weaknesses in their defenses. You can get a quick insight into whether companies think about the quality of their encryption keys or let their administrators use any tool at hand, instead of using secure hardware key generators. Sharing keys between different applications is another sign of insufficient controls or enforcement of cyber security processes.

Whilst this vulnerability doesn’t compromise any web site directly, it demonstrates that even public information can leak security details and lead attackers to the most vulnerable targets. Use of validated secure hardware for key generation is the best approach to protect against many attacks.

Article source
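To make the "only publicly available information" point concrete, here is a rough sketch, assuming Python 3 with the third-party cryptography package, of how the raw material for this kind of audit is gathered: any client can pull a server's RSA public key and examine the modulus. The actual origin-classification features are in Svenda's paper; this only extracts the key and prints two trivial properties. The hostname is a placeholder.

```python
# Sketch: pull an HTTPS server's RSA public key, the raw material for
# key-origin analysis. The real classification features are described
# in Svenda's paper; here we only print trivial properties of the
# modulus. Assumes Python 3 and the third-party 'cryptography' package.
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa

def public_modulus(host: str, port: int = 443) -> int:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    key = x509.load_der_x509_certificate(der, default_backend()).public_key()
    if not isinstance(key, rsa.RSAPublicKey):
        raise ValueError(f"{host} does not present an RSA key")
    return key.public_numbers().n

n = public_modulus("example.com")  # placeholder host
print("modulus size:", n.bit_length(), "bits")
print("top byte    :", hex(n >> (n.bit_length() - 8)))  # distribution differs per keygen library
```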
  14. Chrome is starting to flag more pages as insecure. Here are five things every webmaster should know about HTTPS.

Google wants the connection between Chrome and your website to be more secure. And, if you're a webmaster, your deadline to increase security is January 2017. By that time, your site needs to serve pages with password or payment fields over an HTTPS connection. If you still serve those pages on an unencrypted connection—HTTP only, not HTTPS—Chrome will warn that the page is "Not secure."

A quick visit to pages on your site will show you whether or not the site supports HTTPS. Open a page with Chrome and look at the URL bar. Click (or tap) on the lock (or info icon) to the left of the URL to view the connection security status, then select "Details" for more info. A green lock and the "Your connection to this site is private" message indicate an HTTPS connection between Chrome and the page.

[Figure: the icon to the left of the web address indicates whether the site supports a secure connection (HTTPS) or not (HTTP)]

In the long term, Google wants every page of your site to support HTTPS—not just the ones with payments or passwords. Google search already prefers to return results from pages with HTTPS over pages that lack a secure connection. To enable an HTTPS connection between your site and visitor browsers, you need to set up an SSL certificate for your website. Here are five things to know that may make the process easier.

1. Your web hosting provider might already serve your sites over a secured connection

For example, Automattic, which runs Wordpress.com, turned on SSL for their hosted customers in April of 2016. Customers didn't have to do anything at all—other than use Wordpress.com to host a site.

2. A few web hosting vendors make certificate setup free and easy

Other web hosting providers offer a secure connection as an option, for free. Squarespace and Dreamhost, for example, both let customers choose to enable secure sites. Configuration of certificates used to be much more difficult, but these vendors streamline the process to a few steps.

Let's Encrypt, a project of the nonprofit Internet Security Research Group, provides the certificates for all three of the vendors just mentioned (Dreamhost, Squarespace, and Wordpress.com). Many other vendors offer easy setup, too. Look at the community-maintained list of web hosting providers that support Let's Encrypt. Notably, Let's Encrypt certificate services are free. Yet some web hosting vendors still charge significant fees for certificates. If you receive some additional authentication or security services, the fees may provide value. (For most non-technical organizations, I suggest you choose—or switch to—a web hosting vendor that supports Let's Encrypt.)

3. If you're on shared hosting, you may need an upgrade

The certificates won't necessarily work in every hosting setup. In some cases, for example, a web hosting provider will only offer SSL with a dedicated server. That may mean a potential increase in hosting costs. In other cases, the certificate will work, but won't work with certain older browsers. For example, in the case of Dreamhost, you may choose to add a unique IP address to your hosting plan along with your Let's Encrypt certificate.
Doing this allows the secure connection to work with certain versions of Internet Explorer on Windows XP, as well as some browsers on older Android devices (e.g., Android 2.4 and earlier). If you're on a shared hosting plan, you may need an upgrade to enable SSL or to support a secure connection for older browsers or devices.

4. Check your login and checkout processes

Many sites rely on third-party vendors for registration, e-commerce, mailing list sign-up, and/or event registration. While most trustworthy vendors already deliver these pages over HTTPS connections, verify that this is the case. Make sure your vendors offer your visitors the same secure connection your site does.

5. After the switch, check your links

Verify that your site links work. Follow your web hosting provider's instructions to make sure that every request for an insecure page (HTTP) redirects automatically to one delivered over a secure connection (HTTPS); a check like this can be automated (see the sketch below). You may need to make some additional changes to your content management system. For example, at Dreamhost, you will need to adjust Wordpress settings as well.

Gone HTTPS yet?

At the time of this writing, we're just two months away from when Chrome begins to deliver more aggressive alerts to warn of insecure pages. Hopefully, you've already secured the necessary pages on your site. But that's just the first step. For most websites, there's little downside to moving to HTTPS as soon as possible.

Article source
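For step 5, a minimal sketch of an automated redirect check, assuming Python 3 (standard library only); the URLs below are placeholders for your own pages:

```python
# Step-5 sketch: confirm that plain-HTTP URLs end up on HTTPS after
# redirects. Standard library only; the URLs below are placeholders.
import urllib.request

def check(url: str) -> None:
    # urlopen follows redirects; geturl() reports where we landed.
    with urllib.request.urlopen(url, timeout=10) as resp:
        final = resp.geturl()
    status = "OK  " if final.startswith("https://") else "FAIL"
    print(f"{status} {url} -> {final}")

for page in ["http://example.com/", "http://example.com/about"]:
    check(page)
```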
  15. A recent thread on Twitter highlighted a field-test flag in the Chromium project that attempts to handle HTTPS errors on base domains. Essentially, if you visit https://securedomain.com and the certificate is only for https://www.securedomain.com, Chrome will detect this and automatically redirect the user to the www domain without showing an error.

In the example from the thread's author, @aidantwoods, visiting https://onlineservices.nsdl.com resulted in Chrome redirecting him to https://www.onlineservices.nsdl.com because the non-www domain did not have a valid certificate. The redirect only happens when a valid certificate is found on www. You can see in the tweet that it is Chrome itself doing the redirect. The behaviour was confirmed by Adrienne Porter Felt, who works on the Chrome usability team.

This could be useful for end-users frustrated with HTTPS errors due to poor server configuration. However, it could give lax administrators who do a quick test in Chrome the false sense that a certificate is correctly configured; a direct check of both names (see the sketch below) avoids that trap. IE, Edge and Firefox may not implement this feature, which could result in a much different user experience. It seems the flag SSLCommonNameMismatchHandling is currently only in the Chrome Canary pre-release browser.

All certificates purchased from Servertastic with the www prefix on the base domain also secure the base domain at no extra cost.

Article source
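Administrators who want to catch this misconfiguration themselves, rather than rely on Chrome to paper over it, can test both names directly. A minimal sketch, assuming Python 3 (standard library only), with a placeholder domain:

```python
# Sketch: does the certificate actually validate for both the bare
# domain and the www name? Standard library only; 'example.com' is a
# placeholder for the domain under test.
import socket
import ssl

def cert_valid_for(hostname: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()  # full chain + hostname checks
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLError, OSError):
        return False

base = "example.com"
for name in (base, "www." + base):
    print(f"{name}: {'certificate OK' if cert_valid_for(name) else 'certificate NOT valid'}")
```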
  16. A majority of Mozilla users were served encrypted pageloads for the first time yesterday, meaning their web browsing data was secured from snoopers and hackers while in transit.

The HTTPS milestone was tweeted by Josh Aas, head of the Let’s Encrypt initiative, which has been working to help smaller websites switch to encrypting their web traffic. Mozilla, one of the organizations backing Let’s Encrypt, was reporting that 40 per cent of page views were encrypted as of December 2015. So it’s an impressively speedy rise.

That said, there are plenty of caveats here — the biggest being that it’s just one browser, Mozilla’s Firefox, which lags far behind the dominant default browsers of the mainstream web. Statista pegs Firefox at just a 7.77 per cent global marketshare for July 2016 vs 49.5 per cent for Google’s Chrome and 13.68 per cent for Apple’s Safari browser.

Add to that, it is also only a subset of Firefox users: those running Mozilla’s telemetry browser performance reporting feature. The telemetry feature is not switched on by default for most Firefox users (only for users of pre-release Firefox builds). And it’s just a one-day snapshot. All of which is to say the sample here is certainly very salami-sliced and clearly not representative of mainstream web usage.

So, while the speed of the shift to HTTPS among this user group is noteworthy and encouraging, there’s still plenty of work to be done to make encrypted connections the rule for the majority of web users and web browsing sessions.

The Let’s Encrypt initiative, which exited beta back in April, is doing some of that work by providing sites with free digital certificates to help accelerate the switch to HTTPS. According to Aas, Let’s Encrypt added more than a million new active certificates in the past week — which is also a significant step up. In the initiative’s first six months (when still in beta) it issued only around 1.7 million certificates in all.

As well as carrots there are sticks driving websites to shift to HTTPS. One of these is Google, which has said it intends to flag unsecured connections in its popular Chrome browser — thereby brandishing the threat of a traffic apocalypse for sites that do not roll out encryption.

Article source
  17. You’ve heard us talk extensively about the importance of moving the web to HTTPS – the encrypted version of the web’s HTTP protocol. Today, CDT is releasing a one-pager aimed toward website system administrators (and their bosses!) that describes the importance of HTTPS. The very short version of our argument is as follows:

Without HTTPS, ISPs and governments can spy on what your users are doing;
Using HTTPS prevents malicious actors from injecting malware into the traffic you serve;
You already need HTTPS to accept payments;
Without HTTPS, ISPs can strip out your ads/referrals and add their own;
Without HTTPS, your website cannot utilize HTTP/2 for optimal performance;
Without HTTPS, you can’t use the latest web features that require HTTPS (e.g., geolocation); and
Without HTTPS, you can’t know if your users received important resources like your terms of service and privacy policy without modification.

At CDT we’ve been looking into ways to motivate increased HTTPS adoption, which is now at well over half of all web requests. However, the amount of unencrypted HTTP is still massive, and there are a lot of large websites that do not use HTTPS. Enter Google’s transparency report, which recently added a section that tracks HTTPS adoption on the top 100 websites. It assesses sites in terms of three factors: do they support HTTPS, do they do so by default, and do they use modern cryptography. Many major sites like Facebook, Google, and Wikimedia have made the switch.

One wrinkle emerges from Google’s report quite clearly: the two big industry sectors not doing so hot in terms of HTTPS are news sites and the adult entertainment industry. If you are a sysadmin at a top-100 adult site, allow us to help you navigate the switch to a more secure web for your users.

To that end, we are excited to announce a partnership to increase HTTPS adoption for online adult entertainment. Over the coming months, CDT will work with the Free Speech Coalition (FSC) – the trade association for the adult entertainment industry – and other HTTPS evangelists to engage with adult website operators and make the case that we make here: HTTPS is the best of all worlds in terms of protecting traffic online and delivering the best experience for users. We plan to conduct a series of webinars and outreach events in partnership with FSC to reach their large network of members. If you are an adult website operator who has questions we can answer, please don’t hesitate to reach out to us or the folks at FSC.

As Google’s transparency report exposed, adult websites are moving slowly; large adult websites seem to overwhelmingly use plain HTTP, or serve ads over plain HTTP. The few adult websites in the top 100 that scored well in Google’s metrics were “cam” sites – websites that facilitate remote adult interactions via real-time video chat between two individuals. That seemed intuitive; all the other top-100 adult sites are focused on one-way broadcast of adult videos, images, etc., rather than two-way real-time communication, which can be far more sensitive than passive consumption of adult content.

There is some good news for adult entertainment sites in terms of how difficult the switch to HTTPS might be.
Princeton researchers Steven Englehardt and Arvind Narayanan published research earlier this year that, in part, showed adult websites have many fewer trackers than news sites. One of the biggest factors in news sites’ slow adoption of HTTPS was the complexity of their ad infrastructure and website analytics; they had to track down every single instance of an insecure page element being served and work with their partners to correct that behavior. So, perhaps the adult industry won’t face the same barriers to HTTPS adoption that journalism has faced?

Even with the challenges, there has been some good movement from news sites recently: The Washington Post, Wired, ProPublica, TechCrunch, and Buzzfeed are great examples of news properties that have all moved to HTTPS (Zack Tollman at Wired has gone so far as to document the process and the various snags they’ve run into during their move to HTTPS).

A more secure Web is in all of our interests – and that includes every corner, from news sites to the more private parts. We look forward to working with diverse organizations, including the Free Speech Coalition, to increase HTTPS adoption and improve all of our security as we interact online.

Article source
  18. Mozilla Launches Free Website Security Testing Service

Observatory code is open source and available on GitHub

Mozilla security engineer April Knight released a project called Observatory, a free website security scanning utility, similar to SSL Labs and High-Tech Bridge's scanning service. The service, which works on top of a Python codebase made available on GitHub, has been under development for months and was approved for a public launch only yesterday. Observatory is aimed at developers, system administrators, and security professionals who want to configure sites to use modern security protocols.

Service uses A to F scores to grade website security

Observatory scans for the presence of basic security features and then gives out a grade from 0 to 130, which is then converted into an A to F score. In its current form, the service scans for the following: [1] Content Security Policy (CSP) status, [2] cookie files using the Secure flag, [3] Cross-Origin Resource Sharing (CORS) status, [4] HTTP Public Key Pinning (HPKP) status, [5] HTTP Strict Transport Security (HSTS) status, [6] the presence of an automatic redirection from HTTP to HTTPS, [7] Subresource Integrity (SRI) status, [8] X-Content-Type-Options status, [9] X-Frame-Options (XFO) status, and [10] X-XSS-Protection status. All of these are basic security recommendations, albeit ones that can be hard to implement, which is one reason why many websites still don't use them.

Over 91% of current websites fail Observatory's tests

According to Knight, who performed automatic scans of over 1.3 million websites, over 91 percent of modern-day websites fail Observatory's tests. "When nine out of 10 websites receive a failing grade, it’s clear that this is a problem for everyone. And by “everyone”, I’m including Mozilla — among our thousands of sites, a great deal of them fail to pass," Knight wrote yesterday, revealing that Observatory was developed to help Mozilla test its own domains first.

Year       | Technology                     | Attack Vector                              | Adoption†
1995       | Secure HTTP (HTTPS)            | Man-in-the-middle, network eavesdropping   | 29.6%
1997       | Secure Cookies                 | Network eavesdropping                      | 1.88%
2008       | X-Content-Type-Options         | MIME type confusion                        | 6.19%
2009-2011  | HttpOnly Cookies               | Cross-site scripting (XSS), session theft  | 1.88%
2009-2011  | X-Frame-Options                | Clickjacking                               | 6.83%
2010       | X-XSS-Protection               | Cross-site scripting                       | 5.03%
2010-2015  | Content Security Policy        | Cross-site scripting                       | .012%
2012       | HTTP Strict Transport Security | Man-in-the-middle, network eavesdropping   | 1.75%
2013-2015  | HTTP Public Key Pinning        | Certificate misissuance                    | .414%
2014       | HSTS Preloading                | Man-in-the-middle                          | .158%
2014-2016  | Subresource Integrity          | Content Delivery Network (CDN) compromise  | .015%
2015-2016  | SameSite Cookies               | Cross-site request forgery (CSRF)          | N/A
2015-2016  | Cookie Prefixes                | Cookie overrides by untrusted sources      | N/A

† Adoption rate amongst the Alexa top million websites as of April 2016.

Source
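Most of the checks in the list above are plain response headers, so a first-pass version of such a scan fits in a few lines. A rough sketch in that spirit (not Mozilla's actual scanner, whose code is on GitHub as noted above), assuming Python 3 with the standard library only; the URL is a placeholder:

```python
# First-pass header scan in the spirit of Observatory (not Mozilla's
# actual scanner). Fetch one page and report which security headers
# are present. Standard library only; the URL is a placeholder.
import urllib.request

HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "X-XSS-Protection",
]

def scan(url: str) -> None:
    with urllib.request.urlopen(url, timeout=10) as resp:
        present = resp.headers
    for name in HEADERS:
        value = present.get(name)
        print(f"{name:27s} {value if value is not None else 'MISSING'}")

scan("https://example.com/")
```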
  19. Researchers at INRIA, the French national research institute for computer science, have devised a new way to decrypt secret cookies which could leave your passwords vulnerable to theft.

Karthikeyan Bhargavan and Gaetan Leurent have devised and carried out an attack – in a crypto research lab – which can pirate traffic from over 600 of the web’s most popular sites and lay bare your previously secure login information. The exploit, dubbed ‘Sweet32’, isn’t easy to carry out, however. It involves mining hundreds of gigabytes of data, and targeting specific users who have accessed a malicious website which saddled them with a bit of malware. Still, the difficulty in carrying out the attack is outweighed by just how completely it subverts some of the internet’s most common encryption schemes. While the attack is very difficult to carry out in practice, the existence of the exploit has security experts on the OpenSSL development team taking notice.

By mining HTTPS or OpenVPN encrypted traffic, the researchers were able to use a mathematical paradox – the birthday paradox – to identify portions of encrypted information and decipher login and password credentials in their entirety.

Don’t panic just yet: security experts speaking with Ars Technica are convinced that the threat posed by the exploit is minimal, in part because it has a relatively simple fix. The key vulnerability exploited in the secret-cookie-decryption scheme is only found in 64-bit block ciphers, which OpenVPN developers have already addressed in the most recent version of their VPN software. Other security experts speaking with Ars have confirmed that the exploit poses little threat as long as developers get on board and stop using 64-bit block ciphers like Triple DES, or ‘3DES’.

“The 3DES issue is of little practical consequence at this time. It is just a matter of good hygiene to start saying goodbye to 3DES,” said Viktor Dukhovni, a member of the OpenSSL team.

Article source
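The birthday paradox also explains the "hundreds of gigabytes" figure: with a 64-bit block cipher like 3DES, two ciphertext blocks are likely to collide after roughly 2^32 blocks, and in CBC mode each collision leaks the XOR of two plaintext blocks. A quick back-of-the-envelope in Python 3 (the real attack needs even more traffic, since it must wait for a usable collision):

```python
# Why Sweet32 needs so much traffic: the birthday bound for a 64-bit
# block cipher. Each collision between two CBC ciphertext blocks leaks
# the XOR of the corresponding plaintext blocks.
import math

BITS = 64                       # 3DES block size
blocks = 2 ** (BITS // 2)       # ~2**32 blocks: collisions become likely
print(f"traffic at 2**32 blocks: ~{blocks * (BITS // 8) / 2**30:.0f} GiB")

def p_collision(k: int, bits: int = BITS) -> float:
    # Standard birthday approximation: 1 - exp(-k(k-1) / 2^(bits+1))
    return 1.0 - math.exp(-k * (k - 1) / (2 ** (bits + 1)))

for k in (2 ** 30, 2 ** 32, 2 ** 34):
    print(f"P(collision) after 2**{int(math.log2(k))} blocks: {p_collision(k):.3f}")
```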
  20. SSL is a great way to encrypt and protect data transferred between servers, or between a browser and a server, from any attempt to spy on it in transit, otherwise known as a man-in-the-middle attack. In this article we will focus on the HTTPS protocol, the methods used to attack it, and the proper ways to defend against those attacks.

Is HTTPS that important?

First, let's establish the importance of using SSL with HTTP traffic. Imagine the following scenario: you are logging in to your bank account from your laptop, connected to your own wifi. It's just you and your little sister on the network, so it's secure, right? But if your wifi uses a weak password or is vulnerable to exploits, someone can gain access to the same network and, with a simple tool, run a packet sniffer, catch all of your and your sister's traffic, look at your password, and even change the data if they want.

Now imagine the same scenario, but your bank uses HTTPS. When you access the website you receive the website's signed certificate, and your browser validates the signature to make sure the certificate belongs to the website. Your browser then encrypts all data before sending it to the server, and vice versa. So if our attacker tries to sniff the data, all they will get is encrypted data. Cool, right?

Let's be honest: no one is 100% secure, and SSL has had a tough couple of years, with attacks like Heartbleed, DROWN and POODLE. These attacks target SSL itself, and all you have to do to mitigate them is stay up to date and apply vendors' patches as they appear.

But what about the danger of sniffing: does using HTTPS solve it? Not completely. Some researchers have managed to sniff HTTPS traffic with tools like sslsniff and sslstrip.

sslsniff

sslsniff is a tool written by Moxie Marlinspike, based on a vulnerability he discovered. Let us quickly describe it. When you request a website, for example example.com, you receive the example.com certificate, as we said before. The certificate must be issued by one of the trusted vendors, and the browser follows the certificate chain from the root certificate (root certificates are embedded in browsers by default) down to the leaf certificate (the example.com certificate). But what if a leaf certificate were used to generate another certificate further down the chain, say for a website like paypal.com? The surprising thing is that this worked: no one had bothered to check whether a leaf certificate was generating other leaf certificates. But how can an attacker use this? The certificate is still for example.com, not paypal.com, and that's why he wrote the sslsniff tool. By intercepting the traffic (a man-in-the-middle attack), you catch the request to paypal.com and, with sslsniff, generate a paypal.com certificate from the example.com leaf certificate you hold, sending it back to the browser instead of the original paypal.com certificate. When the browser tries to validate the certificate, it passes, because the chain is correct. Every request between the browser and the server is then signed by the certificate you generated, so you can decrypt the data as you want and re-send it using the original paypal.com certificate. Boom. Fortunately this has been fixed, and a leaf certificate can no longer generate another certificate.

sslstrip

Another tool by the same man, Moxie Marlinspike.
This time he came up with another man-in-the-middle trick: what if the attacker downgrades the request to HTTP instead of HTTPS? The attacker requests the website on the user's behalf over HTTPS, but between the attacker and the user everything is plain HTTP, and the user will usually not be suspicious enough to notice the difference in the browser.

How to defend against these techniques?

Using HTTPS alone will not solve this completely. Even if you restrict the connection to HTTPS only on the server side, the attacker can still force the user onto HTTP using sslstrip, and you will not notice, because on your end the requests still arrive over HTTPS. This is where the HSTS header comes in.

HTTP Strict Transport Security (HSTS) is a web security policy mechanism that tells the browser it must only connect to the website using a secure HTTPS connection. Just send a header like this from your server:

Strict-Transport-Security: max-age=31536000

The key, Strict-Transport-Security, tells the browser (or any other user agent) to restrict transport to SSL. The value is the maximum age, in seconds, for which to honour the header; 31536000 seconds equals one non-leap year. The user agent will then automatically rewrite any URL to HTTPS before sending the request, allowing only secure connections. A minimal server-side example follows below.

Bottom line: using HTTPS comes with responsibilities. You must stay up to date, patch your system if any vulnerability comes up, renew your certificate on time, and don't forget to use a Strict-Transport-Security policy.

Article source
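A minimal sketch of serving that header, using only the Python 3 standard library; the certificate and key file names are placeholders, and a real deployment would normally sit behind a proper web server:

```python
# Toy HTTPS server that sends the HSTS header described above. The
# cert/key file names are placeholders. Note that browsers only honour
# Strict-Transport-Security when it arrives over HTTPS.
import http.server
import ssl

class HSTSHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # One non-leap year, in seconds, as in the example above.
        self.send_header("Strict-Transport-Security", "max-age=31536000")
        super().end_headers()

httpd = http.server.HTTPServer(("", 8443), HSTSHandler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")  # placeholder paths
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```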
  21. Mozilla plans to launch an update for the built-in password manager in Firefox that will make HTTP passwords work on HTTPS sites as well.

If you use the built-in functionality to save passwords in Firefox currently, you may know that the manager distinguishes between the HTTP and HTTPS protocols. When you save a password for http://www.example.com/, it won't work on https://www.example.com/. When you visit the site using HTTPS later on, Firefox won't suggest the username and password saved previously when connected via HTTP. One option was to save passwords for HTTP and HTTPS sites separately; another was to open the password manager and copy the username and password manually whenever needed on the HTTPS version of a site.

With more and more sites migrating to HTTPS, or at least providing users with an HTTPS option, it was time to evaluate the Firefox password manager's behavior in this regard.

Firefox 49: HTTP passwords on HTTPS sites

Mozilla decided to change the behavior in the following way starting with the release of Firefox 49: passwords saved for the HTTP protocol will work automatically when connecting via HTTPS to the same site. In other words, if an HTTP password is stored in Firefox, it will be used for both HTTP and HTTPS sites once Firefox 49 is released.

The other way around does not work, however. Passwords saved explicitly for HTTPS won't be used when a user connects to the HTTP version of the site. The main reason for this is security: because HTTP does not use encryption, the password and username could easily be recorded by third parties.

Check out the bug listing on Bugzilla if you are interested in the discussion that led to the change in Firefox 49.

Closing Words

Firefox users who use the password manager of the web browser may notice the change once their version of the browser is updated to version 49. It should make things a bit more comfortable for those users, especially if a lot of HTTP passwords are saved already. With more and more sites migrating over to HTTPS, it is likely that this will be beneficial to users of the browser. (via Sören)

Article source
  22. The coolest talk of this year's Blackhat must have been the one by Sean Devlin and Hanno Böck. The talk summarized a paper from earlier this year, in a very cool way: Sean walked on stage and announced that he didn't have his slides. He then said that it didn't matter, because he had a good idea of how to retrieve them. He proceeded to open his browser and navigate to a malicious webpage. Some javascript there seemed to send various requests to a website in his place, until some MITM attacker found what they came for. The page refreshed and the address bar now showed https://careers.mi5.gov.uk as well as a shiny green lock. But instead of the content we would have expected, the white title of their talk was blazing on a dark background.

What happened is that a MITM attacker tampered with the mi5 website's page and injected the slides, in HTML form, into it. They then went ahead and gave the whole presentation via the same mi5 webpage.

How did it work? The idea is that repeating a nonce in AES-GCM is... BAD.

[Figure: the AES-GCM mode of operation, from Wikipedia]

You can't see it in the diagram, but the counter has a unique nonce prepended to it. It's supposed to change for every different message you're encrypting. AES-GCM is THE AEAD mode. We've been using it mostly because it's a nice all-in-one function that does encryption and authentication for you. So instead of shooting yourself in the foot trying to MAC then-and-or Encrypt, an AEAD mode does all of that for you securely. We're not too happy with it though, and we're looking for alternatives in the CAESAR competition (there is also AES-SIV).

The presentation had an interesting slide on some opinions:

"We conclude that common implementations of GCM are potentially vulnerable to authentication key recovery via cache timing attacks." (Emilia Käsper, Peter Schwabe, 2009)

"AES-GCM so easily leads to timing side-channels that I'd like to put it into Room 101." (Adam Langley, 2013)

"The fragility of AES-GCM authentication algorithm" (Shay Gueron, Vlad Krasnov, 2013)

"GCM is extremely fragile" (Kenny Paterson, 2015)

One of the bad things is that if you ever repeat a nonce, and someone malicious sees it, that person will be able to recover the authentication key. It's the H in the AES-GCM diagram, and it is derived from the encryption key K (by encrypting an all-zero block). If you want to know how the recovery works, check Antoine Joux's comment on AES-GCM.

Now, with this attack we lose integrity/authentication as soon as a nonce repeats. This means we can modify the ciphertext in the middle and no one will realize it. But modifying the ciphertext doesn't mean we can modify the plaintext, right? Wait for it...

Since AES-GCM is basically counter mode (CTR mode) with a GMAC, the attacker can do the same kind of bitflip attacks he can do on CTR mode. This is pretty bad. In the context of TLS, you often (almost always) know what the website will send to a victim, and that's how you can modify the page with anything you want.

[Figure: the CTR mode of operation]

Look at the diagram if you don't remember CTR mode. If you know the nonce and the plaintext, fundamentally the thing behaves like a stream cipher and you can XOR the keystream out of the ciphertext. After that, you can encrypt something else by XOR'ing it with the keystream. (A toy demonstration follows at the end of this piece.)

They found a pretty big number of vulnerable targets by just sending a dozen messages to the whole IPv4 space.

My thoughts

Now, here's how the TLS 1.2 specification describes the structure of the nonce + counter in AES-GCM: [salt (4) + nonce (8) + counter (4)].
The whole thing is one AES block (16 bytes), and the salt is a fixed 4-byte part derived from the key exchange happening during TLS's handshake. The only two purposes of the salt I could think of are:

preventing multi-target attacks on AES;
making the nonce smaller, which makes nonce-repeating attacks easier.

Pick the reason you prefer. Now, if you picked the second reason, let's recap: the nonce is the part that should be different for every message you encrypt. Some implementations increment it like a counter; others generate it at random. This is interesting to us because the birthday paradox tells us that we'll have more than a 50% chance of seeing a nonce repeat after \(2^{32}\) messages. Isn't that pretty low?

Article source
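To see the bitflip half of this concretely, here is a toy demonstration, assuming Python 3 with a recent version of the third-party cryptography package: under a repeated nonce, the CTR layer inside GCM is a reused keystream, so known plaintext yields the keystream, and the keystream encrypts anything the attacker likes. (Forging the GCM tag with the recovered authentication key H is the other half of the attack and is not shown.)

```python
# Toy demo of the bitflip attack under nonce reuse: AES-CTR with a
# repeated nonce is a reused keystream. Assumes Python 3 and the
# third-party 'cryptography' package. Key and nonce are random here;
# in the real attack the nonce repeat is the server's mistake.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
nonce = os.urandom(16)  # full 16-byte CTR input block, reused below

def ctr_encrypt(data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(data) + enc.finalize()

known_pt = b"Transfer $100 to Alice.."
victim_ct = ctr_encrypt(known_pt)

# The attacker knows the plaintext (a predictable page), so the
# keystream falls right out of the ciphertext...
keystream = bytes(c ^ p for c, p in zip(victim_ct, known_pt))
# ...and encrypts a message of the attacker's choosing instead.
forged_ct = bytes(k ^ p for k, p in zip(keystream, b"Transfer $999 to Mallo!!"))

print(forged_ct == ctr_encrypt(b"Transfer $999 to Mallo!!"))  # True
```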
  23. Attack relies on only a piece of malicious JavaScript code

New HEIST attack extracts content from HTTPS streams

Two Belgian security researchers have presented their latest work at the Black Hat security conference in Las Vegas: a new Web-based attack that can steal encrypted content from HTTPS traffic using nothing more than JavaScript. Their attack is called HEIST, which stands for HTTP Encrypted Information can be Stolen through TCP-Windows.

The attack relies on a malicious entity embedding special JavaScript code on a Web page. This can be done right on the website if the attacker owns the site, or via JS-based ads if the attacker needs to embed the attack vector on third-party sites. The most deadly attack scenario is the latter, when the attacker sneakily embeds malicious JS inside an ad, which is shown on your banking portal or social media accounts.

HEIST is another side-channel attack on HTTPS

At its core, the JavaScript code performs two main functions. The first is to try to fetch content, via a hidden JavaScript call, from a private page that holds sensitive information such as credit card numbers, real names, phone numbers, SSNs, etc. This page is protected in most cases by HTTPS. Secondly, as the content is retrieved, the attacker pinpoints the size of the data embedded on the sensitive page using a repeated probing mechanism of JavaScript calls. HEIST basically brute-forces the size of small portions of data that get added to a page as it loads. As such, the attack can take a while. If the page is loaded using the next-gen version of HTTP, the HTTP/2 protocol, the time needed to carry out the attack is much shorter, because HTTP/2 supports native parallel requests.

Fundamentally, HEIST is another side-channel attack on HTTPS, one that doesn't break SSL encryption but leaks enough information about the data exchanged in HTTPS traffic to guess its content. As data is transferred in small TCP packets, by guessing the size of these packets an attacker can infer their content.

HEIST is CRIME or BREACH attacks embedded in JavaScript

"Combined with the fact that SSL/TLS lacks length-hiding capabilities, HEIST can directly infer the length of the plaintext message," the researchers explain. "Concretely, this means that compression-based attacks such as CRIME and BREACH can now be performed purely in the browser, by any malicious website or script, without requiring a man-in-the-middle position."

According to the researchers, the simplest way to block HEIST attacks is by disabling support for either third-party cookies or JavaScript execution in your browser. At this point in time, and judging by how the Internet has evolved, disabling JavaScript is not a realistic solution.

More details, including a breakdown of the complicated attack routine, are available in Tom Van Goethem and Mathy Vanhoef's research paper.

[Figure: HEIST attack breakdown]

Article source
  24. Comms patterns ID OS, browser and application

Encryption might hide important content from prying eyes, but a group of Israeli researchers has found that HTTPS traffic alone can fingerprint a user's operating system, browser, and application. With a big enough learning set, they write, they were able to identify users' environments with 96.06 per cent accuracy.

In their paper at Arxiv, the group – from Ariel University and the Ben-Gurion University of the Negev – show that the characteristics of communication traffic (timing, flows in both directions, variations in packet size and the like) are distinctive enough to create the fingerprint.

It's not only passing spooks that will take an interest in such identification: “A passive adversary may also collect statistics about groups of users for improving their marketing strategy. In addition, an attacker may use tuples statistics for identifying a specific person”, the researchers write. The information would also help someone targeting an attack, since eavesdroppers (for example, someone sucking traffic out of a public Wi-Fi hotspot) “can easily leverage the information about the user to fit an optimal attack vector”.

The operating systems the researchers tested were Windows, Linux (Ubuntu) and OSX; they tested the Chrome, IE, Firefox and Safari browsers; and the applications in the dataset included Facebook and Twitter. Connection behaviour had already been used to identify Skype and other VoIP conversations in spite of encryption, but that didn't reach all the way to the underlying operating system, the paper says.

The researchers have published their dataset here for others to test.

The Source
  25. The Electronic Frontier Foundation (EFF) has announced the release of its millionth free HTTPS certificate as part of the Let's Encrypt Certificate Authority initiative.

Last year EFF, which co-founded Let's Encrypt CA with Mozilla and researchers from the University of Michigan, made public its aim of building a more secure future for the World Wide Web. This began with issuing and managing free certificates for any website that needs them, aiding in the transition from HTTP to the more secure HTTPS protocol on the web. Now, just three months on from the first beta version of the service becoming available, the organization has reached a significant landmark, showing it is living up to its promise of helping to ensure websites are more secure with better encryption. What's more, because a single certificate can cover more than one domain, the million certs Let's Encrypt CA has issued are actually valid for 2.5 million fully-qualified domain names.

In a post on its website EFF said: “It is clear that the cost and bureaucracy of obtaining certificates was forcing many websites to continue with the insecure HTTP protocol, long after we've known that HTTPS needs to be the default. We're very proud to be seeing that change, and helping to create a future in which newly provisioned websites are automatically secure and encrypted.”

In a statement to Infosecurity, Brian Honan, owner and CEO of BH Consulting, praised Let's Encrypt CA and said the initiative is helping to create a safer internet. “This is a great milestone for Let’s Encrypt CA to reach, particularly as it has only relatively recently been available for general use. Hopefully the take-up will continue and more and more companies and web sites will take this opportunity to protect their visitors’ privacy and improve their online security.”

Article source