Showing results for tags 'certificate'.

Found 17 results

  1. Topics covered: web development basics with HTML; Cascading Style Sheets (CSS); JavaScript programming; the jQuery JavaScript library; the Bootstrap framework. You will learn to: build a simple HTML text site; style web pages using CSS; program websites with JavaScript; build a Pipboy using Bootstrap; and build and publish a Google Chrome extension. Description: If you would like to get started as a front-end web developer, you are going to LOVE this course! Work on projects ranging from a simple HTML page to a complete JavaScript-based Google Chrome extension. This course covers the most popular web development frameworks and will get you started on your path towards becoming a full-stack web developer! LINK: https://www.udemy.com/front-end-web-development/?couponCode=FBFREE18 Complete Python 3 Course with certificate... Beginner to Advanced... below :)
  2. Kyle_Katarn

    MassCert 1.6

    Summary: MassCert is a user-friendly batch digital signature utility. Features: automatic mass-signing using Microsoft's SignTool (Windows SDK required); timestamping compliant with RFC 3161; support for PKCS #12 personal information files (X.509 certificate + private key bundle); automatic verification of correct execution; internationalization support. Homepage: http://www.kcsoftwares.com/?home Download: http://www.kcsoftwares.com/files/masscert.exe Portable: http://www.kcsoftwares.com/files/masscert.zip
  3. jayesh30202

    LearnFlakes Limited Signup

    LearnFlakes is a tracker specially for people interested in learning more about computing and IT. The tracker covers IP, CCNA, CCNP & CCIE routing, Linux, system administration, security, programming languages & many other IT-related topics. Signup link
  4. We have a little problem on the web right now and I can only see this becoming a larger concern as time goes by. More and more sites are obtaining certificates, vitally important documents that we need to deploy HTTPS, but we have no way of protecting ourselves when things go wrong.

Certificates

We're currently seeing a bit of a gold rush for certificates on the web as more and more sites deploy HTTPS. Beyond the obvious security and privacy benefits of HTTPS, there are quite a few reasons you might want to consider moving to a secure connection that I outline in my article Still think you don't need HTTPS?. Commonly referred to as 'SSL certificates' or 'HTTPS certificates', they are being obtained at a rate we've never seen before in the history of the web. Every day I crawl the top 1 million sites on the web and analyse various aspects of their security, and every 6 months I publish a report. You can see the reports here, but the main result to focus on right now is the adoption of HTTPS. Not only are we continuing to deploy HTTPS, the rate at which we're doing so is increasing too. This is what real progress looks like. The process of obtaining a certificate has become simpler and simpler over time and now, thanks to the amazing Let's Encrypt, it's also free to get one. Put simply, we send a Certificate Signing Request (CSR) to the Certificate Authority (CA) and the CA will challenge us to prove our ownership of the domain. This is usually done by setting a DNS TXT record or hosting a challenge code somewhere on a random path on our domain. Once this challenge has been satisfied the CA will issue the certificate and we can then present it to the browser and get our green padlock and HTTPS in the address bar. I have a few tutorials to help you out with this process, including how to get started, smart renewal and dual certificates. But this is all great, right? What's the problem here?
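The CSR step described above can be sketched in a few lines. This is a hedged illustration using the third-party Python "cryptography" package; the domain name is a placeholder, and a real request would then go to a CA such as Let's Encrypt via its challenge process.

```python
# Generate a private key and a CSR for a placeholder domain.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The private key stays on our server; only the CSR is sent to the CA.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .sign(key, hashes.SHA256())
)

# This PEM blob is what the CA receives before issuing its challenge.
pem = csr.public_bytes(serialization.Encoding.PEM)
print(pem.decode().splitlines()[0])  # -----BEGIN CERTIFICATE REQUEST-----
```

Once the CA's DNS or HTTP challenge succeeds, the certificate it issues corresponds to this key pair.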
The problem is when things don't go according to plan and you have a bad day.

We've been hacked

Nobody ever wants to hear those words, but the sad reality is that we do, more often than any of us would like. Hackers can go after any number of things when they gain access to our servers, and often one of the things they can access is our private key. The certificates we use for HTTPS are public documents; we send them to anyone that connects to our site, but the thing that stops other people using our certificate is that they don't have our private key. When a browser establishes a secure connection to a site, it checks that the server has the private key for the certificate it's trying to use; this is why no one but us can use our certificate. If an attacker gets our private key though, that changes things. Now that an attacker has managed to obtain our private key, they can use our certificate to prove that they are us. Let's say that again. There is now somebody on the internet who can prove they are you, when they are not you. This is a real problem, and before you think 'this will never happen to me', do you remember Heartbleed? This tiny bug in the OpenSSL library allowed an attacker to steal your private key, and you didn't even have to do anything wrong for it to happen. On top of this there are countless ways that private keys are exposed by accident or negligence. The simple truth is that we can lose our private key, and when this happens we need a way to stop an attacker from using our certificate. We need to revoke the certificate.

Revocation

In a compromise scenario we revoke our certificate so that an attacker can't abuse it. Once a certificate is marked as revoked the browser will know not to trust it, even though it's otherwise valid: the owner has requested revocation and no client should accept it. Once we know we've had a compromise, we contact the CA and ask that they revoke our certificate.
We need to prove ownership of the certificate in question and once we do that, the CA will mark the certificate as revoked. Now that the certificate is revoked, we need a way of communicating this to any client that might require the information. Right now the browser doesn't know, and of course that's a problem. There are two mechanisms we can use to make this information available: a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP).

Certificate Revocation Lists

A CRL is a really simple concept and is quite literally just a list of all certificates that a CA has marked as revoked. A client can contact the CRL server and download a copy of the list. Armed with a copy of the list, the browser can check whether the certificate it has been provided is on that list. If the certificate is on the list, the browser knows the certificate is bad and shouldn't be trusted; it will throw an error and abandon the connection. If the certificate isn't on the list then everything is fine and the browser can continue the connection. The problem with a CRL is that it contains a lot of revoked certificates from the particular CA. Without getting into too much detail, CRLs are broken down per intermediate certificate a CA has, and the CA can fragment the lists into smaller chunks, but the point I want to make remains the same: the CRL is typically not an insignificant size. The other problem is that if the client doesn't have a fresh copy of the CRL, it has to fetch one during the initial connection to the site, which can make things look much slower than they actually are. This doesn't sound particularly great, so how about we take a look at OCSP?

Online Certificate Status Protocol

OCSP provides a much nicer solution to the problem and has a significant advantage over the CRL approach. With OCSP we ask the CA for the status of a single, particular certificate.
This means all the CA has to do is respond with a good/revoked answer, which is considerably smaller than a CRL. Great stuff! It is true that OCSP offered a significant performance advantage over fetching a CRL, but that performance advantage came with a cost (don't you hate it when that happens?). The cost was a pretty significant one too: your privacy... When we think about what an OCSP request is, a request for the status of a very particular, single certificate, you may start to realise that you're leaking some information. When you send an OCSP request, you're basically asking the CA this: Is the certificate for pornhub.com valid? So, not exactly an ideal scenario. You're now advertising your browsing history to some third party that you didn't even know about, all in the name of HTTPS, which set out to give us more security and privacy. Kind of ironic, huh? But wait, there's something else.

Hard fail

I talked about the CRL and OCSP responses above, the two mechanisms a browser can use to check if a certificate is revoked. Upon receiving the certificate, the browser will reach out to one of these services and perform the necessary query to ultimately ascertain the status of the certificate. But what if your CA is having a bad day and the infrastructure is offline? The browser has only two choices. It can refuse to accept the certificate because it can't check the revocation status, or it can take a risk and accept the certificate without knowing the revocation status. Both options come with advantages and disadvantages. If the browser refuses to accept the certificate, then every time your CA has a bad day and their infrastructure goes offline, your site goes offline too. If the browser continues and accepts the certificate, it risks using a certificate that could have been stolen and exposes the user to the associated risks.
It's a tough call, but right now, today, neither of these actually happens...

Soft Fail

What actually happens today is that a browser will do what we call a soft fail revocation check. That is, the browser will try to do a revocation check, but if the response doesn't come back, or doesn't come back in a short period of time, the browser will simply forget about it. Even worse, Chrome doesn't do revocation checks at all. Yes, you did read that right: Chrome doesn't even try to check the revocation status of certificates that it encounters. What you might find even more odd is that I completely agree with their approach, and I'm quite happy to report that it looks like Firefox will be joining the party very soon too. Let me explain. The problem we had with hard fail was obvious: the CA has a bad day and so do we. That's how we arrived at soft fail. The browser will now try to do a revocation check but will ultimately abandon the check if it takes too long or it appears the CA is offline. Wait, what was that last part? The revocation check is abandoned if "it appears the CA is offline". I wonder if an attacker could simulate that? If you have an attacker performing a MiTM attack, all they need to do is block the revocation request and make it look like the CA is offline. The browser will then soft fail the check and continue on to happily use the revoked certificate. Every single time you browse and encounter this certificate whilst not under attack, you will pay the cost of performing the revocation check only to find out the certificate is not revoked. The one time you're under attack, the one time you really need revocation to work, the attacker will simply block the connection and the browser will soft fail through the failure. Adam Langley at Google came up with the best description for what revocation is: it's a seatbelt that snaps in a car crash, and he's right.
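The soft-fail behaviour described above boils down to logic like the following sketch (plain Python stdlib; the host names are placeholders and the actual OCSP request/response handling is elided):

```python
import socket

def revocation_status(ocsp_host: str, timeout: float = 0.3) -> str:
    """Return 'checked' if the responder answered, 'unknown' if unreachable."""
    try:
        with socket.create_connection((ocsp_host, 80), timeout=timeout):
            # A real client would send the OCSP request and verify the
            # CA-signed response here; elided in this sketch.
            return "checked"
    except OSError:
        # Responder unreachable -- or an attacker is black-holing the check.
        return "unknown"

def browser_accepts(status: str) -> bool:
    # Soft fail: "unknown" is treated exactly like a good response.
    return status in ("checked", "unknown")

# ".invalid" is reserved (RFC 2606) and never resolves, which from the
# client's point of view looks identical to a MiTM blocking the responder.
print(browser_accepts(revocation_status("ocsp.example.invalid")))  # True
```

The attacker only has to make the responder look offline and the check silently passes.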
You get in your car every day and you put your seatbelt on, and it makes you feel all warm and fuzzy that you're safe. Then, one day, things don't quite go to plan: you're involved in a crash and out of the windscreen you go. The one time you needed it, it let you down.

Fixing the problem

Right now, at this very moment in time, the truth is that there is no reliable way to fix this problem: revocation is broken. There are a couple of things worth bringing up though, and we may be able to look to a future where we have a reliable revocation checking mechanism.

Proprietary mechanisms

If a site is compromised and an attacker gets hold of the private key, they can impersonate that site and cause a fair amount of harm. That's not great, but it could be worse. What if a CA was compromised and an attacker got access to the private key for an intermediate certificate? That would be a disaster, because the attacker could then impersonate pretty much any site they like by signing their own certificates. Rather than doing online checks for revocation of intermediate certificates, Chrome and Firefox both have their own mechanisms that work in the same way. Chrome calls theirs CRLSets and Firefox calls theirs OneCRL; they curate lists of revoked certificates by combining available CRLs and selecting certificates from them to be included. So, we have high value certificates like intermediates covered, but what about you and me?

OCSP Must-Staple

To explain what OCSP Must-Staple is, we first need a quick background on OCSP Stapling. I'm not going to go into too much detail; you can get that in my blog on OCSP Stapling, but here is the TL;DR. OCSP Stapling saves the browser having to perform an OCSP request by providing the OCSP response along with the certificate. It's called OCSP Stapling because the idea is that the server would 'staple' the OCSP response to the certificate and provide both together.
At first glance this seems a little odd, because the server is almost 'self certifying' its own certificate as not being revoked, but it all checks out. The OCSP response is only valid for a short period and is signed by the CA in the same way that the certificate is signed. So, in the same way the browser can verify the certificate definitely came from the CA, it can also verify that the OCSP response came from the CA too. This solves the massive privacy concern with OCSP and also removes the burden on the client of having to perform this external request. Winner! But not so much actually, sorry. OCSP Stapling is great and we should all support it on our sites, but do we honestly think an attacker is going to enable OCSP Stapling? No, I didn't think so; of course they aren't going to. What we need is a way to force the server to OCSP Staple, and this is what OCSP Must-Staple is for. When requesting our certificate from the CA, we ask them to set the OCSP Must-Staple flag in the certificate. This flag instructs the browser that the certificate must be served with an OCSP Staple or it has to be rejected. Setting the flag is easy. Now that we have a certificate with this flag set, we as the host must ensure that we OCSP Staple or the browser will not accept our certificate. In the event of a compromise and an attacker obtaining our key, they must also supply an OCSP Staple when they use our certificate. If they don't include an OCSP Staple, the browser will reject the certificate; and if they do include an OCSP Staple, then the OCSP response will say that the certificate is revoked and the browser will reject it. Tada!

OCSP Expect-Staple

Whilst Must-Staple sounds like a great solution to the problem of revocation, it isn't quite there just yet. One of the biggest problems that I see is that, as a site operator, I don't actually know how reliably I OCSP staple and whether the client is happy with the stapled response.
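As noted above, setting the Must-Staple flag is easy: it is the X.509 "TLS Feature" extension (status_request) requested at CSR time. A hedged sketch with the third-party Python "cryptography" package (the domain is a placeholder, and whether the flag ends up in the issued certificate is ultimately the CA's decision):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    # status_request is the OCSP stapling TLS feature; requiring it in
    # the certificate is exactly the OCSP Must-Staple flag.
    .add_extension(x509.TLSFeature([x509.TLSFeatureType.status_request]), critical=False)
    .sign(key, hashes.SHA256())
)

ext = csr.extensions.get_extension_for_class(x509.TLSFeature)
print(x509.TLSFeatureType.status_request in ext.value)  # True
```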
Without OCSP Must-Staple enabled this isn't really a problem, but if we do enable OCSP Must-Staple and then we don't OCSP Staple properly or reliably, our site will start to break. To try and get some feedback about how we're doing in terms of OCSP Stapling, we can enable a feature called OCSP Expect-Staple. I've written about this before and you can get all of the details in the blog OCSP Expect-Staple, but I will give the TL;DR here. You request an addition to the HSTS preload list that asks the browser to send you a report if it isn't happy with the OCSP Staple. You can collect the reports yourself or use my service, report-uri.io, to do it for you, and you can learn exactly how often you would hit problems if you turned on OCSP Must-Staple. Because getting an addition to the HSTS preload list isn't as straightforward as I'd like, I also wrote a spec to define a new security header called Expect-Staple to deliver the same functionality with less effort involved. The idea is that you can now set this header and enable reporting to get the feedback we desperately need before enabling Must-Staple. Enabling the header would be simple, just like all of the other security headers:

Expect-Staple: max-age=31536000; report-uri="https://scotthelme.report-uri.io/r/d/staple"; includeSubDomains; preload

Rogue certificates

One of the other things that we have to consider whilst we're on the topic of revocation is rogue certificates. If somebody manages to compromise a CA, or otherwise obtains a certificate that they aren't supposed to have, how are we supposed to know? If I were to breach a CA right now and obtain a certificate for your site without telling you, you wouldn't ever learn about it unless it was widely reported. You could even have an insider threat: someone in your organisation could obtain certificates without going through the proper internal channels and do with them as they please.
We need a way to have 100% transparency, and very soon we will: Certificate Transparency.

Certificate Transparency

CT is a new requirement that will be mandatory from early next year and will require that all certificates are logged in a public log if the browser is to trust them. You can read the article for more details on CT, but what will generally happen is that a CA will log all certificates it issues in a CT log. These logs are totally public and anyone can search them, so the idea is that if a certificate is issued for your site, you will know about it. For example, here you can see all certificates issued for my domain, and you can search for your own; you can also use CertSpotter from sslmate to do the same, and I use the Facebook Certificate Transparency Monitoring tool, which will send you an email each time a new certificate is issued for your domain/s. CT is a fantastic idea and I can't wait for it to become mandatory, but there is one thing to note: CT is only the first step. Knowing about these certificates is great, but we still have all of the above mentioned problems with revoking them. That said, we can only tackle one problem at a time, and having the best revocation mechanisms in the world is no good if we don't know about the certificates we need to revoke. CT gives us that much at least.

Certificate Authority Authorisation

Stopping a certificate being issued is much easier than trying to revoke it, and this is exactly what Certificate Authority Authorisation allows us to start doing. Again, there are further details in the linked article, but the short version is that we can now authorise only specific CAs to issue certificates for us, instead of the current situation where we can't indicate any preference at all. It's as simple as creating a DNS record:

scotthelme.co.uk. IN CAA 0 issue "letsencrypt.org"

Whilst CAA isn't a particularly strong mechanism and it won't help in all mis-issuance scenarios, there are some where it can help us, and we should assert our preference by creating a CAA record.

Conclusion

As it currently stands there is a real problem: we can't revoke certificates if someone obtains our private key. Just imagine how that will play out the next time Heartbleed comes along! One thing that you can do to try and limit the impact of a compromise is to reduce the validity period of the certificates you obtain. Instead of three years, go for one year or even less; Let's Encrypt only issues certificates that are valid for ninety days! With a reduced lifetime on your certificate, you have less of a problem if you're compromised because an attacker has less time to abuse the certificate before it expires. Beyond this, there's very little we can do. To demonstrate the issue and just how real this is, try and visit the new subdomain I set up on my site, revoked.scotthelme.co.uk. As you can probably guess, this subdomain is using a revoked certificate and there's a good chance that it will load just fine. If it doesn't, if you do get a warning about the certificate, then your browser is still doing an OCSP check and you just told the CA you were going to my site. To prove that this soft fail check is pointless, you can add a hosts entry for ocsp.int-x3.letsencrypt.org pointing it at an unreachable address, or block it via some other means, and then try the same thing again. This time the page will load just fine because the revocation check will fail and the browser will continue loading the page. Some use that is... The final point I want to close on is a question, and it's this: should we fix revocation? That's a story for another day though! Article source
  5. 'Take Google's advice and get out of CA infrastructure' Mozilla has weighed in to the ongoing Symantec-Google certificate spat, telling Symantec it should follow the Alphabet subsidiary's advice on how to restore trust in its certificates. Readers will recall that Symantec has repeatedly issued certs that didn't ring true with browser-makers, and at the end of April 2017 Google started a countdown, the conclusion of which would see its Chrome browser warn users if it encountered Symantec certs. Symantec offered up a remediation plan, mostly based on putting auditors through the joint. But it looks like that's not sufficient for Mozilla. UK-based Mozilla developer Gervase Markham has posted his note to Symantec at Google Docs here. Mozilla strongly suggests that Symantec take a deep breath and swallow the bitter pills that Doctor Google has prescribed. Chief among Google's suggestions is that Symantec work with one or more existing certificate authorities (CAs) to take over its troubled infrastructure and rework its validation processes. That would relegate the company to more-or-less reseller status, letting it maintain its customer relationships but relieving it of responsibility for ongoing operations. The alternative, Markham writes, is for Symantec to clean up and document the extent of its publicly-trusted PKI and “cut off parts” that don't comply with the CA/Browser Forum's Baseline Requirements; Mozilla should also “restrict newly-issued Symantec certificates to a maximum validity period of 13 months”; and over time, Markham says, Mozilla will reduce the lifetime of existing Symantec certificates to 13 months as well.

Why so harsh?

The core of Mozilla's argument is that it just doesn't feel Symantec grasps how serious its issues are.
As Markham writes, Symantec has not “adequately demonstrate[d] that they have grasped the seriousness of the issues here”, and its “proposed measures mostly amount to doing more of what, in the past, has not succeeded in producing consistent high standards.” The reason, Markham writes, isn't wrongdoing (so “we are not in StartCom/WoSign territory”); it's simply that Symantec seems to have lost control of its intermediaries. Article source
  6. Kyle_Katarn

    MassCert 1.4.1

    Summary: MassCert is a user-friendly batch digital signature utility. Features: automatic mass-signing using Microsoft's SignTool (Windows SDK required); timestamping compliant with RFC 3161; support for PKCS #12 personal information files (X.509 certificate + private key bundle); automatic verification of correct execution; internationalization support. Homepage: http://www.kcsoftwares.com/?home Download: http://www.kcsoftwares.com/files/masscert.exe Portable: http://www.kcsoftwares.com/files/masscert.zip
  7. Google announced yesterday plans to become a self-standing, certified, and independent Root Certificate Authority, meaning the company would be able to issue its own TLS/SSL certificates for securing its web traffic via HTTPS, and not rely on intermediaries, as it does now. In past years, Google has used certificates issued by several companies, the latest suppliers being GlobalSign and GeoTrust. Currently, Google operates a subordinate Certificate Authority (Google Internet Authority G2 - GIAG2), which manages and deploys certificates to Google's infrastructure. Google is in the process of migrating all services and products from GIAG2 certificates to the new Root Certificate Authority, named Google Trust Services (GTS). According to the search giant, the migration to GTS will take time, and users will see mixed certificates from both GIAG2 and GTS until then. What this means for regular users is that when they click to view a site's HTTPS security certificate, it will say "Google Trust Services" instead of Google Internet Authority, GeoTrust, GlobalSign, or any other name. This will make it easier to identify authentic Google services. For Google, GTS means its engineers will have full control over its HTTPS certificates from the time they're issued to the time they're revoked. Situations where another Certificate Authority issues SSL certificates for Google domains will stand out immediately. GTS will provide HTTPS certificates for a broad range of services, from public websites to API servers, for all Alphabet companies, not just Google. More technical information, such as Google's current active root certificates and their SHA1 fingerprints, is available on the Google Trust Services homepage (https://pki.goog/). Article source
  8. Kaspersky is moving to fix a bug that disabled certificate validation for 400 million users. Discovered by Google's dogged bug-sleuth Tavis Ormandy, the flaw stems from how the company's antivirus inspects encrypted traffic. Since it has to decrypt traffic before inspection, Kaspersky presents its certificates as a trusted authority. If a user opens Google in their browser, for example, the certificate will appear to come from Kaspersky Anti-Virus Personal Root. The problem Ormandy identified is that those internal certificates are laughably weak. "As new leaf certificates and keys are generated, they're inserted using the first 32 bits of MD5(serialNumber||issuer) as the key ... You don't have to be a cryptographer to understand a 32bit key is not enough to prevent brute-forcing a collision in seconds. In fact, producing a collision with any other certificate is trivial," he writes here. Ormandy's bug report gave, by way of demonstration, a collision between Hacker News and manchesterct.gov: "If you use Kaspersky Antivirus in Manchester, Connecticut and were wondering why Hacker News didn't work sometimes, it's because of a critical vulnerability that has effectively disabled SSL certificate validation for all 400 million Kaspersky users." Kaspersky fixed the issue on December 28. Source
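To get a feel for how small the 32-bit keyspace Ormandy describes is, a birthday search over random inputs finds two distinct "serial numbers" colliding on the same truncated MD5 key almost instantly. A stdlib Python sketch; the serial and issuer values below are made up:

```python
import hashlib
import os

def key32(serial: bytes, issuer: bytes) -> bytes:
    # First 32 bits (4 bytes) of MD5(serial || issuer), as in the bug report.
    return hashlib.md5(serial + issuer).digest()[:4]

issuer = b"Example Personal Root"
seen: dict[bytes, bytes] = {}
collision = None
for _ in range(1_000_000):  # a collision is expected after roughly 2**16 tries
    serial = os.urandom(16)
    k = key32(serial, issuer)
    if k in seen and seen[k] != serial:
        collision = (seen[k], serial)
        break
    seen[k] = serial

print(collision is not None)  # True: two distinct inputs, one 32-bit key
```

Two certificates that share the same 32-bit key are exactly the kind of cross-site collision Ormandy demonstrated.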
  9. SHA-1 is a hashing algorithm that has been used extensively since it was published in 1995; however, it is no longer considered secure. It was deemed vulnerable to attacks from well-funded adversaries back in 2005 and was superseded by SHA-2 and SHA-3, which are considerably more secure hashing functions. Many companies including Google, Mozilla, and Microsoft have already announced that they'll stop accepting SHA-1 TLS certificates by 2017. Now, Microsoft has detailed how numerous websites, users, and third-party applications will be affected once the company deprecates SHA-1 signed certificates starting February 4, 2017. Microsoft states that in an effort to further enhance security on Edge and Internet Explorer 11, the two browsers will prevent sites using SHA-1 signed certificates from loading and will display an "invalid certificate" warning. While it isn't recommended, users will have the option to bypass the warning and access the potentially vulnerable website. The company has clarified that this will only impact websites with SHA-1 signed certificates that chain to a Microsoft Trusted Root CA, while manually installed enterprise or self-signed SHA-1 certificates will remain unaffected. The Redmond giant states that developers who have installed the November 2016 Windows updates can test whether their websites will be affected by the change. The detailed procedure can be viewed in the company's blog post here. Microsoft has clarified that third-party Windows applications utilizing the Windows cryptographic API set, and older versions of Internet Explorer, will not be affected by the changes. Similarly, the update will not prevent clients from using SHA-1 certificates in client authentication. Regarding cross-signed certificates, Microsoft has explicitly confirmed that Windows will only check whether the thumbprint of the root certificate is in the Microsoft Trusted Root Certificate Program.
The company has clarified that certificates "cross-signed with a Microsoft Trusted Root that chains to an enterprise/self-signed root" will not be affected by the changes next year. Source: Microsoft Article source
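Administrators can check programmatically which hash algorithm signed a given certificate. A sketch with the third-party Python "cryptography" package, using a freshly generated self-signed certificate as a stand-in for one loaded from a real server; a result of "sha1" here is what Edge and IE 11 will start rejecting:

```python
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .sign(key, hashes.SHA256())  # a SHA-1-signed cert would report "sha1"
)

print(cert.signature_hash_algorithm.name)  # sha256
```

The same `signature_hash_algorithm` check works on a certificate parsed from PEM with `x509.load_pem_x509_certificate`.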
  10. Digital certificates are an essential part of online security, and as the number of Internet of Things devices continues to grow they'll become more important still. But as we rely more on certificates, managing them becomes more complex. Cyber security solutions company Comodo, the world's leading certificate authority, is launching the latest release of its Comodo Certificate Manager (CCM), a full-lifecycle digital certificate management platform which makes it easier for enterprises to manage their certificates. CCM allows businesses to self-administer and instantly provision Comodo certificates and to auto-discover and manage all certificates, from any certificate authority, throughout the organization. The CCM platform can automatically discover all internal and external SSL/TLS certificates and organize them into one central inventory to simplify SSL/public key infrastructure (PKI) tracking and management. It also issues alerts when certificates are about to expire. Organizations can become their own private certificate authorities too, either directly through CCM or through the use of Microsoft Active Directory certificate templates. This provides them with a cost-effective way to enrol certificates for internally-trusted applications, as well as enable enhanced security offerings such as SSL, secure logon, user and machine authentication, web server authentication and smart cards. "Based on our deep understanding of the certificate market, Comodo anticipated the challenges brought on by IoT devices and resource-strapped IT teams", says John Peterson, vice president and general manager, Comodo. "With the new release, our customers can easily manage their digital certificates from a unified dashboard that provides a comprehensive view of their entire certificate inventory, providing quick access to alert settings and certificate controls, no matter where in their environment that certificate is installed".
You can find out more about CCM on the Comodo website. Article source
  11. We have extended the original research and can now use information from public keys (HTTPS, TLS, SSH, SSL) to audit cyber security management and compliance with internal standards. This post is about our application of research I blogged about earlier – Investigating The Origins of RSA Public Keys. You can also visit https://enigmabridge.com/https.html. The main purpose of https – ‘s’ denoting ‘secure’ – is to create a trusted connection for sending sensitive data between your browser and a web service. This is achieved by providing a secure digital ID of the web service (a public key certificate). Until now, it has been widely accepted that such a digital ID didn’t contain any sensitive information that would endanger the security of the web service. No one expected that it could leak internal information about security management – information about methods, tools, and processes was supposed to be completely hidden from users as well as attackers. The worrying discovery, made by Enigma Bridge co-founder Petr Svenda PhD, was awarded best paper at the USENIX Security Symposium. It shows that sensitive information behind “https”, “tls” and other protocols can be extracted with sophisticated analysis using only information that every web service presents to anyone accessing it. Svenda and his team applied novel techniques to analyse millions of https keys and revealed how the keys were generated. “I am puzzled why peeps are not all over this – enormous implications,” tweeted Daniel Bilar, Information Security Specialist at Visa. “It is striking that despite 30 years of cryptographic research, no-one has noticed this problem before. It has been hiding in plain sight all along,” commented Ross Anderson, Professor of Security Engineering, after Svenda’s presentation at the University of Cambridge.
[Image: Analysis of keys from CA certificates]
[Image: Analysis of keys from HTTPS certificates provided by a CDN company]

We have developed the scanning methods further so that, using only publicly available information, we can pinpoint how organisations (including blue-chip companies, government departments and other operators of critical infrastructure) manage their encryption keys, and identify potential weaknesses in their defenses. You can get a quick insight into whether companies think about the quality of their encryption keys or let their administrators use any tool at hand, instead of using secure hardware key generators. Sharing keys between different applications is another sign of insufficient controls or enforcement of cyber security processes.

Whilst this vulnerability doesn't compromise any web site directly, it demonstrates that even public information can leak security details and lead attackers to the most vulnerable targets. Use of validated secure hardware for key generation is the best approach to protect against many attacks. Article source
  12. A thread on Twitter recently highlighted a field-test flag in the Chromium project that attempts to handle HTTPS errors on base domains. Essentially, if you visit https://securedomain.com and the certificate is only valid for https://www.securedomain.com, Chrome will detect this and automatically redirect the user to the www domain without showing an error.

In the example, visiting https://onlineservices.nsdl.com resulted in Chrome redirecting to https://www.onlineservices.nsdl.com because the non-www domain did not have a valid certificate. The redirect only happens when a valid certificate is found on the www subdomain. You can see in this tweet that it is Chrome itself doing the redirect. The behaviour was confirmed by Adrienne Porter Felt, who works on the Chrome usability team.  — @aidantwoods

This could be useful for end-users frustrated with HTTPS errors due to poor server configuration. However, it could leave lax administrators who do a quick test in Chrome with a false sense that a certificate is correctly configured. IE, Edge and Firefox may not implement this feature, which could result in a very different user experience. The flag, SSLCommonNameMismatchHandling, is currently only present in the Chrome Canary pre-release browser.

All certificates purchased from Servertastic with the www prefix on the base domain also secure the base domain at no extra cost. Article source
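As an aside for developers: the fallback behaviour described in the article above boils down to a simple decision rule. A hypothetical Python sketch of that rule (Chromium's actual implementation is C++ and considerably more involved, with checks that the www certificate is fully valid):

```python
from typing import Optional

def www_fallback(requested_host: str, cert_names: set) -> Optional[str]:
    """Given the hostnames a certificate is valid for, return the www
    variant to redirect to when the requested host is not covered but
    www.<host> is; return None to show the normal certificate error
    (or no error at all if the host is covered)."""
    if requested_host in cert_names:
        return None  # no mismatch, no redirect needed
    candidate = 'www.' + requested_host
    if candidate in cert_names:
        return candidate
    return None
```

The administrator's lesson stands either way: test in more than one browser, since only Chrome (Canary, for now) papers over the mismatch.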
  13. After getting to play with Windows 10 for a few hours, something unexpected caught my attention. Hey, there's some new stuff in there - a third, undocumented CTL! Googling for 'PinRulesEncodedCtl' turned up nothing at all. The first few bytes of the binary data (30 82 .. .. 06 09 2a 86 48) looked familiar though: it was probably ASN.1-encoded data, just like the other two well-documented CTLs. That meant I could probably just feed the blob into my existing tools for quick and painless decoding.

Success! We get a nice list of 152 Microsoft-owned domains. The lastsync timestamp (2016-09-24 14:22:44 UTC) also shows that this list is being regularly updated. So this very much looks like evidence of an active system-wide certificate pinning mechanism protecting against MITM attacks on high-value Microsoft domains. Which, per se, is a good thing! Some official documentation would be nice, though.

Edit 1 (2016-09-24): This seems to be - at least partially - related to Telemetry, as briefly mentioned on the only page I could find: https://technet.microsoft.com/en-us/itpro/windows/manage/configure-windows-telemetry-in-your-organization Article source
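For readers who want to try the same sniffing: the quoted byte pattern is a DER tag-length header. 0x30 is the ASN.1 SEQUENCE tag, and a first length byte of 0x82 signals the long form, where the low bits give the number of length octets that follow. A minimal sketch of decoding such a header:

```python
def parse_der_header(data: bytes):
    """Decode the leading tag and length of a DER TLV structure.
    0x30 is the SEQUENCE tag; a length byte >= 0x80 is the long form,
    whose low 7 bits give the number of subsequent length octets."""
    tag = data[0]
    first = data[1]
    if first < 0x80:
        return tag, first  # short-form length
    num_octets = first & 0x7F
    length = int.from_bytes(data[2:2 + num_octets], 'big')
    return tag, length
```

Recognizing the header is enough to decide the blob is worth feeding to a full ASN.1 decoder, which is exactly what the author did.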
  14. If you're using Chrome 53, which was released last week, you might find that some websites which worked under Chrome 52 now fail with "Your connection is not private" and an error code of NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED. For example, choosemyreward.chase.com shows this error as of publication time. The short explanation is that Chase's system administrators made a mistake when they requested their SSL certificate from their certificate authority, Symantec, but as we shall see, Symantec shares responsibility too.

The History of Certificate Transparency

The security of HTTPS relies on organizations called certificate authorities, who issue certificates that help ensure your connections to websites are secure and private. HTTPS is only secure if certificate authorities do their job properly. If a certificate authority messes up and issues an unauthorized certificate, an attacker can use it to intercept HTTPS connections. Unfortunately, it's difficult to ensure that certificate authorities do their job properly, and certificate authorities have repeatedly violated the public's trust by issuing unauthorized certificates, including ones which have been used in real attacks against HTTPS connections.

In response, Google created Certificate Transparency. Under Certificate Transparency, all certificates are submitted to publicly-auditable logs by either the certificate authority or a third-party observer such as the Googlebot. Domain owners can monitor these logs using a service like Cert Spotter and take action if they see an unauthorized certificate for one of their domains. Web browsers will eventually reject certificates that aren't logged using Certificate Transparency. However, Google is proceeding slowly towards mandatory logging so that they and others can gain operational experience first.
The first milestone towards mandatory logging came in January 2015, when Chrome started requiring Certificate Transparency for Extended Validation certificates. The second milestone came last October, when Google caught Symantec, a large certificate authority, issuing unauthorized "test" certificates for google.com and 75 other domains. Since issuing certificates for a domain without its owner's approval is such a serious violation of trust, Google announced that Chrome would require Certificate Transparency for all certificates issued by Symantec on or after June 1, 2016. This change rolled out last week in Chrome 53.

Symantec and Certificate Transparency

Symantec is, for the most part, complying with Google's logging requirement, and by default any certificate they issue will be properly logged and will work in Chrome 53. However, Certificate Transparency has a downside: it requires the complete contents of every certificate, including the hostnames, to be logged to a public log. For a public website, this is no big deal, but some organizations prefer to keep the hostnames of their internal servers private. Even the hostnames of public websites might need to be kept private until a certain date to avoid leaking information such as new product announcements or corporate acquisitions.

To address the privacy concerns, the IETF working group responsible for Certificate Transparency developed a redaction mechanism which would allow certificate authorities to redact components of the hostname beneath the registered domain. For example, a certificate for secretserver.secretdivision.example.com could be logged as ?.secretdivision.example.com or ?.?.example.com, but not ?.?.?.com. Redaction allows domain owners to keep their hostnames private, while still allowing them to detect that a certificate has been issued for some hostname under their domain. Unfortunately for Symantec, there were some obstacles in the way of offering redaction to their privacy-sensitive customers.
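The label-by-label rule described above can be sketched as a tiny function. This is a hypothetical illustration of the constraint only; the draft's actual encoding of redacted labels differs:

```python
def redact(hostname: str, visible_labels: int = 2) -> str:
    """Replace every label left of the last visible_labels with '?'.
    The default keeps the registered domain (e.g. example.com) visible,
    which is the minimum the redaction mechanism permits."""
    labels = hostname.split('.')
    hidden = len(labels) - visible_labels
    if hidden <= 0:
        return hostname
    return '.'.join(['?'] * hidden + labels[-visible_labels:])
```

Keeping the registered domain visible is what lets a domain owner still notice that *some* certificate was issued under their domain, even without learning the full hostname.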
First, redaction is only defined for the next version of Certificate Transparency, which is still a draft and has not been implemented by Chrome or any public log server. Second, the Chrome team has raised several concerns with redaction, and stated that Chrome will not support redaction unless their concerns are addressed. Partly because of Chrome's concerns, the IETF working group removed redaction from the next version of Certificate Transparency and placed it in a separate document which has not yet been officially adopted by the working group. Despite the fact that redaction, practically speaking, does not exist, Symantec forged ahead and grafted redaction onto the original version of Certificate Transparency. The result is a Franken-certificate that works fine in browsers that don't support Certificate Transparency, but fails to validate in Chrome.

Pointless Redaction

Symantec defaults to logging certificates in a compliant, unredacted form, but they provide their customers the option to log certificates in redacted form instead. Customers who choose this option get Franken-certificates that cause the above warning in Chrome 53. Despite the incompatibility with Chrome and the utter pointlessness of redacting the certificates of public websites, both Chase Bank and United Airlines have chosen to redact such certificates. United fixed their websites before Chrome 53 became stable by replacing their certificates with fully-logged ones, but as of publication time, choosemyreward.chase.com is still serving a Franken-certificate that's rejected by Chrome 53. Data collected from Certificate Transparency logs reveal quite a few other websites that are probably public yet use redaction, including websites at Amazon, Fedex, Goldman Sachs, Mitsubishi, and Siemens. Why would someone choose redacted logging for a public website? Symantec's documentation might be to blame.
Their documentation describes the two options as follows:

Full domain names: Publicly logs root domain names and subdomains in the certificate. Recommended for all public websites.
Only root domain names: Publicly logs only root domain names in the certificate. Intended only for private internal domains.

Although they say that logging root domain names is "intended only for private internal domains" while recommending full domain name logging for public websites, they don't mention the downside until later in the document: "All certificates with root domain logging may display browser warnings when users connect to the website." Saying that a warning "may" be displayed doesn't seem adequate when a warning absolutely will be displayed, by the world's most popular web browser to boot! Symantec needs to do a better job informing their customers of the downsides of choosing redaction. Too many websites have chosen redaction incorrectly, and I expect this to continue unless Symantec improves their messaging. Meanwhile, Chrome users will encounter avoidable browser errors when visiting these websites, which is a horrible experience for Symantec's customers' customers, and risks desensitizing people to security warnings.

Finally, if you represent an organization that wants to use redaction appropriately (that is, to hide the hostnames of a non-public server), please send an email to the IETF working group mailing list. The working group has had a difficult time designing redaction, and addressing Chrome's concerns will require hearing from the people who want to use redaction. The fate of redaction depends on your input!

If you're worried about certificate authorities like Symantec issuing unauthorized "test" certificates for your domains, you should check out Cert Spotter, a tool to monitor Certificate Transparency logs for unauthorized certificates. Available as open source or a hosted service. Article source
  15. New trick allows Dridex to bypass antivirus detection

[Image: Geographical distribution of recent Dridex infections]

Dridex, the most infamous banking trojan of them all, received a major upgrade in the month of May, which security researchers say would allow it to bypass security software with greater ease. For the past few years, Dridex has been one of the most active cyber-crime infrastructures on the planet, with the group behind the operation building several botnets through which they deliver malware, exfiltrate funds, hide illegal transactions, and spam users with both the Dridex malware and the Locky ransomware.

Dormant Dridex makes a comeback

In the past, several security firms reported seeing a downscaling of Dridex activity and an increased focus on Locky spam. Most recently, multiple security firms have noticed one of the biggest spam floods in years delivering the Locky ransomware. But this period of calm seems to have ended, judging by Trend Micro's latest report, which claims that on May 25 Dridex started making a comeback, with new waves of spam email distributing the infamous banking trojan in massive numbers once again. The security firm also says that Dridex itself has changed and is using a new trick to infect computers.

Dridex poses as a certificate to evade antivirus detection

In the past, the trojan relied on malicious Microsoft Office files asking users to enable macro support. Once this happened, the malicious script would download Dridex and install it on the victim's PC. The most recent version of Dridex features a change of M.O.: instead of downloading the Dridex malware directly, the macro scripts download a PFX (Personal Information Exchange) file, a format usually used by software certificates for storing public and private encryption keys. "Perhaps, you are wondering why these cybercriminals added another layer in infecting systems," the Trend Micro team writes.
"Since the file dropped is initially in .PFX format, it enables DRIDEX to bypass detection." Antivirus and other security solutions usually recognize these types of files as friendly and mark them as such, ignoring them in future scans.

Dridex now abuses the built-in Windows Certutil utility

After the PFX file reaches the infected host, the same macro script that downloaded it starts Certutil, a command-line utility built into Windows specifically for handling certificates as part of Certificate Services, starting with Windows 8 and Windows Server 2012. Certutil takes the PFX file and converts it into the Dridex EXE file, which can then infect the system. Since the antivirus has already marked the file as friendly, it won't keep an eye on it anymore, allowing Dridex to fly under the radar. The best defense against this change in Dridex's mode of operation is, once again, to remind employees and friends not to open files from unknown senders. Article source
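A note for defenders: the same file-format facts the attackers rely on can be turned into a cheap triage check. A genuine PKCS #12 (.pfx) file is DER-encoded and starts with the ASN.1 SEQUENCE byte 0x30, while a Windows executable starts with the 'MZ' marker. The following is a hypothetical sketch of that idea, not a substitute for real malware scanning:

```python
def pfx_looks_suspicious(data: bytes) -> bool:
    """Return True if a file claiming to be a .pfx does not begin with a
    DER SEQUENCE byte (0x30) - for example, if it is actually a PE
    executable ('MZ') or an encoded blob waiting for certutil to decode."""
    return not data.startswith(b'\x30')
```

A mail gateway or endpoint rule could apply such a check to attachments and downloads whose extension claims to be a certificate format.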
  16. For anyone who worries about keeping a website up to date, now may be the time to do a little housekeeping. In a blog post, Microsoft has revealed its timeline for phasing out SHA-1 certificates, and websites still using SHA-1 won't be treated as secure for much longer.

Starting with the arrival of the Windows 10 Anniversary Update, websites with SHA-1 certificates will no longer show the padlock symbol in the address bar that signals a secure connection. While these websites will continue to function as they always have, users will no longer have that assurance of safety while browsing them and may be deterred from further use.

When February 2017 rolls around, outdated certificates will be much more of a problem. After this deadline hits, websites that are still using SHA-1 will be blocked outright by Microsoft Edge and Internet Explorer, letting users know that the website isn't secure and giving them a clear warning not to continue. Article source
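As a rough self-check, administrators can look for the DER-encoded signature-algorithm OID in a certificate's raw bytes. The sketch below is a crude illustration, not a proper parser: substring matching can false-positive, and a real audit should decode the certificate with a full X.509 library.

```python
# DER encodings of two common RSA signature-algorithm OIDs:
# 1.2.840.113549.1.1.5 (sha1WithRSAEncryption) and
# 1.2.840.113549.1.1.11 (sha256WithRSAEncryption)
SIG_OIDS = {
    bytes.fromhex('06092a864886f70d010105'): 'sha1WithRSAEncryption',
    bytes.fromhex('06092a864886f70d01010b'): 'sha256WithRSAEncryption',
}

def signature_algorithms(cert_der: bytes) -> set:
    """Return the names of any known signature-algorithm OIDs whose DER
    encoding appears somewhere in the raw certificate bytes."""
    return {name for oid, name in SIG_OIDS.items() if oid in cert_der}
```

If such a scan of your certificate chain turns up sha1WithRSAEncryption, it's time to reissue before the February 2017 deadline.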
  17. The Electronic Frontier Foundation (EFF) has announced the release of its millionth free HTTPS certificate as part of the organization's Let's Encrypt certificate authority initiative. Last year EFF, which co-founded the Let's Encrypt CA with Mozilla and researchers from the University of Michigan, made public its aim of building a more secure future for the World Wide Web. This began with issuing and managing free certificates for any website that needs them, aiding the transition from HTTP to the more secure HTTPS protocol.

Now, just three months on from the first beta version of the service becoming available, the organization has reached this significant landmark, showing it is living up to its promise of helping to make websites more secure with better encryption. What's more, because a single certificate can cover more than one domain, the million certs Let's Encrypt CA has issued are actually valid for 2.5 million fully-qualified domain names.

In a post on its website EFF said: "It is clear that the cost and bureaucracy of obtaining certificates was forcing many websites to continue with the insecure HTTP protocol, long after we've known that HTTPS needs to be the default. We're very proud to be seeing that change, and helping to create a future in which newly provisioned websites are automatically secure and encrypted."

In a statement to Infosecurity, Brian Honan, Owner and CEO of BH Consulting, praised Let's Encrypt CA and said the initiative is helping to create a safer internet. "This is a great milestone for Let's Encrypt CA to reach, particularly as it has only relatively recently been available for general use. Hopefully the take-up will continue and more and more companies and web sites will take this opportunity to protect their visitors' privacy and improve their online security." Article source