Showing results for tags 'https'.

Found 17 results

  1. Microsoft will integrate DNS over HTTPS in Windows 10

Microsoft revealed plans in November 2019 to integrate native support for DNS over HTTPS in the company's Windows 10 operating system. The announcement was made on Microsoft's Networking blog on November 17, 2019.

DNS over HTTPS is designed to improve the privacy, security, and reliability of DNS by encrypting queries that are currently sent in plaintext. DNS over HTTPS has been on the rise lately: Mozilla, Google, and Opera, as well as several public DNS providers, announced support for the standard. Support in a program, e.g. a web browser, means that the DNS queries that originate from that program are encrypted. Other queries, e.g. from another browser that does not support DNS over HTTPS or is configured not to use it, won't benefit from that integration, however.

Microsoft's announcement brings DNS over HTTPS support to the Windows operating system. The company plans to introduce it to preview builds of Windows 10 before releasing it in a final version of the operating system.

Microsoft plans to follow Google's implementation, at least initially. Google revealed some time ago that it will roll out DNS over HTTPS in Chrome, but only on systems that use a DNS service that supports it. In other words: Google won't alter the DNS provider of the system. Mozilla and Opera decided to pick a provider, at least initially, which means that the local DNS provider may be overridden in the browser.

Microsoft notes that it won't be making changes to the DNS server configuration of the Windows machine. Administrators (and users) remain in control of the selection of the DNS provider on Windows, and the introduction of support for DNS over HTTPS won't change that. The change may benefit users without them knowing about it.
If a system is configured to use a DNS provider that supports DNS over HTTPS, that system will automatically use the new standard so that DNS data is encrypted. The company plans to introduce "more privacy-friendly ways" for its customers to discover DNS settings in Windows and raise awareness for DNS over HTTPS in the operating system.

Microsoft revealed four guiding principles for the implementation:

  • Windows DNS needs to be as private and functional as possible by default without the need for user or admin configuration because Windows DNS traffic represents a snapshot of the user's browsing history.
  • Privacy-minded Windows users and administrators need to be guided to DNS settings even if they don't know what DNS is yet.
  • Windows users and administrators need to be able to improve their DNS configuration with as few simple actions as possible.
  • Windows users and administrators need to explicitly allow fallback from encrypted DNS once configured.

Closing words

Microsoft did not reveal a schedule for the integration, but it is clear that it will land in a future Insider build of Windows 10 first. Integration in Windows -- and other client operating systems -- makes more sense than integrating the functionality into individual programs. Users who want to use DNS over HTTPS may simply pick a DNS provider that supports it to enable the feature for all applications that run on the system.

Source: Microsoft will integrate DNS over HTTPS in Windows 10 (gHacks - Martin Brinkmann)
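The "encrypted DNS queries" idea described above can be sketched with the JSON flavour of DoH that Cloudflare's public resolver exposes. This is only an illustrative sketch, not Windows' implementation: the endpoint is Cloudflare's documented dns-query URL, and the helper function name is made up.

```python
from urllib.parse import urlencode

# Cloudflare's public DoH endpoint (documented JSON API).
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def build_doh_query(name: str, rtype: str = "A") -> str:
    """Build a DoH GET URL for the JSON wire format.

    Sending this URL with the header 'accept: application/dns-json'
    returns the DNS answer as JSON over an encrypted HTTPS connection,
    so on-path observers see only a TLS connection to the resolver,
    not the names being looked up.
    """
    return f"{DOH_ENDPOINT}?{urlencode({'name': name, 'type': rtype})}"

print(build_doh_query("example.com"))
# https://cloudflare-dns.com/dns-query?name=example.com&type=A
```

The same query sent over classic port-53 UDP would carry "example.com" in cleartext, which is exactly what the standard is meant to avoid.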
  2. Google plans to test DNS over HTTPS in Chrome 78

Google revealed plans to test the company's implementation of DNS over HTTPS (DoH) in Chrome 78. DNS over HTTPS aims to improve the security and privacy of DNS requests by utilizing HTTPS. The current stable version of Chrome is 77, released on September 10, 2019.

Google notes that DoH prevents other WiFi users from seeing visited websites; common attacks such as spoofing or pharming could potentially be prevented by using DoH.

Google decided to test the DoH implementation in a different way than Mozilla. Mozilla selected Cloudflare as its partner in the testing phase and will use Cloudflare as the default provider when it rolls out the feature to US users in late September 2019. Firefox users have options to change the DNS over HTTPS provider or turn off the feature entirely in the browser.

Google's DNS over HTTPS plan

Google picked a different route for the test: the company decided to test the implementation using multiple DoH providers. It could have used its own DoH service for the tests but selected multiple providers instead. The tests will upgrade Chrome installations to use DoH if the DNS service that is used on the system supports DoH. Google thereby sidesteps the criticism in regards to privacy that Mozilla faced when it announced the partnership with Cloudflare.

Google selected the cooperating providers for "their strong stance on security and privacy", the "readiness of their DoH services", and their agreement to participate in the test. The following providers were picked by the company:

  • Cleanbrowsing
  • Cloudflare
  • DNS.SB
  • Google
  • OpenDNS
  • Quad9

If Chrome runs on a system that uses one of these services for DNS, it will start using DoH instead when Chrome 78 launches. The experiment will run on all platforms for a fraction of Chrome users, with the exception of Chrome on Linux and iOS. Chrome will revert to the regular DNS service in the case of errors.
Most managed Chrome deployments will be excluded from the experiment, and Google plans to provide details on DoH policies on the company's Chrome Enterprise blog before release to give administrators information on configuring those.

Chrome users may use the flag chrome://flags/#dns-over-https to opt in to or out of the experiment. The flag is not integrated in any version of the Chrome browser yet.

Secure DNS lookups
Enables DNS over HTTPS. When this feature is enabled, your browser may try to use a secure HTTPS connection to look up the addresses of websites and other web resources. – Mac, Windows, Chrome OS, Android

Closing Words

Most Chromium-based browsers and Firefox will start to use DNS over HTTPS in the near future. Firefox provides options to disable the feature, and Chrome comes with an experimental flag that offers the same. Experimental flags may be removed at some point in the future, however, and it is unclear at this point whether Google plans to add a switch to Chrome's preferences to enable or disable the feature.

Source: Google plans to test DNS over HTTPS in Chrome 78 (gHacks - Martin Brinkmann)
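Chrome's auto-upgrade approach described above, which keeps the user's chosen provider and only switches the transport, can be sketched roughly as a lookup table. The table below is illustrative only and is not Chrome's actual provider list, though the three DoH endpoints shown are the providers' published ones.

```python
from typing import Optional

# Hypothetical sketch of the auto-upgrade idea: if the system resolver
# is a known provider, use that provider's DoH endpoint instead.
# Illustrative mapping only -- not Chrome's real table.
DOH_UPGRADE_MAP = {
    "8.8.8.8": "https://dns.google/dns-query",          # Google
    "1.1.1.1": "https://cloudflare-dns.com/dns-query",  # Cloudflare
    "9.9.9.9": "https://dns.quad9.net/dns-query",       # Quad9
}

def upgrade_resolver(system_dns_ip: str) -> Optional[str]:
    """Return the matching DoH endpoint, or None to keep classic DNS."""
    return DOH_UPGRADE_MAP.get(system_dns_ip)

print(upgrade_resolver("1.1.1.1"))      # Cloudflare's DoH endpoint
print(upgrade_resolver("192.168.0.1"))  # None -> DNS provider unchanged
```

The key design point is the None branch: an unknown resolver (e.g. the router of a home network) is never silently replaced, which is what distinguishes this approach from Firefox's default-provider model.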
  3. Mozilla plans to roll out DNS over HTTPS to US users in late September 2019

Starting in late September 2019, DNS over HTTPS (DoH) is going to be rolled out to Firefox users in the United States. DNS over HTTPS encrypts DNS requests to improve their security and privacy. Most DNS requests currently happen in the open; anyone listening to the traffic gets records of the sites and IP addresses that were looked up while using an Internet connection, among other things. DoH encrypts the traffic, and while that looks good at first glance, it needs to be noted that the TLS handshake still gives away the destination hostname in plaintext.

One example: Internet providers may block certain DNS requests, e.g. when they have received a court order to block certain resources on the Internet. It is not the best method to prevent people from accessing a site on the Internet, but it is used nevertheless. DoH is excellent against censorship that relies on DNS manipulation.

Tip: check out our detailed guide on configuring DNS over HTTPS in Firefox.

Mozilla started to look into the implementation of DoH in Firefox in 2018. The organization ran a controversial Shield study that year to gather data that it needed for the planned implementation of the feature. The study was controversial because Mozilla used the third party Cloudflare as the DNS over HTTPS service, which meant that all user traffic flowed through the Cloudflare network.

Mozilla revealed in April 2019 that its plan to enable DoH in Firefox had not changed. The organization created a list of policies that DoH providers had to conform to if they wanted their service to be integrated in Firefox. In "What's next in making encrypted DNS-over-HTTPS the Default", Mozilla confirmed that it would begin to enable DoH in Firefox starting in late September 2019.
The feature will be enabled for some users from the United States, and Mozilla plans to monitor the implementation before DoH is rolled out to a larger part of the user base and eventually all users from the United States.

We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we’re ready for 100% deployment.

While DNS over HTTPS will be the default for the majority of Firefox installations in the United States, it won't be enabled for some configurations:

  • If parental controls are used, DoH won't be enabled, provided that Mozilla detects the use correctly.
  • Enterprise configurations are respected as well, and DoH is disabled unless "explicitly enabled by enterprise configuration".
  • Firefox falls back to regular DNS if DNS issues or split-horizon configurations cause lookup failures.

Network administrators may configure their networks in the following way to signal to Firefox that the network is unsuitable for DoH usage: DNS queries for the A and AAAA records for the domain “use-application-dns.net” must respond with NXDOMAIN rather than the IP address retrieved from the authoritative nameserver.

How to block DNS over HTTPS

You have two options when it comes to DoH in Firefox. You can change the default provider -- Cloudflare is the default -- to another provider (for whatever reason), or block the entire feature so that it won't be used. If you don't want to use it, set the value of network.trr.mode to 5 on about:config.

Source: Mozilla plans to roll out DNS over HTTPS to US users in late September 2019 (gHacks - Martin Brinkmann)
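The provider override and the kill switch mentioned above live in Firefox's TRR (Trusted Recursive Resolver) preferences. A minimal user.js sketch, assuming the standard pref names network.trr.mode and network.trr.uri (the Quad9 URI is just one example of an alternative provider):

```javascript
// user.js -- Firefox DoH preferences (place in the profile directory).
// network.trr.mode values: 0 = default (off), 2 = DoH first with
// fallback to native DNS, 3 = DoH only, 5 = explicitly disabled.
user_pref("network.trr.mode", 2);

// Override the default (Cloudflare) resolver with another DoH provider:
user_pref("network.trr.uri", "https://dns.quad9.net/dns-query");
```

Setting mode 5 instead of 2 is the "block the entire feature" option from the article; it tells Firefox the choice was deliberate, so later rollouts won't re-enable DoH.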
  4. Mozilla revamps Firefox's HTTPS address bar information

Mozilla plans to make changes to the information that the organization's Firefox browser displays in its address bar when it connects to sites. Firefox currently displays an i-icon and a lock symbol when connecting to sites. The i-icon displays information about the security of the connection, content blocking, and permissions; the lock icon indicates the security state of the connection visually. A green lock indicates a secure connection, and if a site has an Extended Validation certificate, the name of the company is displayed in the address bar as well.

Mozilla plans changes to the address bar that all Firefox users need to be aware of. One of the core changes removes the i-icon from the Firefox address bar, another the Extended Validation certificate name; a third displays a crossed-out lock icon for all HTTP sites, and a fourth changes the colour of the lock for HTTPS sites from green to gray.

Why are browser makers making these changes? Most Internet traffic happens over HTTPS; the latest Firefox statistics show that more than 79% of global pageloads happen over HTTPS, and that the figure is already at more than 87% for users in the United States. The lock icon was introduced to indicate to users that the connection to a site uses HTTPS and to give users options to look up certificate information. It made sense to indicate that to users back when only a fraction of sites used HTTPS. With more and more connections using HTTPS, browser makers like Mozilla or Google decided that it was time to re-evaluate what is displayed to users in the address bar.

Google revealed plans in 2018 to remove the Secure and HTTPS indicators from the Chrome browser; Chrome 76, released in August 2019, no longer displays HTTPS or WWW in the address bar by default.
Mozilla launched changes in Firefox in 2018, hidden behind a flag, to add a new "not secure" indicator to HTTP sites in Firefox. Google and Mozilla plan to remove information that indicates that a site's connection is secure. It makes some sense, if you think about it, considering that most connections are secure on today's Internet. Instead of highlighting that a connection is secure, browsers will highlight if a connection is not secure instead.

The changes are not without controversy, though. For more than two decades, Internet users were told that they needed to verify the security of sites by looking at the lock symbol in the browser's address bar. Mozilla does not remove the lock icon entirely in Firefox 70, and the organization won't touch the protocol in the address bar either at this point; that is better than what Google has already implemented in recent versions of Chrome.

The following changes will land in Firefox 70:

  • Firefox won't display the i-icon anymore in the address bar.
  • Firefox won't display the owner of Extended Validation certificates anymore in the address bar.
  • A shield icon is displayed that lists protection information.
  • The lock icon is still displayed; it shows certificate and permission information and controls.
  • HTTPS sites feature a gray lock icon.
  • All sites that use HTTP will be shown with a crossed-out lock icon (previously only HTTP sites with login forms).

Mozilla aims to launch these changes in Firefox 70. The browser is scheduled for release on October 23, 2019.

Firefox users may add a "not secure" indicator to the browser's address bar. Mozilla, just like Google, plans to display it for sites that use HTTP. The additional indicator needs to be enabled separately at the time of writing; it won't launch in Firefox 70.

Related behaviour can be adjusted in the advanced configuration. Load about:config in the Firefox address bar and search for security.identityblock.show_extended_validation.
Set the preference to TRUE to display the name of the owner of Extended Validation certificates in Firefox's address bar, or set it to FALSE to hide it.

The new gray icon for HTTPS sites can be toggled as well in the advanced configuration:

  • On about:config, search for security.secure_connection_icon_color_gray.
  • Set the value to TRUE to display a gray icon for HTTPS sites, or set it to FALSE to return to the status quo.

Source: Mozilla revamps Firefox's HTTPS address bar information (gHacks - Martin Brinkmann)
  5. Cloudflare aims to make HTTPS certificates safe from BGP hijacking attacks

Free service prevents BGP hijackers from fraudulently obtaining browser-trusted certs.

Content delivery network Cloudflare is introducing a free service designed to make it harder for browser-trusted HTTPS certificates to fall into the hands of bad guys who exploit Internet weaknesses at the time the certificates are issued.

The attacks were described in a paper published last year titled Bamboozling Certificate Authorities with BGP. In it, researchers from Princeton University warned that attackers could manipulate the Internet’s border gateway protocol to obtain certificates for domains the attackers had no control over. Browser-trusted certificate authorities are required to use a process known as domain control validation to verify that a person requesting a certificate for a given domain is the legitimate owner. It requires the requesting party to do one of three things:

  • create a domain name system resource record with a specific text string;
  • upload a document with a specific text string to a Web server using the domain; or
  • prove receipt of an email containing a text string sent to the administrative contact for the domain.

The Princeton researchers demonstrated that this validation process can be bypassed by BGP attacks. Before applying for a certificate for a targeted domain, an adversary can update the Internet’s BGP routing tables to hijack traffic destined for the domain. Then, when a CA checks the DNS record or visits a URL, the CA's query goes to an attacker-controlled server rather than the legitimate server of the domain operator. When the attacker is able to produce the text string designated by the CA, that is considered proof of domain ownership, and the CA issues a certificate to the wrong party.

Reining it in

But these attacks come with limitations.
BGP attacks usually hijack only a portion of a domain’s incoming traffic, rather than all of it. As a result, computers in one part of the world will be directed to the attacker’s imposter server, while computers elsewhere will still reach the legitimate server. Cloudflare, with more than 175 datacenters worldwide, is unveiling a new service called multipath domain control validation that’s designed to exploit this limitation of BGP hijacking. As its name suggests, it performs the validation process from multiple origins that follow different Internet paths to the domain. Unless the results from the multiple queries are identical, the validation will fail.

“We’re going to be leveraging Cloudflare’s global network to perform this domain check, whether it’s DNS or HTTP, from various vantage points that are connected through various networks,” Nick Sullivan, head of cryptography at Cloudflare, told Ars. “If you’re hijacked, [the fraudulent data] only applies to a subset of the requests.”

Agents and orchestrators

Cloudflare will be making a programming interface available for free to all certificate authorities. The multipath check for domain control validation consists of two services: agents that perform domain validation out of a specific datacenter, and a domain validation “orchestrator” that handles multipath requests from CAs and dispatches them to a subset of agents.

When a CA wants to ensure a domain validation hasn’t been intercepted, it can send a request to the Cloudflare API that specifies the type of check it wants. The orchestrator then forwards the request to more than 20 randomly selected agents in different datacenters. Each agent performs the domain validation request and forwards the result to the orchestrator, which aggregates what each agent observed and returns the results to the CA.
Sullivan said Cloudflare has designed the new service to be an effective measure against another potential domain validation attack that spoofs IP addresses in DNS requests that use the user datagram protocol (UDP). Because the IP address of the computer making the request can be spoofed, an attacker can make a request to a targeted domain appear to come from a CA. Then, by manipulating a maximum fragment size setting, the attacker can receive a second identical response. The new Cloudflare API prevents these DNS spoofing attacks because it sends queries from multiple locations that can’t be predicted by the attacker, Sullivan said. In a message, he wrote:

Multipath DCV was designed for and is primarily effective against on-path attacks. An additional feature that we built into the service that helps protect against off-path attackers is DNS query source IP randomization. By making the source IP unpredictable to the attacker, it becomes more challenging to spoof the second fragment of the forged DNS response to the DCV validation agent.

Sullivan said Cloudflare is offering the service for free because the company believes that attacks on the certificate authority system harm the security of the entire Internet. He said he expects the use of multipath domain validation to become standard practice, particularly if it’s offered by other large networks. Eventually, he said, it may be mandated by the CA/Browser Forum, which sets industry guidelines for the issuance of TLS certificates.

“I’m a little surprised this hasn’t happened yet,” Sullivan said. “We’re hoping that this announcement and this product helps spur the CA/Browser forum to adopt and require this more robust multiperspective validation for certificate authorities. It truly is a risk that hasn’t been exploited yet, and it’s just a matter of time.”

Source: Cloudflare aims to make HTTPS certificates safe from BGP hijacking attacks (Ars Technica)
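The orchestrator-and-agents flow described above can be modelled in a few lines. This is an assumed toy model, not Cloudflare's actual API: each agent reports the challenge token it observed from its vantage point, and validation fails unless every sampled agent saw the same value.

```python
import random

def multipath_validate(agents, expected_token, sample_size=3):
    """agents: dict mapping vantage point -> token observed via that path.

    Mirrors the idea of the orchestrator: sample several agents in
    different datacenters and require unanimous agreement.
    """
    picked = random.sample(list(agents), k=min(sample_size, len(agents)))
    observations = {agents[vp] for vp in picked}
    # A BGP hijack usually diverts only some paths, so a partial hijack
    # yields disagreeing observations and validation fails closed.
    return observations == {expected_token}

honest   = {"fra": "tok-abc",  "sfo": "tok-abc", "sin": "tok-abc"}
hijacked = {"fra": "tok-EVIL", "sfo": "tok-abc", "sin": "tok-abc"}

print(multipath_validate(honest, "tok-abc"))    # True: all paths agree
print(multipath_validate(hijacked, "tok-abc"))  # False: paths disagree
```

The fail-closed comparison is the whole point: a hijacker would have to divert every path between more than 20 randomly chosen datacenters and the domain, which is far harder than hijacking one route.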
  6. Maybe you were once advised to “look for the padlock” as a means of telling legitimate e-commerce sites from phishing or malware traps. Unfortunately, this advice has never been more useless. New research indicates that half of all phishing scams are now hosted on Web sites whose Internet address includes the padlock and begins with “https://”.

A live Paypal phishing site that uses https:// (has the green padlock).

Recent data from anti-phishing company PhishLabs shows that 49 percent of all phishing sites in the third quarter of 2018 bore the padlock security icon next to the phishing site domain name as displayed in a browser address bar. That’s up from 25 percent just one year ago, and from 35 percent in the second quarter of 2018.

This alarming shift is notable because a majority of Internet users have taken the age-old “look for the lock” advice to heart, and still associate the lock icon with legitimate sites. A PhishLabs survey conducted last year found more than 80% of respondents believed the green lock indicated a website was either legitimate and/or safe.

In reality, the https:// part of the address (also called “Secure Sockets Layer” or SSL) merely signifies that the data being transmitted back and forth between your browser and the site is encrypted and can’t be read by third parties. The presence of the padlock does not mean the site is legitimate, nor is it any proof the site has been security-hardened against intrusion from hackers.

A live Facebook phish that uses SSL (has the green padlock).

Most of the battle to combat cybercrime involves defenders responding to offensive moves made by attackers. But the rapidly increasing adoption of SSL by phishers is a good example in which fraudsters are taking their cue from legitimate sites.
“PhishLabs believes that this can be attributed to both the continued use of SSL certificates by phishers who register their own domain names and create certificates for them, as well as a general increase in SSL due to the Google Chrome browser now displaying ‘Not secure’ for web sites that do not use SSL,” said John LaCour, chief technology officer for the company. “The bottom line is that the presence or lack of SSL doesn’t tell you anything about a site’s legitimacy.”

The major Web browser makers work with a number of security organizations to index and block new phishing sites, often serving bright red warning pages that flag the page of a phishing scam and seek to discourage people from visiting the sites. But not all phishing scams get flagged so quickly.

I spent a few minutes browsing phishtank.com for phishing sites that use SSL, and found this cleverly crafted page that attempts to phish credentials from users of Bibox, a cryptocurrency exchange. Click the image below and see if you can spot what’s going on with this Web address:

This live phish targets users of cryptocurrency exchange Bibox.

Look carefully at the URL in the address bar, and you’ll notice a squiggly mark over the “i” in Bibox. This is an internationalized domain name, and the real address is https://www.xn--bbox-vw5a[.]com/login

Load the live phishing page at https://www.xn--bbox-vw5a[.]com/login (that link has been hobbled on purpose) in Google Chrome and you’ll get a red “Deceptive Site Ahead” warning. Load the address above — known as “punycode” — in Mozilla Firefox and the page renders just fine, at least as of this writing.

This phishing site takes advantage of internationalized domain names (IDNs) to introduce visual confusion. In this case, the “i” in Bibox.com is rendered as the Vietnamese character “ỉ,” which is extremely difficult to distinguish in a URL address bar.
As KrebsOnSecurity noted in March, while Chrome, Safari and recent versions of Microsoft’s Internet Explorer and Edge browsers all render IDNs in their clunky punycode state, Firefox will happily convert the code to the look-alike domain as displayed in the address bar.

If you’re a Firefox (or Tor) user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”

Source
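The homograph trick above is easy to reproduce with Python's standard punycode and IDNA codecs; the encoded form below matches the xn--bbox-vw5a address shown in the article.

```python
# The look-alike label from the article: Vietnamese "ỉ" (U+1EC9) in
# place of "i". Python's built-in codecs reproduce the xn-- form that
# Chrome shows and that Firefox (by default) hides behind the glyphs.
label = "bỉbox"

# Raw punycode for the single label:
print(label.encode("punycode"))       # b'bbox-vw5a'

# Full IDNA ToASCII form, as seen in a punycode-rendering address bar:
print(label.encode("idna").decode())  # xn--bbox-vw5a
```

Running the conversion in reverse (`b"xn--bbox-vw5a".decode("idna")`) recovers the spoofed glyphs, which is effectively what Firefox does before displaying the URL.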
  7. Bon anniversaire, Let’s Encrypt!

The free-to-use nonprofit certificate authority was founded in 2014 in part by the Electronic Frontier Foundation and is backed by Akamai, Google, Facebook, Mozilla and more. Three years ago Friday, it issued its first certificate. Since then, the numbers have exploded: to date, more than 380 million certificates have been issued on 129 million unique domains. That also makes it the largest certificate issuer in the world, by far.

Now, 75 percent of all Firefox traffic is HTTPS, according to public Firefox data — in part thanks to Let’s Encrypt. That’s a massive increase from when it was founded, when only 38 percent of website page loads were served over an HTTPS encrypted connection. “Change at that speed and scale is incredible,” a spokesperson told TechCrunch. “Let’s Encrypt isn’t solely responsible for this change, but we certainly catalyzed it.”

HTTPS is what keeps the pipes of the web secure. Every time your browser lights up in green or flashes a padlock, it’s a TLS certificate encrypting the connection between your computer and the website, ensuring nobody can intercept and steal your data or modify the website. But for years, the certificate market was broken, expensive and difficult to navigate. In an effort to “encrypt the web,” the EFF and others banded together to bring free TLS certificates to the masses. That means bloggers, single-page websites and startups alike can get an easy-to-install certificate for free — even news sites like TechCrunch rely on Let’s Encrypt for a secure connection.

Security experts and encryption advocates Scott Helme and Troy Hunt last month found that more than half of the top million websites by traffic are on HTTPS. And as it’s grown, the certificate issuer has become trusted by the major players — including Apple, Google, Microsoft, Oracle and more. A fully encrypted web is still a ways off. But with close to a million Let’s Encrypt certificates issued each day, it looks more within reach than ever.
Source
  8. What is this?#

I've been writing about Google's efforts to deprecate HTTP, the protocol of the web. This is a summary of why I am opposed to it. DW

Their pitch#

Advocates of deprecating HTTP make three main points:

  • Something bad could happen to my pages in transit from a server to the user's web browser.
  • It's not hard to convert to HTTPS and it doesn't cost a lot.
  • Google is going to warn people about my site being "not secure." So if I don't want people to be scared away, I should just do what they want me to do.

Why this is bad#

The web is an open platform, not a corporate platform. It is defined by its stability. 25-plus years and it's still going strong. Google is a guest on the web, as we all are. Guests don't make the rules.

Why this is bad, practically speaking#

A lot of the web consists of archives. Files put in places that no one maintains. They just work. There's no one there to do the work that Google wants all sites to do. And some people have large numbers of domains and sub-domains hosted on all kinds of software Google never thought about. Places where the work required to convert wouldn't be justified by the possible benefit. The reason there's so much diversity is that the web is an open thing, it was never owned.

The web is a miracle#

Google has spent a lot of effort to convince you that HTTP is not good. Let me have the floor for a moment to tell you why HTTP is the best thing ever. Its simplicity is what made the web work. It created an explosion of new applications. It may be hard to believe that there was a time when Amazon, Netflix, Facebook, Gmail, Twitter etc didn't exist. That was because the networking standards prior to the web were complicated and not well documented. The explosion happened because the web is simple. Where earlier protocols were hard to build on, the web is easy. I don't think the explosion is over. I want to make it easier and easier for people to run their own web servers.
Google is doing what the programming priesthood always does, building the barrier to entry higher, making things more complicated, giving themselves an exclusive. This means only super nerds will be able to put up sites. And we will lose a lot of sites that were quickly posted on a whim, over the 25 years the web has existed, by people that didn't fully understand what they were doing. That's also the glory of the web. Fumbling around in the dark actually gets you somewhere. In worlds created by corporate programmers, it's often impossible to find your way around, by design.

The web is a social agreement not to break things. It's served us for 25 years. I don't want to give it up because a bunch of nerds at Google think they know best. The web is like the Grand Canyon. It's a big natural thing, a resource, an inspiration, and like the canyon it deserves our protection. It's a place of experimentation and learning. It's also useful for big corporate websites like Google. All views of the web are important, especially ones that big companies don't understand or respect. It's how progress happens in technology. Keeping the web running simple is as important as net neutrality.

They believe they have the power#

Google makes a popular browser and is a tech industry leader. They can, they believe, encircle the web, and at first warn users as they access HTTP content. Very likely they will do more, requiring the user to consent to open a page, and then to block the pages outright.

It's dishonest#

Many of the sites they will label as "not secure" don't ask the user for any information. Of course users won't understand that. Many will take the warning seriously and hit the Back button, having no idea why they're doing it. Of course Google knows this. It's the kind of nasty political tactic we expect from corrupt political leaders, not leading tech companies.
Sleight of hand#

They tell us to worry about man-in-the-middle attacks that might modify content, but fail to mention that they can do it in the browser, even if you use a "secure" protocol. They are the one entity you must trust above all. No way around it.

They cite the wrong stats#

When they say some percentage of web traffic is HTTPS, that doesn't measure the scope of the problem. A lot of HTTP-served sites get very few hits, yet still have valuable ideas and must be preserved.

It will destroy the web's history#

If Google succeeds, it will make a lot of the web's history inaccessible. People put stuff on the web precisely so it would be preserved over time. That's why it's important that no one has the power to change what the web is. It's like a massive book burning, at a much bigger scale than ever done before.

If HTTPS is such a great idea...#

Why force people to do it? This suggests that the main benefit is for Google, not for people who own the content. If it were such a pressing problem we'd do it because we want to, not because we're being forced to.

The web isn't safe#

Finally, what is the value in being safe? Twitter and Facebook are like AOL in the old pre-web days. They are run by companies who are committed to provide a safe experience. They make tradeoffs for that. Limited linking. No styles. Character limits. Blocking, muting, reporting, norms. Etc etc. Think of them as Disney-like experiences.

The web is not safe. That is correct. We don't want every place to be safe. So people can be wild and experiment and try out new ideas. It's why the web has been the proving ground for so much incredible stuff over its history. Lots of things aren't safe. Crossing the street. Bike riding in Manhattan. Falling in love. We do them anyway. You can't be safe all the time. Life itself isn't safe. If Google succeeds in making the web controlled and bland, we'll just have to reinvent the web outside of Google's sphere.
Let's save some time, and create the new web out of the web itself.

PS: Of course we want parts of the web to be safe. Banking websites, for example. But my blog archive from 2001? Really, there's no need for special provisions there.

Update: A threatening email from Google# On June 20, 2018, I got an email from Google, as the owner of scripting.com. According to the email I should "migrate to HTTPS to avoid triggering the new warning on your site and to help protect users' data." It's a blog. I don't ask for any user data. Google’s “not secure” message means this: “Google tried to take control of the open web and this site said no.”

Last update: Tuesday June 26, 2018; 2:11 PM GMT+0200.

Source
  9. In conjunction with the Cybersecurity Awareness Month celebration in October last year, Google shared some statistical data about how HTTPS usage in Chrome increased across different platforms. Since its inception, Chrome's traffic encryption efforts have made such significant progress that Google now wants to remove the green “Secure” label on HTTPS websites beginning in September, once the search giant launches Chrome 69. The goal of the upcoming change to the browser is to give users the idea that the internet is safe by default by eliminating Chrome’s positive security indicators. Emily Schechter, Product Manager for Chrome Security at Google, also announced in a blog post that starting in October the company will mark all HTTP pages with a red “not secure” warning in the event that a user enters data on an HTTP page. The new label will be part of Chrome 70, which is set for release that month. That's in line with the search giant's upcoming plan, announced last February, to show a grey "not secure" warning in the address bar for HTTP pages starting in July, when Chrome 68 is set to roll out. Source details < Click here >
  10. Let's Encrypt – an SSL/TLS certificate authority run by the non-profit Internet Security Research Group (ISRG) to programmatically provide websites with free certs for their HTTPS websites – on Thursday said it is discontinuing TLS-SNI validation because it's insecure in the context of many shared hosting providers. TLS-SNI is one of three ways Let's Encrypt's Automatic Certificate Management Environment protocol validates requests for TLS certificates, which enable secure connections when browsing the web, along with the confidence-inspiring display of a lock icon. The other two validation methods, HTTP-01 and DNS-01, are not implicated in this issue. The problem is that TLS-SNI-01 and its planned successor TLS-SNI-02 can be abused under specific circumstances to allow an attacker to obtain HTTPS certificates for websites that he or she does not own. Such a person could, for example, find an orphaned domain name pointed at a hosting service, and use the domain – with an unauthorized certificate to make fake pages appear more credible – without actually owning the domain. For example, a company might have investors.techcorp.com set up and pointed at a cloud-based web host to serve content, but not investor.techcorp.com. An attacker could potentially create an account on said cloud provider, and add an HTTPS server for investor.techcorp.com to that account, allowing the miscreant to masquerade as that business – and with a Let's Encrypt HTTPS cert, too, via TLS-SNI-01, to make it look totally legit. It sounds bonkers but we're told some cloud providers allow this to happen. And that's why Let's Encrypt ditched its TLS-SNI-01 validation process.

Ownership

It turns out that many hosting providers do not validate domain ownership. When such providers also host multiple users on the same IP address, as happens on AWS CloudFront and on Heroku, it becomes possible to obtain a Let's Encrypt certificate for someone else's website via the TLS-SNI-01 mechanism.
On Tuesday, Frans Rosén, a security researcher for Detectify, identified and reported the issue to Let's Encrypt, and the organization suspended certificate issuance using TLS-SNI-01 validation, pending resolution of the problem. In his account of his proof-of-concept exploit, Rosén recommended three mitigations: disabling TLS-SNI-01; blacklisting .acme.invalid in certificate challenges, which is required to get a cert via TLS-SNI-01; and looking to other forms of validation, because TLS-SNI-01 and 02 are broken given current cloud infrastructure practices. AWS CloudFront and Heroku have since tweaked their operations based on Rosén's recommendations, but the problem extends to other hosting providers that serve multiple users from a single IP address without domain ownership validation. Late Thursday, after temporarily re-enabling the validation method for certain large hosting providers that aren't vulnerable, Let's Encrypt decided it would permanently disable TLS-SNI-01 and TLS-SNI-02 for new accounts. Those who previously validated using TLS-SNI-01 will be allowed to renew using the same mechanism for a limited time. "We have arrived at the conclusion that we cannot generally re-enable TLS-SNI validation," said ISRG executive director Josh Aas in a forum post. "There are simply too many vulnerable shared hosting and infrastructure services that violate the assumptions behind TLS-SNI validation." Aas stressed that Let's Encrypt will discontinue using the TLS-SNI-01 and TLS-SNI-02 validation methods. Article
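To make the mechanics concrete: the name derivation at the heart of the flaw can be sketched in a few lines of Python. This is illustrative only; the key-authorization string in the test is a made-up placeholder, and the derivation follows the ACME draft that defined TLS-SNI-01 (hex SHA-256 digest of the key authorization, split in half and joined under .acme.invalid).

```python
import hashlib

def tls_sni_01_san(key_authorization):
    """Derive the .acme.invalid dNSName a TLS-SNI-01 responder must
    present: the 64-char hex SHA-256 digest of the key authorization,
    split into two 32-char labels under the .acme.invalid suffix."""
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return "{}.{}.acme.invalid".format(z[:32], z[32:])
```

This is why one of Rosén's mitigations was blacklisting .acme.invalid: any shared host that lets a tenant upload a certificate for a name ending in .acme.invalid on a shared IP can be made to answer someone else's challenge.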
  11. We have a little problem on the web right now and I can only see this becoming a larger concern as time goes by. More and more sites are obtaining certificates, vitally important documents that we need to deploy HTTPS, but we have no way of protecting ourselves when things go wrong.

Certificates

We're currently seeing a bit of a gold rush for certificates on the web as more and more sites deploy HTTPS. Beyond the obvious security and privacy benefits of HTTPS, there are quite a few reasons you might want to consider moving to a secure connection that I outline in my article Still think you don't need HTTPS?. Commonly referred to as 'SSL certificates' or 'HTTPS certificates', the wider internet is obtaining them at a rate we've never seen before in the history of the web. Every day I crawl the top 1 million sites on the web and analyse various aspects of their security, and every 6 months I publish a report. You can see the reports here but the main result to focus on right now is the adoption of HTTPS. Not only are we continuing to deploy HTTPS, the rate at which we're doing so is increasing too. This is what real progress looks like. The process of obtaining a certificate has become simpler and simpler over time and now, thanks to the amazing Let's Encrypt, it's also free to get them. Put simply, we send a Certificate Signing Request (CSR) to the Certificate Authority (CA) and the CA will challenge us to prove our ownership of the domain. This is usually done by setting a DNS TXT record or hosting a challenge code somewhere on a random path on our domain. Once this challenge has been satisfied the CA will issue the certificate and we can then use it to present to the browser and get our green padlock and HTTPS in the address bar. I have a few tutorials to help you out with this process including how to get started, smart renewal and dual certificates. But this is all great, right? What's the problem here?
The problem is when things don't go according to plan and you have a bad day.

We've been hacked

Nobody ever wants to hear those words but the sad reality is that we do, more often than any of us would like. Hackers can go after any number of things when they gain access to our servers, and often one of the things they can access is our private key. The certificates we use for HTTPS are public documents; we send them to anyone that connects to our site. The thing that stops other people using our certificate is that they don't have our private key. When a browser establishes a secure connection to a site, it checks that the server has the private key for the certificate it's trying to use; this is why no one but us can use our certificate. If an attacker gets our private key though, that changes things. Now that an attacker has managed to obtain our private key, they can use our certificate to prove that they are us. Let's say that again. There is now somebody on the internet who can prove they are you, when they are not you. This is a real problem, and before you think 'this will never happen to me', do you remember Heartbleed? This tiny bug in the OpenSSL library allowed an attacker to steal your private key and you didn't even have to do anything wrong for it to happen. On top of this there are countless ways that private keys are exposed by accident or negligence. The simple truth is that we can lose our private key, and when this happens, we need a way to stop an attacker from using our certificate. We need to revoke the certificate.

Revocation

In a compromise scenario we revoke our certificate so that an attacker can't abuse it. Once a certificate is marked as revoked the browser will know not to trust it, even though it's otherwise valid. The owner has requested revocation and no client should accept it. Once we know we've had a compromise, we contact the CA and ask that they revoke our certificate.
We need to prove ownership of the certificate in question and once we do that, the CA will mark the certificate as revoked. Now that the certificate is revoked, we need a way of communicating this to any client that might require the information. Right now the browser doesn't know and of course, that's a problem. There are two mechanisms that we can use to make this information available: a Certificate Revocation List (CRL) or the Online Certificate Status Protocol (OCSP).

Certificate Revocation Lists

A CRL is a really simple concept and is quite literally just a list of all certificates that a CA has marked as revoked. A client can contact the CRL server and download a copy of the list. Armed with a copy of the list, the browser can check whether the certificate it has been provided is on that list. If the certificate is on the list, the browser now knows the certificate is bad and it shouldn't be trusted; it will throw an error and abandon the connection. If the certificate isn't on the list then everything is fine and the browser can continue the connection. The problem with a CRL is that it contains a lot of revoked certificates from the particular CA. Without getting into too much detail, CRLs are broken down per intermediate certificate a CA has, and the CA can fragment the lists into smaller chunks, but the point I want to make remains the same: the CRL is typically not an insignificant size. The other problem is that if the client doesn't have a fresh copy of the CRL, it has to fetch one during the initial connection to the site, which can make things look much slower than they actually are. This doesn't sound particularly great, so how about we take a look at OCSP?

Online Certificate Status Protocol

OCSP provides a much nicer solution to the problem and has a significant advantage over the CRL approach. With OCSP we ask the CA for the status of a single, particular certificate.
This means all the CA has to do is respond with a good/revoked answer, which is considerably smaller than a CRL. Great stuff! It is true that OCSP offered a significant performance advantage over fetching a CRL, but that performance advantage did come with a cost (don't you hate it when that happens?). The cost was a pretty significant one too: it was your privacy... When we think about what an OCSP request is, a request for the status of a very particular, single certificate, you may start to realise that you're leaking some information. When you send an OCSP request, you're basically asking the CA this: Is the certificate for pornhub.com valid? So, not exactly an ideal scenario. You're now advertising your browsing history to some third party that you didn't even know about, all in the name of HTTPS, which set out to give us more security and privacy. Kind of ironic, huh? But wait, there's something else.

Hard fail

I talked about the CRL and OCSP responses above, the two mechanisms a browser can use to check if a certificate is revoked. Upon receiving the certificate, the browser will reach out to one of these services and perform the necessary query to ultimately ascertain the status of the certificate. But what if your CA is having a bad day and the infrastructure is offline? The browser has only two choices here. It can refuse to accept the certificate because it can't check the revocation status, or it can take a risk and accept the certificate without knowing the revocation status. Both of these options come with their advantages and disadvantages. If the browser refuses to accept the certificate, then every time your CA has a bad day and their infrastructure goes offline, your site goes offline too. If the browser continues and accepts the certificate, then it risks using a certificate that could have been stolen and exposes the user to the associated risks.
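The contrast between the two mechanisms can be reduced to a toy model. This is a sketch built only from the description above; the function names and data structures are invented for illustration, not any real browser API.

```python
# CRL model: the client downloads the CA's entire revocation list
# once, then answers status questions locally.
def crl_status(serial, downloaded_crl):
    return "revoked" if serial in downloaded_crl else "good"

# OCSP model: the client asks the CA about one specific serial.
# The answer is tiny, but the CA learns which certificate (and so
# which site) the user is visiting -- the privacy leak, made explicit.
def ocsp_status(serial, ca_revoked_serials, ca_query_log):
    ca_query_log.append(serial)  # what the CA gets to observe
    return "revoked" if serial in ca_revoked_serials else "good"
```

The CRL client pays in bandwidth and freshness; the OCSP client pays in privacy and in a per-connection round trip to the CA.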
It's a tough call, but right now, today, neither of these actually happens...

Soft Fail

What actually happens today is that a browser will do what we call a soft fail revocation check. That is, the browser will try to do a revocation check, but if the response doesn't come back, or doesn't come back in a short period of time, the browser will simply forget about it. Even worse is that Chrome doesn't even do revocation checks, at all. Yes, you did read that right: Chrome doesn't even try to check the revocation status of certificates that it encounters. What you might find even more odd is that I completely agree with their approach, and I'm quite happy to report that right now it looks like Firefox will be joining the party very soon too. Let me explain. The problem we had with hard fail was obvious: the CA has a bad day and so do we. That's how we arrived at soft fail. The browser will now try to do a revocation check but will ultimately abandon the check if it takes too long or it appears the CA is offline. Wait, what was that last part? The revocation check is abandoned if "it appears the CA is offline". I wonder if an attacker could simulate that? If you have an attacker performing a MiTM attack, all they need to do is simply block the revocation request and make it look like the CA is offline. The browser will then soft fail the check and continue on to happily use the revoked certificate. Every single time you browse and encounter this certificate whilst not under attack, you will pay the cost of performing the revocation check to find out the certificate is not revoked. The one time you're under attack, the one time you really need revocation to work, the attacker will simply block the connection and the browser will soft fail through the failure. Adam Langley at Google came up with the best description for what revocation is: it's a seatbelt that snaps in a car crash, and he's right.
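The soft-fail behaviour described above fits in a few lines. This is a sketch, not any browser's actual code; `fetch_ocsp_status` is a stand-in for the real network request.

```python
def soft_fail_check(fetch_ocsp_status, timeout=0.3):
    """Soft-fail revocation checking: ask the responder, but on
    timeout or any network error just carry on as though the
    certificate were fine."""
    try:
        return fetch_ocsp_status(timeout=timeout)  # "good" or "revoked"
    except Exception:
        # Responder down -- or a MitM attacker silently dropping the
        # request. The browser cannot tell the difference, which is
        # exactly the attack described above.
        return "assumed-good"
```

An attacker who can intercept your traffic can also intercept the check that is supposed to catch them, which is why the check adds latency for honest connections and adds nothing against an active attacker.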
You get in your car every day and you put your seatbelt on, and it makes you feel all warm and fuzzy that you're safe. Then, one day, things don't quite go to plan, you're involved in a crash and out of the windscreen you go. The one time you needed it, it let you down.

Fixing the problem

Right now, at this very moment in time, the truth is that there is no reliable way to fix this problem; revocation is broken. There are a couple of things worth bringing up though, and we may be able to look to a future where we have a reliable revocation checking mechanism.

Proprietary mechanisms

If a site is compromised and an attacker gets hold of the private key, they can impersonate that site and cause a fair amount of harm. That's not great, but it could be worse. What if a CA was compromised and an attacker got access to the private key for an intermediate certificate? That would be a disaster, because the attacker could then impersonate pretty much any site they like by signing their own certificates. Rather than doing online checks for revocation of intermediate certificates, Chrome and Firefox both have their own mechanisms that work in the same way. Chrome calls theirs CRLSets and Firefox calls theirs OneCRL, and they curate lists of revoked certificates by combining available CRLs and selecting certificates from them to be included. So, we have high value certificates like intermediates covered, but what about you and I?

OCSP Must-Staple

To explain what OCSP Must-Staple is, we first need a quick background on OCSP Stapling. I'm not going to go into too much info, you can get that in my blog on OCSP Stapling, but here is the TL;DR. OCSP Stapling saves the browser having to perform an OCSP request by providing the OCSP response along with the certificate. It's called OCSP Stapling because the idea is that the server would 'staple' the OCSP response to the certificate and provide both together.
At first glance this seems a little odd, because the server is almost 'self certifying' its own certificate as not being revoked, but it all checks out. The OCSP response is only valid for a short period and is signed by the CA in the same way that the certificate is signed. So, in the same way the browser can verify the certificate definitely came from the CA, it can also verify that the OCSP response came from the CA too. This solves the massive privacy concern with OCSP and also removes the burden on the client of having to perform this external request. Winner! But not so much actually, sorry. OCSP Stapling is great and we should all support it on our sites, but do we honestly think an attacker is going to enable OCSP Stapling? No, I didn't think so; of course they aren't going to. What we need is a way to force the server to OCSP Staple, and this is what OCSP Must-Staple is for. When requesting our certificate from the CA, we ask them to set the OCSP Must-Staple flag in the certificate. This flag instructs the browser that the certificate must be served with an OCSP Staple or it has to be rejected. Setting the flag is easy. Now that we have a certificate with this flag set, we as the host must ensure that we OCSP Staple or the browser will not accept our certificate. In the event of a compromise and an attacker obtaining our key, they must also supply an OCSP Staple when they use our certificate too. If they don't include an OCSP Staple, the browser will reject the certificate, and if they do include an OCSP Staple then the OCSP response will say that the certificate is revoked and the browser will reject it. Tada!

OCSP Expect-Staple

Whilst Must-Staple sounds like a great solution to the problem of revocation, it isn't quite there just yet. One of the biggest problems that I see is that, as a site operator, I don't actually know how reliably I OCSP staple and whether the client is happy with the stapled response.
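The Must-Staple decision logic described above can be written out as a small truth table. A sketch only, with invented names; real browsers implement this inside their TLS stacks.

```python
def accept_certificate(has_must_staple_flag, stapled_status):
    """Browser-side decision for OCSP Must-Staple. `stapled_status`
    is None when the server sent no staple, otherwise the status
    carried in the stapled OCSP response ("good" or "revoked")."""
    if stapled_status == "revoked":
        return False  # a CA-signed, stapled proof of revocation
    if has_must_staple_flag and stapled_status is None:
        return False  # the flag makes a staple mandatory
    return True
```

The attacker holding a stolen key is trapped either way: omit the staple and the flag causes rejection, or include one and it carries the revoked status.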
Without OCSP Must-Staple enabled this isn't really a problem, but if we do enable OCSP Must-Staple and then we don't OCSP Staple properly or reliably, our site will start to break. To try and get some feedback about how we're doing in terms of OCSP Stapling, we can enable a feature called OCSP Expect-Staple. I've written about this before and you can get all of the details in the blog OCSP Expect-Staple, but I will give the TL;DR here. You request an addition to the HSTS preload list that asks the browser to send you a report if it isn't happy with the OCSP Staple. You can collect the reports yourself or use my service, report-uri.io, to do it for you, and you can learn exactly how often you would hit problems if you turned on OCSP Must-Staple. Because getting an addition to the HSTS preload list isn't as straightforward as I'd like, I also wrote a spec to define a new security header called Expect-Staple to deliver the same functionality but with less effort involved. The idea being that you can now set this header and enable reporting to get the feedback we desperately need before enabling Must-Staple. Enabling the header would be simple, just like all of the other security headers:

Expect-Staple: max-age=31536000; report-uri="https://scotthelme.report-uri.io/r/d/staple"; includeSubDomains; preload

Rogue certificates

One of the other things that we have to consider whilst we're on the topic of revocation is rogue certificates. If somebody manages to compromise a CA or otherwise obtains a certificate that they aren't supposed to have, how are we supposed to know? If I were to breach a CA right now and obtain a certificate for your site without telling you, you wouldn't ever learn about it unless it was widely reported. You could even have an insider threat, and someone in your organisation could obtain certificates without going through the proper internal channels and do with them as they please.
We need a way to have 100% transparency, and we will very soon: Certificate Transparency.

Certificate Transparency

CT is a new requirement that will be mandatory from early next year and will require that all certificates are logged in a public log if the browser is to trust them. You can read the article for more details on CT, but what will generally happen is that a CA will log all certificates it issues in a CT log. These logs are totally public and anyone can search them, so the idea is that if a certificate is issued for your site, you will know about it. For example, here you can see all certificates issued for my domain and you can search for your own; you can also use CertSpotter from sslmate to do the same, and I use the Facebook Certificate Transparency Monitoring tool which will send you an email each time a new certificate is issued for your domain/s. CT is a fantastic idea and I can't wait for it to become mandatory, but there is one thing to note: CT is only the first step. Knowing about these certificates is great, but we still have all of the above mentioned problems with revoking them. That said, we can only tackle one problem at once, and having the best revocation mechanisms in the world is no good if we don't know about the certificates we need to revoke. CT gives us that much at least.

Certificate Authority Authorisation

Stopping a certificate being issued is much easier than trying to revoke it, and this is exactly what Certificate Authority Authorisation allows us to start doing. Again, there are further details in the linked article, but the short version is that we can now authorise only specific CAs to issue certificates for us, instead of the current situation where we can't indicate any preference at all. It's as simple as creating a DNS record:

scotthelme.co.uk. IN CAA 0 issue "letsencrypt.org"

Whilst CAA isn't a particularly strong mechanism and it won't help in all mis-issuance scenarios, there are some where it can help us, and we should assert our preference by creating a CAA record.

Conclusion

As it currently stands there is a real problem: we can't revoke certificates if someone obtains our private key. Just imagine how that will play out the next time Heartbleed comes along! One thing that you can do to try and limit the impact of a compromise is to reduce the validity period of certificates you obtain. Instead of three years go for one year or even less. Let's Encrypt only issue certificates that are valid for ninety days! With a reduced lifetime on your certificate you have less of a problem if you're compromised, because an attacker has less time to abuse the certificate before it expires. Beyond this, there's very little we can do. To demonstrate the issue and just how real this is, try to visit the new subdomain I set up on my site, revoked.scotthelme.co.uk. As you can probably guess, this subdomain is using a revoked certificate and there's a good chance that it will load just fine. If it doesn't, if you do get a warning about the certificate having been revoked, then your browser is still doing an OCSP check and you just told the CA you were going to my site. To prove that this soft fail check is pointless, you can add a hosts entry for ocsp.int-x3.letsencrypt.org pointing it at an unreachable address, or block it via some other means, and then try the same thing again. This time the page will load just fine because the revocation check will fail and the browser will continue loading the page. Some use that is... The final point I want to close on is a question, and it's this: should we fix revocation? That's a story for another day though! Article source
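As a footnote to the CAA discussion in the article above: the lookup a CA is expected to perform can be sketched with stub data. This follows the tree-climbing rule from RFC 6844 (the closest ancestor with any CAA records is authoritative); the domain names and record tuples below are illustrative stand-ins for real DNS.

```python
def relevant_caa_records(domain, dns_records):
    """Climb from the queried name toward the root; the first name
    with any CAA records wins. `dns_records` maps names to lists of
    (flags, tag, value) tuples, standing in for real DNS lookups."""
    labels = domain.rstrip(".").split(".")
    for i in range(len(labels)):
        name = ".".join(labels[i:])
        if dns_records.get(name):
            return dns_records[name]
    return []  # no CAA anywhere in the tree: no restriction expressed

def ca_may_issue(ca_domain, domain, dns_records):
    """A CA may issue if no 'issue' records apply, or if it is named."""
    issuers = [value for _flags, tag, value in
               relevant_caa_records(domain, dns_records)
               if tag == "issue"]
    return not issuers or ca_domain in issuers
```

So a single record at the registered domain also covers its subdomains, which is why the one-line record shown in the article is enough.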
  12. We submit hundreds of blacklist review requests every day after cleaning our clients’ websites. Google’s Deceptive Content warning applies when Google detects dangerous code that attempts to trick users into revealing sensitive information. For the past couple of months we have noticed that the number of websites blacklisted with Deceptive Content warnings has increased for no apparent reason. The sites were clean, and there were no external resources loading on the website. Recently, we discovered a few cases where Google removed the Deceptive Content warning only after SSL was enabled. We conducted the following research in collaboration with Unmask Parasites.

What is an SSL Certificate?

Most websites use the familiar HTTP protocol. Those that install an SSL/TLS certificate can use HTTPS instead. SSL/TLS is a cryptographic protocol used to encrypt data while it travels across the internet between computers and servers. This includes downloads, uploads, submitting forms on web pages, and viewing website content. SSL doesn’t keep your website safe from hackers; rather, it protects your visitors’ data. To the average visitor, SSL is what’s behind the green padlock icon in the browser address bar. This icon signifies that communication is secure between the visitor and the web server, and any information sent or received is kept safe from prying eyes. Without SSL, an HTTP site can only transfer information “in the clear”. Therefore, bad actors can snoop on network traffic and steal sensitive user input such as passwords and credit card numbers. The problem is that many visitors don’t notice when SSL is missing on a website.

Google Moves on HTTP/HTTPS

We have seen Google pushing SSL as a best practice standard across the web. Not only are they rewarding sites that use HTTPS, it seems they are steadily cracking down on HTTP sites that should be using HTTPS. In 2014, Google confirmed HTTPS helps websites rank higher in their search engine results.
In January 2017, they rolled out the Not Secure label in Chrome whenever a non-HTTPS website handled credit cards or passwords. Google also announced they would eventually apply the label to all HTTP pages in Chrome, and make the label more obvious: There has been a lot of talk about how to promote SSL and warn users when browsing HTTP sites. Studies show that users do not perceive the lack of a “secure” icon as a warning, but also become blind to warnings that occur too frequently. Our plan to label HTTP sites clearly and accurately as non-secure will take place in gradual steps, based on increasingly stringent criteria. Source: Google Security Blog Perhaps the red triangle warning has not been as effective, and they could be working on even stronger labels through their SafeBrowsing diagnostics.

Blocking Dangerous HTTP Input

In a few recent cases, we had Google review a cleaned website twice over a couple of days, but the requests were denied. Once we enabled SSL, we asked again and they cleared it. Nothing else was changed. We dug further and uncovered a few more cases where this behavior had been replicated. Upon investigation, the websites contained login pages or password input fields that were not being delivered over HTTPS. This could mean that Google is expanding its definition of phishing and deception to include websites that cause users to enter sensitive information over HTTP. We don’t know what Google looks for exactly to make their determination, but it’s safe to assume they look for forms that take passwords by just looking for input type=”password” in the source code of the website when that specific page is not being served over HTTPS. Here’s an example from the Security Issues section of Google Search Console showing messages related to Harmful Content. We see that the WordPress admin area is blocked, as well as a password-protected page. Both URLs start with HTTP, indicating that SSL is missing. Both pages have some form of login requirement.
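The heuristic guessed at above — a password field on a page that is not served over HTTPS — is simple to reproduce. This is a sketch of that guess, not Google's actual detection logic; the URLs and markup in the test are invented.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class _PasswordFieldFinder(HTMLParser):
    """Records whether the document contains <input type="password">."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.found = True

def flags_insecure_login(page_url, page_html):
    """True when a plain-HTTP page contains a password field --
    the condition the warnings described above appear to key on."""
    if urlparse(page_url).scheme != "http":
        return False
    finder = _PasswordFieldFinder()
    finder.feed(page_html)
    return finder.found
```

Running a check like this against your own pages is a quick way to find forms that would trip such a warning before a crawler does.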
Most of these sites were previously hacked, and these warnings remained after the cleanup had been completed. There were a few, however, where there was no previous compromise. In each case, enabling SSL did the trick. As the largest search engine in the world, Google has the power to reduce your traffic by 95% with their blacklist warnings. By blocking sites that should be using HTTPS, Google can protect its users and send a clear message to the webmaster.

Domain Age a Factor

There seems to be another similar factor among the affected websites. Most appear to be recently registered domains, and as such, they did not have time to build a reputation and authority with Google. This could be another factor that Google takes into account when assessing the danger level of a particular website. Some websites were not even a month old, had no malware, and were blacklisted until we enabled SSL.

Google Ranking and Malware Detection

One of the many factors involved in how Google rates a website is how long the site has been registered. Websites with WHOIS records dating back several years gain a certain level of authority. Google’s scanning engines also help limit our exposure to dangerous websites. Phishing attacks often use newly-registered domains until they are blacklisted. New sites need time to develop a reputation. An older website that never had any security incidents is less likely to have any false positive assessment, while a new website won’t have this trust. As soon as Google sees a public page asking for credentials that are not secured by HTTPS, they take a precautionary action against that domain.

HTTP As a Blacklist Signal

Google has been slowly cracking down on HTTP sites that transfer sensitive information and may be starting to label them as potential phishing sites when they have a poor reputation. While Google has not confirmed that SSL is a factor in reviewing blacklist warnings, it makes sense.
Google can ultimately keep their users’ browsing experience as safe as possible, and educate webmasters effectively, by blocking sites that don’t protect the transmission of passwords and credit card numbers. Password handling is a big security concern. Every day there are cases of mishandled passwords, so it’s understandable that Google is testing their power in changing the tides and keeping users safe.

Conclusion

Keeping the communication on your website secure is important if you transmit any sensitive user input. Enabling SSL on your website is a wise decision. Thankfully this has become an easier process in recent years, with many hosts encouraging and streamlining the adoption of SSL. Let’s Encrypt came out of beta over a year ago, and has grown to over 40 million active domains. If you have a relatively new website and want to ensure that Google does not blacklist you for accepting form data, be sure to get SSL enabled on your website. We offer a free Let’s Encrypt SSL certificate with all our firewall packages and are happy to help you get started. Article source
  13. When, in January 2017, Mozilla and Google made Firefox and Chrome flag HTTP login pages as insecure, the intent was to make phishing pages easier to recognize, as well as push more website owners towards deploying HTTPS. But while the latter aim was achieved, and the number of websites deploying HTTPS has increased noticeably, the move also had one unintended consequence: the number of phishing sites using HTTPS has increased, too. “While the majority of today’s phishing sites still use the unencrypted HTTP protocol, a threefold increase in HTTPS phishing sites over just a few months is quite significant,” noted Netcraft’s Paul Mutton. One explanation may be that fraudsters have begun setting up more phishing sites that use secure HTTPS connections. Another may be that they have simply continued compromising websites to set up the phishing pages, but as more legitimate sites began using HTTPS, more phishing pages ended up having HTTPS. Finally, it’s possible that fraudsters are intentionally compromising HTTPS sites so that their phishing login pages look more credible. Whatever the reason – and it might simply be a combination of them all – the change made some phishing attempts even more effective. And so the battle between attackers and defenders continues. Article source
  14. Mozilla plans to implement a change in Firefox 55 that restricts plugins -- read Adobe Flash -- to running on HTTP or HTTPS only. Adobe Flash is the only NPAPI plugin that is still supported by release versions of the Firefox web browser. Previously supported plugins such as Silverlight or Java are no longer supported, and won't be picked up by the web browser anymore. Flash is the only plugin left standing in Firefox. It is also still available for Google Chrome, Chromium-based browsers, and Microsoft Edge, but the technology used to implement Flash is different in those web browsers. Adobe Flash regularly causes stability and security issues in browsers that support it. If you check the latest Firefox crash reports, for instance, you will notice that many top crashes are plugin-related. Security is another hot topic, as Flash is targeted quite often, with new security issues coming to light on a regular basis. Mozilla's plan to run Flash only on HTTP or HTTPS sites blocks execution of Flash on any protocol other than HTTP and HTTPS. This includes, among others, FTP and FILE. Flash content will be blocked completely in these instances. This means that users won't get a "click to play" option or something similar; the resources are simply blocked from being loaded and executed by the Firefox web browser. Mozilla provides an explanation for the decision on the Firefox Site Compatibility website: Firefox 55 and later will prevent Flash content from being loaded from file, ftp or any other URL schemes except http and https. This change aims to improve security, because a different same-origin policy is applied to the file protocol, and loading Flash content from other minor protocols is usually not well-tested. Mozilla is also looking into extending the block to data: URIs. The change should not affect too many Firefox users and developers, but it will surely impact some. 
Mozilla implemented a new preference in Firefox that allows users to bypass the new restriction:

  1. Type about:config in the browser's address bar and hit the Enter-key.
  2. Confirm that you will be careful if the warning prompt appears.
  3. Search for the preference plugins.http_https_only.
  4. Double-click on it.

A value of True enables the blocking of Flash content on non-HTTP/HTTPS pages, while a value of False restores the previous handling so that Flash runs on any protocol. Mozilla suggests, however, that developers set up a local web server for Flash testing if that is the main use case. (via Sören) Article source
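The scheme restriction described above amounts to a simple allowlist check. A minimal sketch in Python (the function name and structure are illustrative only; Firefox's actual implementation is in its C++ plugin-loading code):

```python
from urllib.parse import urlsplit

# Illustrative sketch of the Firefox 55 policy: Flash content may only
# load from http: or https: URLs; every other scheme (file:, ftp:,
# data:, ...) is blocked outright, with no click-to-play fallback.
ALLOWED_SCHEMES = {"http", "https"}

def may_load_flash(url: str) -> bool:
    return urlsplit(url).scheme.lower() in ALLOWED_SCHEMES

print(may_load_flash("https://example.com/movie.swf"))  # True
print(may_load_flash("file:///C:/test/movie.swf"))      # False
print(may_load_flash("ftp://example.com/movie.swf"))    # False
```

Note that the check is on the URL scheme alone, which is why extending the block to data: URIs would be a one-line change to such an allowlist.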
  15. Firefox warns users about unencrypted pages We suppose it was only a matter of time before someone complained about the notifications browsers display when a website accepts logins over unencrypted HTTP pages. In fact, Mozilla has received a complaint about this very "issue." Folks over at Ars Technica spotted the complaint on Mozilla's Bugzilla bug-reporting service. "Your notice of insecure password and/or log-on automatically appearing on the log-in for my website, Oil and Gas International, is not wanted and was put there without our permission. Please remove it immediately. we have our own security system, and it has never been breached in more than 15 years. Your notice is causing concern by our subscribers and is detrimental to our business," the message signed by dgeorge reads. Of course, they seem to be late to the party, since this type of warning has been showing for a few months now and became standard earlier this year for both Firefox and Chrome. The benefits of HTTPS Thankfully, someone from Mozilla came forward and cleared things up for dear ol' dgeorge, telling him that when a site requests a user's password over HTTP, the transmission is done in the clear. "As such, anybody listening on the network would be able to record those passwords. This puts not just users at risk when using your site, but also puts them at risk on any other website that they might share a password with yours," they explain. In the end, it's been proven time and time again that providing emails and passwords over HTTP is no longer safe. For years now, there's been a push for HTTPS, and web admins have been given plenty of time to make the change, both for their sake and their users' sake. Now, Chrome displays a "Not Secure" notification next to the address bar, while Firefox takes things a step further, displaying below the user name and password fields: "This connection is not secure. Logins entered here could be compromised." Source
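To see why "in the clear" matters, consider what an HTTP login form actually puts on the wire. A small sketch using only the Python standard library (the field names and values are made up for illustration):

```python
from urllib.parse import urlencode

# The body of a typical HTTP POST login request. Over plain HTTP these
# exact bytes cross the network unencrypted, so anyone on the path can
# read the password verbatim; HTTPS would wrap them in TLS first.
credentials = {"username": "alice", "password": "hunter2"}
body = urlencode(credentials)

print(body)               # username=alice&password=hunter2
print("hunter2" in body)  # True: the password is right there
```

No "security system" on the server side changes this: the exposure happens in transit, before the request ever reaches the site, which is exactly the point the Mozilla response makes.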
  16. Systems Affected All systems behind a hypertext transfer protocol secure (HTTPS) interception product are potentially affected. Overview Many organizations use HTTPS interception products for several purposes, including detecting malware that uses HTTPS connections to malicious servers. The CERT Coordination Center (CERT/CC) explored the tradeoffs of using HTTPS interception in a blog post called The Risks of SSL Inspection. Organizations that have performed a risk assessment and determined that HTTPS inspection is a requirement should ensure their HTTPS inspection products are performing correct transport layer security (TLS) certificate validation. Products that do not properly ensure secure TLS communications and do not convey error messages to the user may further weaken the end-to-end protections that HTTPS aims to provide. Description TLS and its predecessor, Secure Sockets Layer (SSL), are important Internet protocols that encrypt communications over the Internet between the client and server. These protocols (and protocols that make use of TLS and SSL, such as HTTPS) use certificates to establish an identity chain showing that the connection is with a legitimate server verified by a trusted third-party certificate authority. HTTPS inspection works by intercepting the HTTPS network traffic and performing a man-in-the-middle (MiTM) attack on the connection. In MiTM attacks, sensitive client data can be transmitted to a malicious party spoofing the intended server. In order to perform HTTPS inspection without presenting client warnings, administrators must install trusted certificates on client devices. Browsers and other client applications use this certificate to validate encrypted connections created by the HTTPS inspection product. In addition to the problem of not being able to verify a web server’s certificate, the protocols and ciphers that an HTTPS inspection product negotiates with web servers may also be invisible to a client. 
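For comparison, this is the validation an ordinary client performs by default when no interception product is in the path. Python's ssl module is used here as a stand-in for browser behavior:

```python
import ssl

# A client's default TLS context requires a valid certificate chain and
# a matching hostname -- exactly the checks an HTTPS inspection product
# must reproduce, since the client can no longer perform them end-to-end.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Installing the inspection product's root CA on client devices is what
# makes the product's re-signed certificates pass these default checks.
```

Once the proxy's root CA is trusted, the client's own validation only ever covers the client-to-proxy leg; everything beyond the proxy is invisible to it.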
The problem with this architecture is that the client systems have no way of independently validating the HTTPS connection. The client can only verify the connection between itself and the HTTPS interception product. Clients must rely on the HTTPS validation performed by the HTTPS interception product. A recent report, The Security Impact of HTTPS Interception, highlighted several security concerns with HTTPS inspection products and outlined survey results of these issues. Many HTTPS inspection products do not properly verify the certificate chain of the server before re-encrypting and forwarding client data, allowing the possibility of a MiTM attack. Furthermore, certificate-chain verification errors are infrequently forwarded to the client, leading a client to believe that operations were performed as intended with the correct server. This report provided a method to allow servers to detect clients that are having their traffic manipulated by HTTPS inspection products. The website badssl.com is a resource where clients can verify whether their HTTPS inspection products are properly verifying certificate chains. Clients can also use this site to verify whether their HTTPS inspection products are enabling connections to websites that a browser or other client would otherwise reject. For example, an HTTPS inspection product may allow deprecated protocol versions or weak ciphers to be used between itself and a web server. Because client systems may connect to the HTTPS inspection product using strong cryptography, the user will be unaware of any weakness on the other side of the HTTPS inspection. Impact Because the HTTPS inspection product manages the protocols, ciphers, and certificate chain, the product must perform the necessary HTTPS validations. Failure to perform proper validation or adequately convey the validation status increases the probability that the client will fall victim to MiTM attacks by malicious third parties. 
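One crude way a client can notice that its traffic is being re-signed is to look at the issuer of the certificate it actually received. This is only a hypothetical heuristic, not the detection method from the report, and the expected-issuer list is an assumption for illustration:

```python
def looks_intercepted(peer_cert, expected_orgs=("Let's Encrypt", "DigiCert Inc")):
    """peer_cert is a dict shaped like ssl.SSLSocket.getpeercert():
    its 'issuer' entry is a tuple of RDNs, each a tuple of
    (attribute, value) pairs. Returns True when the issuing
    organization is not one we expect for this site, e.g. a corporate
    inspection proxy's private CA."""
    issuer = dict(pair for rdn in peer_cert["issuer"] for pair in rdn)
    return issuer.get("organizationName") not in expected_orgs

# A certificate re-signed by an inspection proxy's private CA:
proxy_cert = {"issuer": ((("organizationName", "Corp Proxy CA"),),)}
print(looks_intercepted(proxy_cert))  # True
```

A real deployment would pin the expected CA per site rather than keep a global list, but the principle is the same: the issuer the client sees names the proxy, not the public CA.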
Solution Organizations using an HTTPS inspection product should verify that their product properly validates certificate chains and passes any warnings or errors to the client. A partial list of products that may be affected is available at The Risks of SSL Inspection. Organizations may use badssl.com as a method of determining whether their preferred HTTPS inspection product properly validates certificates and prevents connections to sites using weak cryptography. At a minimum, if any of the tests in the Certificate section of badssl.com prevent a client with direct Internet access from connecting, those same clients should also refuse the connection when connected to the Internet by way of an HTTPS inspection product. In general, organizations considering the use of HTTPS inspection should carefully consider the pros and cons of such products before implementing them. Organizations should also take other steps to secure end-to-end communications, as presented in US-CERT Alert TA15-120A. Article source
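As a concrete starting point, a client behind an inspection product should still be unable to connect to the certificate-error hosts on badssl.com. A sketch of such a check (the connection attempts need network access, so they only run when the script is executed directly; the host list is a small subset of badssl.com's certificate tests):

```python
import ssl
import urllib.error
import urllib.request

# badssl.com certificate tests that a properly validating client -- or
# HTTPS inspection product -- must REFUSE to connect to.
MUST_FAIL = [
    "https://expired.badssl.com/",
    "https://wrong.host.badssl.com/",
    "https://self-signed.badssl.com/",
    "https://untrusted-root.badssl.com/",
]

def connection_refused(url: str) -> bool:
    """True if the TLS handshake is (correctly) rejected."""
    try:
        urllib.request.urlopen(url, timeout=10)
        return False  # connected: validation is broken somewhere
    except (ssl.SSLError, urllib.error.URLError):
        return True

if __name__ == "__main__":
    for url in MUST_FAIL:
        verdict = "refused" if connection_refused(url) else "ACCEPTED (bad!)"
        print(url, verdict)
```

If any of these hosts is reachable from behind the proxy, the inspection product is accepting certificates the client itself would have rejected.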
  17. Hi all, I've recently run into an issue while using the Chrome browser: Google.com doesn't load in Chrome, but other sites work fine! It's not a DNS issue, because everything works fine in Firefox. Can someone suggest something? Thanks in advance. Screenshot