Showing results for tags 'dns'.

Found 29 results

  1. Mozilla published a list of requirements that companies need to meet if they want to be included as Trusted Recursive Resolvers for Firefox's upcoming DNS-over-HTTPS feature. DNS-over-HTTPS aims to improve user privacy, security and the reliability of connections by sending and receiving DNS information using HTTPS.

Mozilla ran a Shield study in 2018 to test the DNS-over-HTTPS implementation in Firefox Nightly versions. The organization selected Cloudflare as its partner for the study after Cloudflare agreed to Mozilla's requirements not to keep records or sell or transfer data to third parties. The selection of Cloudflare as the first partner was controversial.

Firefox users may configure DNS-over-HTTPS in the browser, and Mozilla plans to make it the default in Firefox going forward. While that is beneficial overall, doing so comes with its own set of issues and concerns: Firefox will use the feature for DNS-related activities instead of the DNS configured on the computer, which means local hosts files, resolvers, and custom DNS providers will be ignored. Firefox users may still disable the feature once Mozilla makes the switch from off to on, though.

The organization wants to select a number of companies for use as Trusted Recursive Resolvers in the Firefox web browser. To address concerns in regards to privacy, Mozilla created a list of policies that these organizations need to conform to:

  • User data may only be retained for up to 24 hours, and only "for the purpose of operating the service". Aggregate data may be kept for longer.
  • Personal information, IP addresses, user query patterns, or other data that may identify users may not be retained, sold, or transferred.
  • Data gathered from acting as a resolver may not be combined with other data that "can be used to identify individual users".
  • Rights to user data may not be sold, licensed, sublicensed or granted.
  • The resolver must support DNS Query Name Minimisation (to improve privacy, the resolver does not send the full original QNAME to the upstream name server) and must not "propagate unnecessary information about queries to authoritative name servers".
  • Organizations need a "public privacy notice specifically for the resolver service" and must publish a transparency report "at least yearly".
  • The company that operates the resolver should not block or filter domains unless required by law. Organizations need to maintain public documentation that lists all blocked domains and maintain a log that highlights when domains get added or removed.
  • The resolver needs to provide an "accurate NXDOMAIN response" when a domain cannot be resolved and must not alter the response, e.g. redirect a user to alternative content.

Mozilla's system will be opt-out, meaning it is enabled by default for all Firefox users unless Mozilla changes that prior to integration in Firefox Stable.

Source: Mozilla still on track to enable DNS-over-HTTPS by default in Firefox (gHacks - Martin Brinkmann)
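The Query Name Minimisation requirement is easy to picture with a short sketch. The toy function below (an illustration only, not part of any real resolver) lists the progressively longer names a minimising resolver reveals while walking down the DNS hierarchy, rather than sending the full name to every server:

```python
def minimised_qnames(fqdn):
    """Return the query names a QNAME-minimising resolver sends, one per zone.

    The root servers are asked only about the TLD, the TLD servers only about
    the next label, and so on; only the final server sees the full name.
    """
    labels = fqdn.rstrip(".").split(".")
    # Reveal one additional label per step, starting from the TLD.
    return [".".join(labels[-i:]) for i in range(1, len(labels) + 1)]

queries = minimised_qnames("www.example.com")
# -> ['com', 'example.com', 'www.example.com']
```

A traditional resolver would instead send "www.example.com" to the root, the .com servers, and the example.com servers alike, leaking the full name at every level.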
  2. selesn777

    NetSetMan Pro 3.7.3 Retail

    NetSetMan is a network settings manager which can easily switch between 6 different, visually structured profiles including IP addresses, gateways (incl. metric), DNS servers, WINS servers, IPv4 and IPv6, extensive WiFi management, computer name, workgroup, DNS domain, default printer, network drives, NIC status, SMTP server, hosts and scripts. NetSetMan offers you a powerful, easy-to-use interface to manage all your network settings at a glance.

Main features:
  • Management for network settings (LAN & WLAN)
  • Tray info for all current IP settings
  • NSM Service to allow use without admin privileges
  • Administration for defining usage permissions
  • Quick switch from the tray icon
  • Auto-saving of all settings
  • Command line activation
  • Quick access to frequently used Windows locations
  • Two different user interfaces (Full & Compact)

Version: 3.7.3 - 2014-06-03 (Free vs Pro)
Website: http://www.netsetman.com/
OS: Windows XP / Vista / 7 / 8 (x86-x64)
Language: Multilanguage
Medicine: Keygen
Size: 3.66 MB
  3. In recent years, there has been an explosion of services designed to let you access geo-restricted content from anywhere in the world. Originally, VPNs were all the rage, but with the VPN clampdown by services like Netflix and BBC iPlayer, some users have turned to smart DNS providers instead. For people who are desperate to access such apps, both have pros and cons. Of course, changing your DNS servers or using a VPN can have exceptional benefits outside the world of geo-blocking, but many users won't care about those benefits. To help you out, I'm going to focus on the two solutions specifically from the standpoint of someone who is using them to access blocked content. What are they? How do they work? And, most importantly, what impact do they have on your online security? Keep reading to find out.

What Is a VPN?

A VPN (Virtual Private Network) lets you connect to a secure private network remotely. VPNs are widely used by companies to allow employees to access databases and business-critical apps when they are out of the office. Connecting to a VPN (such as ExpressVPN or any provider in our best VPNs list) will direct all your internet traffic to the new network, and you effectively do your browsing through that network. In addition to getting around geo-blocking, VPNs significantly improve your online security and privacy. In an age when it seems like every company in the world is trying to get access to your data and browsing history, everyone should be using one.

What Is DNS?

DNS stands for "Domain Name System." It's like the phone book of the internet: DNS servers are responsible for pairing web domains (such as google.com) with the site's underlying IP address. As such, changing your DNS provider away from your ISP's default service can bring awesome benefits, including faster browsing, parental controls, and increased security. Unlike regular DNS, smart DNS directs users to a proxy server which is specifically designed to help unblock restricted content.

How Do VPNs Help Access Restricted Content?

When connecting to a VPN, your computer acts like it's in the physical location of the VPN network. More importantly, websites see an IP address in a particular location and automatically assume you're based there. For example, if you live in the United Kingdom and connect to a VPN in the United States, websites will display the American version of the site.

What's the Problem With VPNs?

In the last couple of years, websites that offer streaming content have started blocking users on VPNs. It's surprisingly straightforward to achieve: the companies collate a list of IP addresses used by VPN providers and block any traffic that originates from them. Of course, some IP addresses will always slip through the cracks, resulting in a game of whack-a-mole between the content providers and VPN companies.

How Do DNS Servers Help Access Restricted Content?

With the ever-decreasing reliability of VPNs for accessing geo-blocked content, users have been migrating to smart DNS providers instead. The principle is the same as with VPNs: both your computer and the websites you visit are spoofed into thinking you're in a different place from your true locale. However, while the effect for the user is the same, the underlying process is very different. A smart DNS receives information about a user's location and changes it to a new location before resolving the IP query. It does this by routing your traffic through a dedicated proxy server located in the country where the website you want to visit is based.

The Security Implications of VPNs

VPNs are the number one weapon in the battle to keep yourself safe from prying eyes. If you use a VPN, the biggest benefit is encrypted traffic: a hacker won't be able to see what you're doing online, and neither will your ISP. Your traffic passes through a secure tunnel to the VPN network and won't be visible to anyone until it enters the public internet. And remember, if you only visit HTTPS sites, your browsing will always be encrypted.

If you're choosing a VPN provider, you still need to pay attention to the VPN protocols. Most providers offer SSL/TLS, PPTP, IPSec, and L2TP, but they are not all equal, especially from a security standpoint. For example, there are known vulnerabilities in PPTP, with many problems deriving from the authentication processes it uses. As a rule of thumb, you should use SSL-based protocols. The most security-conscious VPNs won't even anonymously log traffic. Theoretically, logs could allow a VPN provider to match an IP address and a timestamp to one of their customers. If the provider finds itself on the end of a court's subpoena because some of its users have been accessing illegal content or downloading copyrighted videos, the company might "fold" rather quickly and relinquish any information it has.

The Security Implications of Smart DNS

Smart DNS servers are not security measures. Yes, some top-end DNS providers introduce technology such as DNS-over-HTTPS and DNSSEC, but you won't find those features on services that solely focus on forging your location. Most importantly, smart DNS services do not encrypt your data. This dramatically increases their speed compared to VPNs (a big reason why they're popular among cord-cutters), but they will not hide your traffic from companies, websites, your ISP, governments, or anyone else who wants to spy on you. Ultimately, all your traffic is logged against your IP address, and anyone with the right tools can view it.

You're also putting yourself at risk from man-in-the-middle (MITM) attacks, which occur when an attacker intercepts and alters traffic between two parties who believe they are communicating directly with each other. DNS servers are one of the main ways in which hackers launch MITM attacks, and it is very easy for an unscrupulous smart DNS provider to offer rock-bottom prices and then conduct DNS hijacking on all its customers. Look no further than the now-infamous Hola VPN incident to see how low some people are willing to stoop in the pursuit of profit. Before signing up with a smart DNS provider, spend a few hours carefully studying the company's privacy policy. It will help shed light on what your provider is logging, what it knows about you, and whether it is profiting off your data.

The Bottom Line

If you are desperate to watch the latest season of Orange Is The New Black, you need to give VPNs a wide berth: they are unreliable and no longer fit for purpose if you want to unblock content. Instead, you should use a smart DNS service. However, you should also use a VPN service; if you value your privacy and security, there is no better way to keep yourself safe online. Remember, smart DNS providers do not help your security — if anything, they hinder it.

Article source
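The smart DNS redirection described in the article can be pictured with a toy lookup table. This is purely illustrative (the domain and the RFC 5737 example addresses are made up; real smart DNS services run their own resolver and proxy infrastructure):

```python
# Toy model of smart DNS: domains on the unblock list resolve to a proxy in
# the content's home country; every other name gets the real answer.
PROXY_IPS = {"streaming.example": "203.0.113.10"}  # RFC 5737 example address

def smart_resolve(domain, normal_resolve):
    """Return a proxy IP for unblock-listed domains, else the real answer."""
    return PROXY_IPS.get(domain) or normal_resolve(domain)
```

Because only the answer for selected domains is swapped, the rest of your traffic flows untouched and unencrypted, which is exactly why the approach is fast but offers no privacy.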
  4. On May 31, 2018 we had a 17 minute outage on our resolver service; this was our doing and not the result of an attack.

Cloudflare is protected from attacks by the Gatebot DDoS mitigation pipeline. Gatebot performs hundreds of mitigations a day, shielding our infrastructure and our customers from L3/L4 and L7 attacks. (The original post includes a chart of the count of daily Gatebot actions this year.) In the past, we have blogged about our systems: Meet Gatebot, a bot that allows us to sleep. Today, things didn't go as planned.

Gatebot

Cloudflare's network is large, handles many different types of traffic and mitigates different types of known and not-yet-seen attacks. The Gatebot pipeline manages this complexity in three separate stages:

  • attack detection - collects live traffic measurements across the globe and detects attacks
  • reactive automation - chooses appropriate mitigations
  • mitigations - executes mitigation logic on the edge

The benign-sounding "reactive automation" part is actually the most complicated stage in the pipeline. We expected that from the start, which is why we implemented this stage using a custom Functional Reactive Programming (FRP) framework. If you want to know more about it, see the talk and the presentation.

Our mitigation logic often combines multiple inputs from different internal systems to come up with the best, most appropriate mitigation. One of the most important inputs is the metadata about our IP address allocations: we mitigate attacks hitting HTTP and DNS IP ranges differently. Our FRP framework allows us to express this in clear and readable code. For example, this is part of the code responsible for performing DNS attack mitigation:

    def action_gk_dns(...):
        [...]
        if port != 53:
            return None
        if whitelisted_ip.get(ip):
            return None
        if ip not in ANYCAST_IPS:
            return None
        [...]

It's the last check in this code that we tried to improve today. Clearly, the code above is a huge oversimplification of all that goes into attack mitigation, but making an early decision about whether the attacked IP serves DNS traffic or not is important: if the IP does serve DNS traffic, then attack mitigation is handled differently from IPs that never serve DNS. It's that check that went wrong today.

Cloudflare is growing, so must Gatebot

Gatebot was created in early 2015. Three years may not sound like much time, but since then we've grown dramatically and added layers of services to our software stack. Many of the internal integration points that we rely on today didn't exist then. One of them is what we call the Provision API. When Gatebot sees an IP address, it needs to be able to figure out whether or not it's one of Cloudflare's addresses. The Provision API is a simple RESTful API used to provide this kind of information. It is a relatively new API, and prior to its existence, Gatebot had to figure out which IP addresses were Cloudflare addresses by reading a list of networks from a hard-coded file. In the code snippet above, the ANYCAST_IPS variable is populated using this file.

Things went wrong

Today, in an effort to reclaim some technical debt, we deployed new code that introduced Gatebot to the Provision API. What we did not account for, and what the Provision API didn't know about, was that our resolver ranges are special IP ranges. Frankly speaking, almost every IP range is "special" for one reason or another, since our IP configuration is rather complex. But our recursive DNS resolver ranges are even more special: they are relatively new, and we're using them in a very unique way. Our hardcoded list of Cloudflare addresses contained a manual exception specifically for these ranges. As you might be able to guess by now, we didn't implement this manual exception while we were doing the integration work. Remember, the whole idea of the fix was to remove the hardcoded gotchas!

Impact

The effect was that, after pushing the new code release, our systems interpreted the resolver traffic as an attack. The automatic systems deployed DNS mitigations for our DNS resolver IP ranges for 17 minutes, between 17:58 and 18:13 UTC on May 31st. This caused the DNS resolver to be globally inaccessible.

Lessons Learned

While Gatebot, the DDoS mitigation system, has great power, we failed to test the changes thoroughly. We are using today's incident to improve our internal systems. Our team is incredibly proud of our resolver and Gatebot, but today we fell short. We want to apologize to all of our customers. The next time we mitigate traffic, we will make sure there is a legitimate attack hitting us.

< Here >
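The kind of membership check that went wrong here, a range lookup with a manual exception for special ranges, can be sketched with Python's standard ipaddress module. The ranges below are RFC 5737 documentation addresses chosen for illustration, not Cloudflare's real allocations:

```python
import ipaddress

# Hypothetical ranges for illustration only (RFC 5737 documentation space).
ANYCAST_NETS = [ipaddress.ip_network("192.0.2.0/24")]
RESOLVER_NETS = [ipaddress.ip_network("198.51.100.0/24")]  # the "special" ranges

def classify_ip(ip_str):
    """Classify an attacked IP, honouring the manual resolver exception.

    Losing the RESOLVER_NETS check is the failure mode described in the post:
    resolver traffic would then be treated like ordinary anycast DNS traffic.
    """
    ip = ipaddress.ip_address(ip_str)
    if any(ip in net for net in RESOLVER_NETS):
        return "resolver"
    if any(ip in net for net in ANYCAST_NETS):
        return "anycast-dns"
    return "not-ours"
```

The point of moving such data into a Provision API is precisely so that exceptions like RESOLVER_NETS live in one authoritative place instead of a hard-coded file.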
  5. Jime234

    Changing Mobile Data DNS

    Hi, I wanted to change the DNS of the mobile data connection on my Android smartphone. It's a simple process to change the DNS for WiFi, but mobile data is just something else. I've searched and tried some apps to change the DNS, but I don't know whether they worked or not; there is no way to check! Has anyone here tried it?
  6. Smart multi-homed name resolution is a DNS-related feature that Microsoft introduced in Windows 8 and implemented in Windows 10 as well. The feature is designed to speed up DNS resolution on a device running Windows 8 or newer by sending DNS requests across all available network adapters. Microsoft refined the feature in Windows 10, where it selects the information that is returned the fastest automatically.

While the feature makes sense from a performance point of view, it introduces a privacy issue. If you connect to a VPN network on a Windows machine, for instance, smart multi-homed name resolution may lead to DNS leakage: since requests are sent out to all network adapters at the same time, all configured DNS servers receive the requests, and with them information on the sites that you visit.

Turn off smart multi-homed name resolution in Windows

Microsoft introduced a Registry key and a policy to manage the feature in Windows 8.

Registry (Windows 8.x only)

Note: manipulating the Registry may lead to issues if done incorrectly. It is suggested that you create a backup of the Windows Registry before you continue. This can be done by selecting a Registry hive in the Registry Editor, and then File > Export from the menu bar.

1. Open the Windows Registry Editor. One easy option is to tap on the Windows-key, type regedit.exe, and hit the Enter-key. Windows throws a UAC prompt which you need to confirm.
2. Go to HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\DNSClient
3. If the Dword value DisableSmartNameResolution exists already, make sure it is set to 1.
4. If it does not exist, right-click on DNSClient, and select New > Dword (32-bit) Value from the menu.
5. Name it DisableSmartNameResolution.
6. Set its value to 1. You may turn the feature back on at any time by setting the value to 0, or by deleting the Dword value.
7. Go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters
8. If the Dword value DisableParallelAandAAAA exists already, make sure its value is set to 1.
9. If the value does not exist, right-click on Parameters, and select New > Dword (32-bit) Value.
10. Name it DisableParallelAandAAAA.
11. Set the value of the Dword to 1. You can turn the feature back on by setting the value to 0, or by deleting the value.

I have created a Registry file that makes both changes to the Windows Registry when executed. You can download it with a click on the following link: disable-smart-name-resolution.zip https://www.ghacks.net/download/136552/

Group Policy (Windows 8 and Windows 10)

The Registry key that worked under Windows 8 does not seem to work under Windows 10 anymore. Windows 10 users and admins may, however, set a policy to turn the feature off. The policy's description reads:

"Specifies that a multi-homed DNS client should optimize name resolution across networks. The setting improves performance by issuing parallel DNS, link local multicast name resolution (LLMNR) and NetBIOS over TCP/IP (NetBT) queries across all networks. In the event that multiple positive responses are received, the network binding order is used to determine which response to accept. If you enable this policy setting, the DNS client will not perform any optimizations. DNS queries will be issued across all networks first. LLMNR queries will be issued if the DNS queries fail, followed by NetBT queries if LLMNR queries fail."

Note that the Group Policy Editor is only available in professional editions of Windows 10. Windows 10 Home users may want to check out Policy Plus, which introduces policy editing to Home editions of Windows 10.

1. Open the Group Policy Editor in Windows: tap on the Windows-key on the keyboard, type gpedit.msc, and hit the Enter-key.
2. Go to Computer Configuration > Administrative Templates > Network > DNS Client > Turn off smart multi-homed name resolution.
3. Set the policy to Enabled to disable the smart multi-homed name resolution feature of the system.

Closing Words

Some DNS clients that you may run on Windows machines come with DNS leak protection to prevent these leaks. OpenVPN users may enable the block-outside-dns option in the client, for instance, to do so.

Source
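For reference, the two registry changes from steps 1-11 above can be combined into a single .reg file. This is a sketch that mirrors the keys and values listed in the article (not the author's downloadable file); back up the registry before importing it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Policies\Microsoft\Windows NT\DNSClient]
"DisableSmartNameResolution"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters]
"DisableParallelAandAAAA"=dword:00000001
```

Double-clicking the file in Explorer (or running reg import on it) applies both values; setting them to 0 or deleting them reverses the change, as described above.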
  7. straycat19

    Internet Speed Up By Changing DNS

    What DNS is best for you? The big question is how to find a new DNS and how to know it will be any better than your current one. Google has a solution called namebench. This lightweight program will test your DNS against other popular DNS servers. Once it finishes the comparison, it will give you detailed statistics on performance and recommend the best DNS for you to use.

Download Instructions

To download the program, navigate to the namebench download page by using the links at the end of this article. On the left side of the namebench download page, there is a green header labeled "Featured." Here is where you will find the program you need. PC users should click the download link ending in "Windows.exe"; Mac users should select the download link ending in "Mac_OS_X.dmg". You will be redirected to another page that has another download link. This link should be highlighted in green and have the same name as the previous download link you clicked. Note: if the download link is not highlighted in green or is different from the first, do not click on it; it is not the download link you're looking for. Click the highlighted download link and your download will begin immediately. After the download is complete, extract the installation files. namebench will launch automatically.

Managing namebench

On the first window, you'll see a field labeled Nameservers. This will automatically be filled with the IP address of your current DNS. Below the Nameservers field are two checkboxes: one says "Include global DNS providers," the other says "Include best available regional DNS services." Leave both of these checked.

The next area is for secondary options. The first checkbox lets you check whether the DNS is blocking certain sites. If you're looking for a DNS with filtering options, definitely select this one; it will tell you how effective a DNS is at blocking unwanted content. The second checkbox will publish your results anonymously. This helps provide more accurate results to you and others in the future. You can leave it blank or check it; neither will affect the comparisons you're given.

Next, set the location dropdown to your country. In the Query Data Source dropdown menu, select your default browser. If you aren't sure which browser you use, visit this site. In the Health Check Performance section, you normally want this set to Fast, which will test the speed of 40 nameservers. But if your Internet connection is slow or unreliable, change Fast to "Slow (unstable network)." For the number of queries, the standard 250 should be sufficient, but if you're on a slower network, you may want to decrease the amount.

Once you've got your settings in place, hit Start Benchmark. While namebench is running, you should avoid using the Internet, as that can affect its results. When namebench completes, it will open a new browser window with your results. There's a lot of information in this window, so we'll focus on the most important parts. The first box gives you a DNS recommendation and tells you the possible speed increase by switching. The box immediately to the right gives you the settings of the recommended DNS and two backups. Below these boxes are a series of charts and graphs that visualize and break down the performance of each DNS. You can find exact details about each graph at namebench's wiki.

Change your router settings

Now that you know which DNS is best for you, you need to change the settings on your router; that improves all the gadgets on your network. To edit your router settings, open your browser, type in your router's IP address, and enter your username and password. You can find your router's default IP address and login information in the router manual. Once you access the router's settings, take a look under the basic settings. You should see fields for Primary DNS and Secondary DNS. Write down both of the IP addresses in case you need to go back to them later. Next, replace the existing IP addresses with the Primary and Secondary IP addresses from namebench's "Recommended configuration" box. Then save your router settings and log out.

If you don't have a router, you can change the DNS settings right on your computer. For Windows, look under Start >> Control Panel >> Network and Internet >> Network and Sharing Center. Click the "Manage network connections" link on the left. Right-click on the Local Area Connection icon and select Properties. Under the Networking tab, click on Internet Protocol Version 4 and click the Properties button. Under the General tab, click "Use the following DNS server addresses", enter the DNS addresses provided by namebench, and click OK.

On a Mac, go to System Preferences >> Network. Click the lock icon in the lower left corner and enter your password. Select Built-in Ethernet and click Advanced. Select the DNS tab and click the + icon. Add the DNS addresses from namebench and put them at the top of the list. Click Apply and OK.

Flush the old DNS cache

Once your DNS is changed on your router or computer, there is still one more task: flush your computer's current DNS cache. This prevents it from trying to use the old DNS server to look up sites you visit often. To flush your DNS on Windows Vista or later, type CMD into the search field in the Start menu and hit Enter. A Command window should open. Type "ipconfig /flushdns" (minus quotes) and hit Enter; you should see "Successfully flushed the DNS Resolver Cache." To flush your DNS on Mac OS X, click on Spotlight (the magnifying glass at the top right), type in Terminal, and hit Enter. When the Terminal window opens, enter "dscacheutil -flushcache" (no quotes) and hit Enter. You should see "bash-2.05a$ dscacheutil -flushcache" if all went well.

Namebench Downloads Page
Namebench Wiki
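Under the hood, what namebench does amounts to timing the same queries against many resolvers and ranking them by latency. A minimal sketch of that idea (the helper name and the resolver callables are made up for illustration; this is not namebench's actual code):

```python
import time

def rank_resolvers(resolvers, names):
    """Rank DNS resolvers by mean lookup latency, fastest first.

    resolvers: mapping of server label -> resolve(name) callable.
    names: hostnames to look up against every resolver.
    """
    mean_latency = {}
    for server, resolve in resolvers.items():
        start = time.perf_counter()
        for name in names:
            resolve(name)  # a real tool would send an actual DNS query here
        mean_latency[server] = (time.perf_counter() - start) / len(names)
    return sorted(mean_latency, key=mean_latency.get)
```

A real benchmark would also repeat each query, discard cache-warmed outliers, and test reachability, which is why namebench runs hundreds of queries rather than one per server.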
  8. PS, Alpine users, you need to get patching, too – for other reasons

Systemd, the Linux world's favorite init monolith, can potentially be crashed or hijacked by malicious DNS servers. Patches are available to address the security flaw, and should be installed ASAP if you're affected.

Looking up a hostname from a vulnerable Systemd-powered PC, handheld, gizmo or server can be enough to trigger an attack by an evil DNS service: the software's resolved component can be fooled into allocating too little memory for a lookup response, and when a large reply is eventually received, this data overflows the buffer, allowing the attacker to overwrite memory. This can crash the process or lead to remote code execution, meaning the remote evil DNS service can run malware on your box.

"A malicious DNS server can exploit this by responding with a specially crafted TCP payload to trick systemd-resolved in to allocating a buffer that's too small, and subsequently write arbitrary data beyond the end of it," explained Chris Coulson, of Ubuntu maker Canonical, who discovered the out-of-bounds write in systemd-resolved.

The programming blunder, assigned the ID CVE-2017-9445, was accidentally introduced in Systemd version 223 in June 2015 and is present all the way up to and including version 233, released in March this year. This means it is present in Ubuntu versions 17.04 and 16.10, and Canonical has put out a pair of fixes for 17.04 and 16.10 to address the flaw. The bug is technically present in Debian Stretch (aka Debian 9), Buster (aka 10) and Sid (aka Unstable); however, "systemd-resolved is not enabled by default in Debian," according to the project's Salvatore Bonaccorso, so either you have nothing to worry about, or you can apply the patch yourself or hang tight for the next point release. Various other Linux distros use Systemd, too: check to make sure there are no updates available and ready to install for your version of systemd-resolved via the usual package manager. If there are, well, you know what to do.

Meanwhile, researcher Ariel Zelivansky has found some security bugs in Alpine Linux's package manager apk. The flaws, assigned CVE-2017-9669 and CVE-2017-9671, allow remote code execution on Alpine Linux instances (including Docker runs) via buffer overflows in the handling of package files. "The only prerequisite would be to figure out the memory layout of the program," Zelivansky said. "Protections like ASLR or other hardenings may block the attacker from succeeding, but he may be able to get around it and still achieve execution."

Article source
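The affected-version window the article gives (introduced in 223, present through 233) is easy to encode as a quick triage check. A hypothetical helper, not an official tool; note that a version inside this range may still be safe if the distro has backported the fix or systemd-resolved is disabled:

```python
# CVE-2017-9445 window per the article: systemd 223 (June 2015) up to and
# including 233 (March 2017). Distro backports may already patch a version
# in this range, so treat a True result as "check your vendor advisory".
AFFECTED_VERSIONS = range(223, 234)

def resolved_version_in_window(version):
    """True if this upstream systemd version carries the vulnerable code."""
    return version in AFFECTED_VERSIONS
```

On a live system you would compare against the output of systemctl --version and then consult your distribution's security tracker.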
  9. Stanners

    Free Domain

    Hi Guys, another gem, this one for a free domain name. Go to: http://www.freenom.com/en/index.html?lang=en Enter the domain name you want and select Check Availability. Choose which domain you want, go to checkout, and use the drop-down to select 12 Months. From here, enter your email address and follow the bouncing ball to checkout with your new domain. After that you're good to go! Again, thanks to boulawan for his post about OneDrive storage. The original site the offer was posted at is http://a.v9s.win/ - I just used Chrome and translated the pages, if you guys want to check it out, as they have some other interesting stuff.
  10. By analyzing network traffic going to suspicious domains, security administrators could detect malware infections weeks or even months before they're able to capture a sample of the invading malware, a new study suggests. The findings point toward the need for new malware-independent detection strategies that will give network defenders the ability to identify network security breaches in a more timely manner. The strategy would take advantage of the fact that malware invaders need to communicate with their command and control computers, creating network traffic that can be detected and analyzed. Having an earlier warning of developing malware infections could enable quicker responses and potentially reduce the impact of attacks, the study’s researchers say. “Our study shows that by the time you find the malware, it’s already too late because the network communications and domain names used by the malware were active weeks or even months before the actual malware was discovered,” said Manos Antonakakis, an assistant professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “These findings show that we need to fundamentally change the way we think about network defense.” Traditional defenses depend on the detection of malware in a network. While analyzing malware samples can identify suspicious domains and help attribute network attacks to their sources, relying on samples to drive defensive actions gives malicious actors a critical time advantage to gather information and cause damage. “What we need to do is minimize the amount of time between the compromise and the detection event,” Antonakakis added. The research, which will be presented May 24 at the 38th IEEE Security and Privacy Symposium in San Jose, California, was supported by the U.S. Department of Commerce, the National Science Foundation, the Air Force Research Laboratory and the Defense Advanced Research Projects Agency. 
The project was done in collaboration with EURECOM in France and the IMDEA Software Institute in Spain – whose work was supported by the regional government of Madrid and the government of Spain. In the study, Antonakakis, Graduate Research Assistant Chaz Lever and colleagues analyzed more than five billion network events from nearly five years of network traffic carried by a major U.S. internet service provider (ISP). They also studied domain name server (DNS) requests made by nearly 27 million malware samples, and examined the timing for the re-registration of expired domains – which often provide the launch sites for malware attacks. “There were certain networks that were more prone to abuse, so looking for traffic into those hot spot networks was potentially a good indicator of abuse underway,” said Lever, the first author of the paper and a student in Georgia Tech’s School of Electrical and Computer Engineering. “If you see a lot of DNS requests pointing to hot spots of abuse, that should raise concerns about potential infections.” The researchers also found that requests for dynamic DNS were related to bad activity, as these often correlate with services used by bad actors because they provide free domain registrations and the ability to quickly add domains. The researchers had hoped that the registration of previously expired domain names might provide a warning of impending attacks. But Lever found there was often a lag of months between when expired domains were re-registered and attacks from them began. The research required development of a filtering system to separate benign network traffic from malicious traffic in the ISP data. The researchers also conducted what they believe is the largest malware classification effort to date to differentiate the malicious software from potentially unwanted programs (PUPs). 
To study similarities, they assigned the malware to specific “families.” By studying malware-related network traffic seen by the ISPs prior to detection of the malware, the researchers were able to determine that malware signals were present weeks and even months before new malicious software was found. Relating that to human health, Antonakakis compares the network signals to the fever or general feeling of malaise that often precedes identification of the microorganism responsible for an infection. “You know you are sick when you have a fever, before you know exactly what’s causing it,” he said. “The first thing the adversary does is set up a presence on the internet, and that first signal can indicate an infection. We should try to observe that symptom first on the network because if we wait to see the malware sample, we are almost certainly allowing a major infection to develop.” In all, the researchers found more than 300,000 malware domains that were active for at least two weeks before the corresponding malware samples were identified and analyzed. But as with human health, detecting a change indicating infection requires knowledge of the baseline activity, he said. Network administrators must have information about normal network traffic so they can detect the abnormalities that may signal a developing attack. While many aspects of an attack can be hidden, malware must always communicate back to those who sent it. “If you have the ability to detect traffic in a network, regardless of how the malware may have gotten in, the action of communicating through the network will be observable,” Antonakakis said. “Network administrators should minimize the unknowns in their networks and classify their appropriate communications as much as possible so they can see the bad activity when it happens.” Antonakakis and Lever hope their study will lead to development of new strategies for defending computer networks. 
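The hot-spot heuristic Lever describes, flagging DNS lookups that resolve into networks with a history of abuse, can be sketched roughly as follows. The network prefixes and domains below are hypothetical placeholders (documentation ranges), not data from the study:

```python
from ipaddress import ip_address, ip_network

# Hypothetical "hot spot" prefixes with a history of abuse; in practice
# these would come from threat-intelligence feeds or passive DNS analysis.
HOT_SPOTS = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]

def is_hot_spot(resolved_ip: str) -> bool:
    """Return True if a DNS answer points into a known hot spot of abuse."""
    addr = ip_address(resolved_ip)
    return any(addr in net for net in HOT_SPOTS)

# A stream of (queried domain, resolved IP) pairs, as seen in passive DNS logs.
observations = [
    ("update.example-cdn.com", "198.51.100.77"),
    ("www.example.org", "192.0.2.10"),
]
suspicious = [dom for dom, ip in observations if is_hot_spot(ip)]
print(suspicious)  # -> ['update.example-cdn.com']
```

A real deployment would of course score volumes over time rather than flag single lookups, but the baseline idea is this membership test.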
“The choke point is the network traffic, and that’s where this battle should be fought,” said Antonakakis. “This study provides a fundamental observation of how the next generation of defense mechanisms should be designed. As more complicated attacks come into being, we will have to become smarter at detecting them earlier.” In addition to those already mentioned, the study included Davide Balzarotti from EURECOM, and Platon Kotzias and Juan Caballero from IMDEA Software Institute. Article source
  11. The fix is simple: turn your modem on and off again to get a new IP address. Or ask your ISP to assign them more often Domain-name lookups only tell you site visits, not pages viewed, right? Wrong: the interaction between a user and the Domain Name System is more revealing than previously believed, according to a paper from German postdoc researcher Dominik Herrmann. In work published at pre-print server Arxiv (in German – thank you, Google Translate), Herrmann writes that behavioural tracking using recursive name servers is a genuine privacy risk. DNS – the infrastructure that converts, say, www.theregister.co.uk into the IP address – does, of course, reveal which sites a user visits. However, as Herrmann writes, that is an association between the user's public-facing IP address and the requests they make. Since ISPs have to use dynamic IP addresses to cope with the IPv4 address shortage, a user's address changes, making it harder to track them over time. However, Herrmann writes, someone with access to the infrastructure can easily watch a user's behaviour while they have one IP address, create a classifier for that user, and look for behaviour that matches that classifier when the IP address changes. “Each user pursues his interests and preferences while surfing, and ... each user has a unique combination of interests and preferences,” the paper states. Visits from one IP to Google followed by favourite newspapers, shopping sites, government services or transport are enough to identify a user when they pop up under a different IP, Herrmann reckons, and this “behavioural chaining” doesn't have to rely on tracking cookies. To put this idea to the test, Herrmann ran a naive Bayes classifier over five months of anonymised DNS data from the University of Regensburg, covering thousands of users. 
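A classifier of the kind Herrmann used can be sketched in miniature. This is a toy naive Bayes over per-user domain counts with invented users and domains, not his actual pipeline:

```python
import math
from collections import Counter

def train(profiles):
    """profiles: {user: [domains queried]} -> per-user smoothed log-prob models."""
    vocab = {d for doms in profiles.values() for d in doms}
    models = {}
    for user, doms in profiles.items():
        counts = Counter(doms)
        total = len(doms) + len(vocab)  # Laplace smoothing denominator
        models[user] = {d: math.log((counts[d] + 1) / total) for d in vocab}
    return models

def chain(models, session):
    """Attribute an unlabeled session (list of domains seen from a new IP)
    to the most likely known user; unseen domains contribute nothing."""
    def score(user):
        return sum(models[user].get(d, 0.0) for d in session)
    return max(models, key=score)

models = train({
    "user_a": ["news.example.de", "uni.example.de", "mail.example.com"],
    "user_b": ["games.example.net", "video.example.net", "mail.example.com"],
})
print(chain(models, ["news.example.de", "uni.example.de"]))  # -> user_a
```

The real experiment additionally had to handle tens of thousands of candidate users and shifting interests, but the chaining step is exactly this kind of argmax over per-user behavioural models.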
In a sample of 3,800 students over two months, behavioural chaining correctly identified 86 per cent of individuals from one IP address to the next; and when the experiment was run for 12,000 students the accuracy remained high, at 76 per cent. Why worry? Herrmann offers two observations about why this is more worrying than it may appear at first sight. People will correctly point out that DNS resolves only as far as (for example) www.wikipedia.org – a DNS record doesn't show law enforcement that someone read en.wikipedia.org/wiki/Alcoholism, so their privacy is intact. Not so, he responds: “Many websites produce such a distinctive DNS retrieval pattern” that requests can be recognised “more or less unequivocally.” In an analysis of the retrieval of 5,000 Wikipedia entries, 6,200 news posts on Heise, and the top 100,000 websites, most pages showed unique retrieval patterns, he writes. In many countries' data retention regimes, the IP addresses a user visits are recorded, but browser histories are off limits. Herrmann asserts that law enforcement could use DNS records, IP address records, and behavioural chaining to reconstruct a more detailed browsing history than most users expect. Behavioural chaining can, however, be disrupted by ISPs, should they wish, by refreshing users' IP addresses more frequently. With an hourly change to IP address, Herrmann writes, the reconstruction fails 45 per cent of the time, and at five-minute changes, accuracy drops to 31 per cent – and if the user is inactive for enough intervals, “the trail disappears.” Article source
  12. Can someone tell me whether the firewall can block outgoing DNS requests from a computer? DNS Firewall 4.01 does not offer this. The program is good, but unfortunately it starts too late after the operating system loads.
  13. Researchers devised two correlation attacks, dubbed DefecTor, to deanonymize Tor users, drawing also on data from observation of DNS traffic at Tor exit relays. Law enforcement and intelligence agencies devote significant effort to fighting illegal activities on the Dark Web, where threat actors operate under pseudo-anonymity. A group of security researchers at Princeton University, Karlstad University and KTH Royal Institute of Technology has devised two new correlation attack techniques to deanonymize Tor users. “While the use of Tor constitutes a significant privacy gain over off-the-shelf web browsers, it is no panacea, and the Tor Project is upfront about its limitations. These limitations are not news to the research community. It is well understood that low-latency anonymity networks such as Tor cannot protect against so-called global passive adversaries. We define such adversaries as those with the ability to monitor both network traffic that enters and exits the network.” says Phillip Winter, a researcher at Princeton University who was involved in the research. The techniques, dubbed DefecTor by the researchers, leverage observation of the DNS traffic from Tor exit relays, so the methods can complement existing attack strategies. “We show how an attacker can use DNS requests to mount highly precise website fingerprinting attacks: Mapping DNS traffic to websites is highly accurate even with simple techniques, and correlating the observed websites with a website fingerprinting attack greatly improves the precision when monitoring relatively unpopular websites.” reads the analysis published by the researchers. “Our results show that DNS requests from Tor exit relays traverse numerous autonomous systems that subsequent web traffic does not traverse. 
We also find that a set of exit relays, at times comprising 40% of Tor’s exit bandwidth, uses Google’s public DNS servers—an alarmingly high number for a single organization. We believe that Tor relay operators should take steps to ensure that the network maintains more diversity in how exit relays resolve DNS domains.” The test results obtained with the DefecTor techniques are impressive; however, such attacks require significant effort, typically from persistent attackers such as government bodies. The simulations conducted by the researchers allowed them to identify the vast majority of visitors to unpopular sites. The experts highlighted that Google operates public DNS servers that observe almost 40% of all DNS requests exiting the Tor network, a privileged point of observation for attackers. Google is also able to monitor some network traffic entering the Tor network; the experts cited as examples traffic via Google Fiber or via guard relays occasionally running in Google’s cloud. “There are entities on the Internet such as ISPs, autonomous systems, or Internet exchange points that can monitor some DNS traffic but not web traffic coming out of the Tor network and potentially use the DNS traffic to deanonymize Tor users.” says Winter. “Past traffic correlation studies have focused on linking the TCP stream entering the Tor network to the one(s) exiting the network. We show that an adversary can also link the associated DNS traffic, which can be exposed to many more autonomous systems than the TCP stream.” The researchers also developed a tool, dubbed “DNS Delegation Path Traceroute” (dptr), that can be used to determine the DNS delegation path for a fully qualified domain name. The tool runs UDP traceroutes to all DNS servers on the path, which are then compared to a TCP traceroute to the web server behind the same fully qualified domain name. 
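The first step of the attack, mapping a burst of observed DNS lookups to a known website pattern, can be illustrated with a simple set-similarity sketch. The fingerprints, domains and threshold below are invented for illustration and are far cruder than the classifiers in the paper:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of domain names."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical DNS "fingerprints": the set of domains a page load resolves.
fingerprints = {
    "site-one.example": {"site-one.example", "cdn.one.example", "ads.tracker.example"},
    "site-two.example": {"site-two.example", "img.two.example"},
}

def identify(observed, threshold=0.5):
    """Match a burst of observed DNS lookups to the closest known pattern,
    or return None when nothing is similar enough."""
    best = max(fingerprints, key=lambda s: jaccard(observed, fingerprints[s]))
    return best if jaccard(observed, fingerprints[best]) >= threshold else None

print(identify({"site-one.example", "cdn.one.example"}))  # -> site-one.example
```

The paper's point is that for unpopular sites even such coarse matching over exit-side DNS traffic is precise enough to boost a correlation attack.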
Meanwhile, experts from the Tor Project are already working on a series of significant improvements to the popular anonymizing network. In March the Tor Project revealed how the organization conducted a three-year effort to improve its ability to detect fraudulent software. While Tor developers are already working on implementing techniques to make website fingerprinting attacks harder to execute, there are other actions that can be taken to prevent DefecTor attacks, such as Tor relay operators ensuring that the network maintains more diversity in how exit relays resolve DNS domains. The experts invite the security community to review their paper; for further information visit the DefecTor project page. Source: http://securityaffairs.co/wordpress/51848/deep-web/defector-tor-deanonymizing.html
  14. Keenow (aka Keen One World) is a FREE Smart DNS app which allows you to easily access top streaming sites from anywhere in the world. Currently Keenow unblocks more than 70 different websites, which are listed here. Homepage The free version of Keenow requires the installation of one of its browser extensions, which are listed below. A paid version will be available soon as well. Install one of the following browser extensions: Keen MediaTab.TV (for Chrome) Keen Media Runner (for Firefox) Guide
  15. Switching your DNS servers can improve web performance, enhance security and help you reach some sites you can’t normally access. It’s awkward to do this manually, but Change DNS Helper is a free tool which makes the process much easier. A straightforward interface displays everything you need to know in a single tab. Choose a target network adapter, your preferred DNS server, click "Change DNS" and you’re done. The initial DNS server list includes more than 20 options, including Google, OpenDNS, Comodo, Yandex and Norton. These are stored in an INI file and are very easy to change. Here’s the basic format. [IPV4] US – Google Public DNS=, US – Comodo Secure DNS=, Just add your new DNS server and IP addresses in the form name=, and it’ll appear in the list. Unusually, Change DNS Helper enables changing DNS servers for IPv4 and IPv6 connections separately. There are also some handy options to reset, save or back up your DNS settings. You’ll be briefly annoyed by features like a "Hide IP Address" button, which doesn’t hide your IP address itself, instead opening your web browser at the Hide My Ass site (gee, thanks). But once you know not to click it, this doesn’t really matter. There’s no shortage of similar tools around, but Change DNS Helper’s IPv4 and IPv6 support and its lengthy, editable default list of DNS servers help the program stand out. Give it a try. Change DNS Helper requires Windows XP or later. Article source Similar DNS changer tool.
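The article's inline example lost the IP addresses after each `=`. Assuming the entries take the form name=primary,secondary (an assumption based on the surrounding description), a reconstructed snippet of the INI file might look like this, using the well-known public addresses for those two services:

```ini
[IPV4]
US – Google Public DNS=8.8.8.8,8.8.4.4
US – Comodo Secure DNS=8.26.56.26,8.20.247.20
```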
  16. Prototype security tool stops clicks on bad links, blocking DNS lookup for 24 hours. Go ahead and click it. You know you want to. Clickers gonna click. Despite mandatory corporate training, general security awareness, and constant harping about the risks of clicking on unverified links in e-mails and other documents, people have been, are now, and forever will click links where exploit kits and malware lurk. It's simply too easy with the slightest amount of targeted work to convince users to click. Eric Rand and Nik Labelle believe they have an answer to this problem—an answer that could potentially derail not just phishing attacks but other kinds of malware as well. Instead of relying on the intelligence of users, Rand and Labelle have been working on software that takes humans completely out of the loop in phishing defense by giving clicks on previously unseen domains a time out, "greylisting" them for 24 hours by default. The software, a project called Foghorn, does this by intercepting requests made to the Domain Name Service (DNS). Greylisting has been used in spam filtering for e-mails, where it deliberately delays e-mails delivered from previously unseen sources and sends temporary errors back to the sender for a few minutes or hours. Spam greylisting operates under the assumption that a real mail server will re-attempt delivery, while spambots likely will not. Foghorn applies the same approach to unseen domain names, but it does so for a different reason: many of the domains behind phishing attacks are active for less than 24 hours before they're rotated to another domain, according to an Anti-Phishing Working Group survey. As Rand said in his presentation about Foghorn at DefCon, "Lots of people are very invested in taking [phishing domains] down quickly, so phishers have to keep moving." By delaying the availability of previously unseen domains, the likelihood of users getting phished could be significantly reduced. 
Plus, known good domains can always be whitelisted. Additionally, greylisting domains can cut off the command and control for botnet malware that may have already infected systems on the network, since many botnets use random domain generation algorithms to evade detection and change the domains they access frequently—sometimes in as little as hours or minutes. Foghorn is a proof-of-concept prototype DNS greylisting system. Built with Python and the Twisted event-driven networking engine, Foghorn acts as a DNS proxy, filtering outbound DNS requests from devices on a local network. Before being activated, it can be set in "baselining" mode to collect a list of domains typically visited by users on the systems to be protected—these can be "whitelisted" to ensure that they're always reachable. According to Rand's whitepaper describing the project, after Foghorn is activated, "when a workstation attempts to fetch a DNS record not previously seen on the network, the greylister will initially resolve that domain to some locally-controlled asset rather than allowing the request to complete; after some timeout period, the request will then resolve as normal." If a domain isn't requested again within a certain amount of time—by default, seven days—Foghorn resets it for greylisting again. This is intended to protect against phishes from previously safe domains that might get hijacked when they expire. The sites that get greylisted also get recorded in logs by Foghorn, and those logs can be used by a security information and event management (SIEM) system or other security tools to alert administrators to potential attacks (while also identifying which users clicked on them). Foghorn is still very much a work in progress. It currently only handles requests for "A" records—records in the DNS listing for a domain that specify the Internet Protocol address associated with a particular name. 
It also doesn't catch requests made directly to specific IP addresses instead of domain names, so links that use an IP address instead of a hostname will slip past unless an HTTP proxy blocks them. The approach may not be a good fit for everyone, either. But given the cost (and futility) of phishing training, Foghorn may be a great idea for smaller businesses—and you may want to set it up on your parents' home network while you're at it. Article source
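The quarantine logic described in the whitepaper, a 24-hour greylist for unseen domains with a seven-day idle reset, can be sketched as follows. This is a minimal illustration of the policy, not Foghorn's actual code, and the class name and defaults are invented:

```python
import time

class Greylister:
    """Minimal sketch of DNS greylisting in the spirit of Foghorn (hypothetical API)."""

    def __init__(self, greylist_period=24 * 3600, forget_after=7 * 24 * 3600):
        self.greylist_period = greylist_period  # 24h quarantine for new domains
        self.forget_after = forget_after        # re-greylist after 7 idle days
        self.whitelist = set()                  # known good domains, always allowed
        self.first_seen = {}                    # domain -> start of current quarantine
        self.last_seen = {}                     # domain -> last request timestamp

    def allow(self, domain, now=None):
        """Return True if a DNS request for this domain should resolve normally."""
        now = time.time() if now is None else now
        if domain in self.whitelist:
            return True
        # If the domain has been idle too long, treat it as brand new again.
        if domain in self.last_seen and now - self.last_seen[domain] > self.forget_after:
            del self.first_seen[domain]
        self.last_seen[domain] = now
        self.first_seen.setdefault(domain, now)
        return now - self.first_seen[domain] >= self.greylist_period

g = Greylister()
g.whitelist.add("example.com")
print(g.allow("example.com", now=0))            # whitelisted -> True
print(g.allow("phish.example.net", now=0))      # first sighting -> greylisted, False
print(g.allow("phish.example.net", now=90000))  # more than 24h later -> True
```

In a real proxy, a `False` answer would mean resolving the name to a locally controlled asset and logging the attempt for the SIEM, as the article describes.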
  17. However careful you are online, it's always possible to get caught out by a maliciously coded website or advert that leads to malware ending up on your machine. Online safety service SafeDNS is launching a new system for detecting malicious internet resources, which it claims blocks close to 100 percent of them for better online protection. Using continuous machine learning and user behavior analysis, the new SafeDNS system takes a step forward from static lists of categorized resources to dynamically created databases. The SafeDNS research team has produced a technology able to detect malicious internet resources with 98 percent precision. "This unparalleled technology developed by the company's research team takes SafeDNS to a different, much higher level -- on par with global leaders of the industry, as our ability to detect and filter out malware and botnets has significantly improved," says Dmitry Vostretsov, CEO of SafeDNS. "The technology gives SafeDNS a competitive edge as it detects malicious resources overlooked by the analogous systems of other vendors". It works by processing and analyzing data from the company's filtering service to pinpoint malicious resources. One of the most important attributes used is group activity: malicious resources tend to be requested by a group of users over a short period of time, such as a few hours. If a resource is legitimate, it is requested by occasional users rather than a fixed group of them; this pattern can be used to identify and blacklist malicious sites. Sites are ranked on a continuous basis and the information fed into the SafeDNS database, which drives its filtering system. Information provided by the new system is also available for use through the company's open API of categorized internet resources. More information on SafeDNS for home and business use is available on the company's website. Article source
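The group-activity attribute SafeDNS describes can be sketched with a toy scoring function. The window size, input format and scoring semantics here are assumptions for illustration, not the company's algorithm:

```python
from collections import defaultdict

def group_activity_score(requests, window=3 * 3600):
    """For each domain, the fraction of distinct clients whose first request
    falls inside one short burst window starting at the earliest sighting.
    High values suggest coordinated (bot-like) group activity."""
    seen = defaultdict(dict)  # domain -> {client: first request timestamp}
    for ts, client, domain in requests:
        seen[domain].setdefault(client, ts)
    scores = {}
    for domain, firsts in seen.items():
        times = sorted(firsts.values())
        start = times[0]
        burst = sum(1 for t in times if t - start <= window)
        scores[domain] = burst / len(times)
    return scores

# (timestamp, client, domain) tuples from a filtering service's logs.
requests = [
    (0, "c1", "evil.example"), (600, "c2", "evil.example"), (1200, "c3", "evil.example"),
    (0, "c1", "normal.example"), (50000, "c4", "normal.example"),
]
print(group_activity_score(requests))
```

Here every client hits `evil.example` within minutes of each other (score 1.0), while `normal.example` is visited by occasional, unrelated users (score 0.5), matching the pattern the article describes.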
  18. Sinkholes and watering holes are two expressions not automatically associated with computer security, yet they are used to describe two tactics in this field. Both are set up in order to disrupt the “normal” flow of things. This post aims to introduce both these expressions and explain the differences, so you won’t get them confused. Sinkhole A DNS sinkhole in cyberspace is a means of taking away traffic from the intended target. It is often used as a defense mechanism against botnets. The DNS of the Command and Control (C&C) server(s) is interrupted and the traffic can either be dropped or rerouted for analysis. One objective of analysis is to get an overview of the drones in the botnet that are under control of the C&C. The Windows hosts file that blocks traffic to known malicious domains can be considered a miniature sinkhole, as it can be used to ‘drop’ the traffic to all the domains listed in the hosts file by rerouting it to 127.0.0.1 (localhost). In computer networking, localhost is a hostname that resolves to ‘this computer’, so the traffic never leaves the computer. On a larger scale, network administrators can use DNS sinkholing to prevent access to malicious URLs at an enterprise level by deploying an internal DNS sinkhole server. The request can trigger a custom page telling the user that the requested domain is blacklisted. However, this will not work against threats that use their own DNS resolver. A very special kind of sinkholing against botnets was performed by Kaspersky in the first Hlux/Kelihos takedown. After reverse engineering the workings of the botnet, they managed to introduce a sinkhole and make all the drones talk to that machine instead of the other controllers. Watering holes Watering holes are used as an aimed attack strategy. The attacker infects a website where he knows his intended victim(s) visit regularly. 
Depending on the nature of the infection, he can single out his intended target(s) or just infect anyone that visits the site unprotected. The watering hole strategy is a mix of social engineering, hacking, and drive-by infections, which requires a high level of knowledge and a well-thought-out strategy. This is normally used against high-profile targets and organizations of great importance as a way to get a foothold inside such an organization by infecting one or more of their systems. The attacker needs the following knowledge to perform the watering hole technique successfully: A website that is visited on a regular basis by the target A vulnerability on the target's system that can be exploited A way to infect the site with their exploit of choice Telling them apart An easy way to remember what’s what is to keep in mind their real-life equivalents. A sinkhole absorbs anything that comes near, and a watering hole is a pub, a place that attracts people and where they are more likely to show their weaknesses. Links Understanding DNS Sinkholes – A weapon against malware Building a sinkhole that never clogs Article source
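The miniature hosts-file sinkhole described above might look like the snippet below (on Windows the file lives at C:\Windows\System32\drivers\etc\hosts; the blocked domains are hypothetical examples):

```
# Miniature sinkhole: route known-bad domains back to this computer
# so the traffic never leaves the machine.
127.0.0.1  malicious-tracker.example
127.0.0.1  botnet-cnc.example
```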
  19. SafeDNS [email protected] Edition - 1 Year[365 Days] Promo Links: Offer: https://www.safedns.com/auth/register Note: Limited Period Offer. Current Status: Open. Steps: Visit the registration link above. Enter your email and set your password. Enter "free-software-license" in Promotional Code. The default billing plan will be assigned automatically. Click on "Register" button to complete the registration. In the next page, you'll be automatically logged in. You can refer to the Quick Setup instructions to complete the installation process. Downloads: Windows: https://www.safedns.com/downloads/SafeDNS-Agent-Setup.exe - [6 MB]
  20. Block Malware, Ransomware, and Phishing With 5 Layers There's no silver bullet to keeping intruders and hackers out, but you can minimize the risk by combining approaches They really are after you. I spent last week in London at the InfoSec conference, where it wasn't only the vendors talking up security scares, but also the attendees I spoke with in the aisles and meeting rooms. Almost all of them had some personal experience with spear-phishing or a ransomware attack. At InfoSec plenty of solutions were proffered to blunt such attacks, from point products to suites. It became clear that there are five key layers needed to defend your company from such attacks. Email security gateway It’s obvious that the majority of breaches these days begin with an email. The email may contain a link to some kind of identity-theft site, instructions for a wire transfer, or a weaponized attachment. Spear phishing, whaling, ransomware, spam, and malware are all serious threats and nuisances that your organization faces. You should have some sort of email gateway, be it on-premises software, a cloud-based tool, or an appliance. It's best if you use a layered approach where you enhance what you already have, such as within Exchange or Exchange Online. DNS security The use of a DNS protection tool like OpenDNS is an oft-ignored option for fighting against attacks. Because every link a user clicks must reach out to a DNS server for resolution before the user can actually open the link, having a tool that can learn from all those clicks can provide a shield. Endpoint protection From a technical perspective, endpoint (client) protection is your last point of protection against attacks. Endpoint protection tools range from antimalware software to multifactor-authentication VPNs, and you'll need more than one. 
User behavior analytics User behavior analytics tools watch for trends in your user base so that you can see when red flags come up, like a user who typically downloads 10 documents a day starting to download 1,000. There is no denying that Big Brother watching your organization is a key approach going forward in a world where more and more of the people attacking you are already on the inside and trusted to a degree. Phishing testing and training Raising awareness and increasing the education level of the user (your weakest link in security) is essential. Not surprisingly, there are tools to help. With such a tool, you can test your users on a regular basis to see how they react to real-world attacks, then require they take additional training (repeatedly). It's the only way you can get them to be so paranoid about harmful links that they think twice before clicking. In the end, a targeted attack may still get through all these layers. But by implementing all five layers, you minimize the chances of being a victim of such an attack. Source
  21. IETF opens RFC 7873 proposal for public debate A proposal submitted to the Internet Engineering Task Force (IETF) details Domain Name System (DNS) Cookies, an extra security layer for the DNS protocol. The security measures included in the RFC 7873 proposal describe a system named DNS Cookies, which are 64-bit keys generated on the client side to authenticate DNS requests on the server side. DNS Cookies should not be confused with browser cookies. Servers will be able to verify a client's origin based on a 64-bit key (code), which is computed from each user's IP address, the DNS server's IP address, and a client secret code. DNS Cookies make DDoS attacks highly impractical According to Donald Eastlake of Huawei and Mark Andrews of ISC, deploying DNS Cookies will make it harder for attackers to spoof DNS requests because they'll have to supply a correctly calculated 64-bit key. Spoofing the origin of DNS requests is the technique on which cyber-criminals rely when launching DoS and DDoS attacks using the DNS protocol. By increasing the resources needed to launch such attacks, security researchers hope to make these attacks impractical for attackers. DNS protocol is very popular with DDoS stressers DNS has been a popular protocol for launching DDoS attacks, and more precisely, reflection DDoS attacks. In the first three months of the year, according to Akamai, DNS was the second most popular protocol used for reflection DDoS attacks after NTP. By abusing technical flaws in the DNS protocol itself, an attacker can blast a server with malformed DNS requests that contain the victim's IP address spoofed in place of the attacker's address. The DNS server not only sends the DNS response to the victim's spoofed IP address, but sends it many times over because of the above-mentioned technical flaws, amplifying the attack, hence the name "reflection DDoS." 
DNS Cookies will allow a server to detect the authenticity of incoming DNS requests and drop all packets that don't have a proper 64-bit key. DNS Cookies are also efficient against DoS and cache poisoning attacks RFC 7873 will also help protect servers against simple DoS attacks. These are cases where an attacker sends a massive number of DNS requests to the server in order to make it use all of its resources. Attackers usually spoof their malicious DNS requests with multiple IP addresses in order to avoid getting the real source of their attack blacklisted (their own IP). DNS Cookies, once again, make it easier to reject all spoofed traffic. Additionally, DNS cache poisoning can also be thwarted by DNS Cookies, Eastlake and Andrews claim. DNS Cookies are easier to implement compared to existing DNS security systems The two also say that existing DNS security systems such as DNSSEC and DNS Message/Transaction Security "do not provide the services provided by the DNS Cookie mechanism: lightweight message authentication of DNS requests and responses with no requirement for pre-configuration or per-client server-side state." Both DNSSEC and DNS Message/Transaction Security (TSIG) are notoriously difficult to set up, especially TSIG, which needs pre-agreement and key distribution between client and server, keeping track of server-side key state, and time synchronization between client and server. The RFC (Request for Comments) 7873 proposal is currently under public debate, and the IETF awaits everyone's input before moving forward with making it an official spec. Article source
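The cookie construction the article describes, a 64-bit value derived from the client IP, the server IP and a client secret, can be sketched as follows. RFC 7873 does not mandate a specific pseudorandom function, so the truncated HMAC used here is purely illustrative:

```python
import hashlib
import hmac
import os

def client_cookie(client_ip: str, server_ip: str, secret: bytes) -> bytes:
    """Illustrative 64-bit client cookie: a keyed hash over both IP addresses.
    (RFC 7873 leaves the exact function to the implementation; HMAC-SHA256
    truncated to 8 bytes is used here only to show the shape of the scheme.)"""
    msg = f"{client_ip}|{server_ip}".encode()
    return hmac.new(secret, msg, hashlib.sha256).digest()[:8]

secret = os.urandom(16)  # a secret value known only to this client
c1 = client_cookie("192.0.2.10", "198.51.100.1", secret)
c2 = client_cookie("192.0.2.10", "198.51.100.1", secret)
c3 = client_cookie("203.0.113.5", "198.51.100.1", secret)
print(len(c1), c1 == c2)  # 8 True: deterministic for the same client/server pair
```

An off-path attacker spoofing the client's address cannot compute this value without the secret, which is what makes forged requests cheap to reject.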
  22. For message authentication, not for tracking. Promise! A proposal raised late May at the Internet Engineering Task Force (IETF) suggests adding cookies to the DNS to help defend the critical system against denial-of-service exploits. The domain name system (DNS) is an old and fundamental piece of the Internet architecture, providing translation between human-readable addresses like theregister.co.uk and IP addresses. The DNS has also been exploited several times over the years as a traffic amplifier in DoS attacks. RFC 7873, authored by Donald Eastlake (Huawei*) and Mark Andrews (ISC*), puts forward the intriguing notion that a simple cookie deployment could help. They describe DNS Cookies as “a lightweight DNS transaction security mechanism” for clients and servers. While the idea offers only “limited protection”, the authors say it can help address denial-of-service, amplification, forgery, and cache poisoning attacks. For the privacy-conscious, the authors note that their proposal calls for DNS cookies to be returned only to their originating IP address, preventing them from being used as a tracking mechanism. The protection offered by the DNS cookie, the RFC says, comes from the fact that an attacker would have to guess the 64-bit pseudorandom value of the cookie. The client cookie would be calculated using the client's IP address, the server IP address, and “a secret value known only to the client”. The client's IP address, the client cookie, and a secret known to the server would be used to calculate the server cookie. Here's how the authors imagine the cookies would help in various DNS attack scenarios: DoS attacks using forged addresses – the basic DNS denial-of-service attack uses a forged client address. 
The cookie doesn't block such attacks – but it does identify the client by its real IP address, which makes attribution more feasible, making the attack less anonymous; DNS amplification – from the RFC: “Enforced DNS Cookies would make it hard for an off-path attacker to cause any more than rate-limited short error responses to be sent to a forged IP address, so the attack would be attenuated rather than amplified”. Server DoS – any DNS request accepted by a server uses its resources, making it relatively straightforward to hose a server by flooding it with requests. The cookies make it very easy to reject forged requests “before any recursive queries or public key cryptographic operations are performed.” Cache poisoning and answer forgery attacks – the DNS cookies let resolvers reject forged replies. The RFC also lays out an incremental rollout scheme for DNS cookies. Bootnote: *IETF RFCs are the work of individuals, not their employers, but affiliations get a mention as a courtesy. Article source
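The two derivations the RFC describes — client cookie from client IP, server IP, and a client secret; server cookie from client IP, client cookie, and a server secret — can be sketched as follows. The 64-bit pseudorandom function is not mandated by the RFC, so `blake2b` and the example secrets here are assumptions.

```python
import hashlib

def _prf64(*parts: bytes) -> bytes:
    """64-bit pseudorandom value over the concatenated inputs."""
    h = hashlib.blake2b(digest_size=8)
    for p in parts:
        h.update(p)
    return h.digest()

def client_cookie(client_ip: str, server_ip: str, client_secret: bytes) -> bytes:
    # Client's IP address + server's IP address + a secret known only to the client.
    return _prf64(client_ip.encode(), server_ip.encode(), client_secret)

def server_cookie(client_ip: str, c_cookie: bytes, server_secret: bytes) -> bytes:
    # Client's IP address + the client cookie + a secret known to the server.
    return _prf64(client_ip.encode(), c_cookie, server_secret)
```

Because each cookie is bound to the requester's real IP address, a forged source address produces a value the server will not recognize.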
  23. straycat19

    Acrylic DNS Proxy

    Acrylic is a local DNS proxy for Windows which improves the performance of your computer by caching the responses coming from your DNS servers, and helps you fight unwanted ads through a custom HOSTS file (optimized for handling hundreds of thousands of domain names) with support for wildcards and regular expressions.

When you browse a web page, a portion of the loading time is dedicated to name resolution (usually from a few milliseconds to 1 second or more) while the rest is dedicated to the transfer of the page's contents and resources to your browser. What Acrylic does is reduce the time dedicated to name resolution for frequently visited addresses as close to zero as possible. It may not seem like a great optimization, but over a few weeks of Internet browsing you will probably save an hour or so, which is definitely not a bad thing. Furthermore, Acrylic's sliding expiration caching mechanism, simultaneous forwarding to multiple DNS servers, and support for background DNS updates improve your browsing experience independently of the browser.

With Acrylic you can also gracefully overcome downtimes of your DNS servers without disrupting your work, because in that case you will at least be able to connect to your favourite websites and to your email server.

Another good thing is that Acrylic is released as open source, which means that it's free and its source code, written with Borland Delphi 7, is freely available to anyone under the GNU General Public License.

What's new in version 0.9.31, released on April 13, 2016:

Solved a bug which prevented Acrylic from reading configuration entries larger than 2048 characters.
Improved resolving of A (IPv4) and AAAA (IPv6) requests from the Acrylic HOSTS file when one of the related entries is missing.
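Acrylic itself is written in Delphi and its internals are not shown here, but the sliding-expiration caching idea it describes can be sketched in Python: every cache hit pushes the entry's expiry forward, so frequently visited names stay resolved while stale ones age out. Class and parameter names below are illustrative, not Acrylic's.

```python
import time

class SlidingDNSCache:
    """Cache DNS answers; each hit renews the entry's time-to-live."""

    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._entries = {}  # name -> (address, expires_at)

    def put(self, name: str, address: str) -> None:
        self._entries[name] = (address, time.monotonic() + self.ttl)

    def get(self, name: str):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires_at = entry
        now = time.monotonic()
        if now >= expires_at:
            del self._entries[name]  # expired: force a fresh upstream lookup
            return None
        # Sliding expiration: a hit extends the entry's lifetime.
        self._entries[name] = (address, now + self.ttl)
        return address
```

The effect is that the names you use constantly are always answered from memory, which is what reduces resolution time "as close to zero as possible" for frequently visited addresses.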
(*) Hashes for file download verification 0.9.31

Acrylic.exe
MD5: 7fa64c2dfba39719a78daee5eb4e8f66
SHA-1: db8ea399c21549ad18d43f54951d9b8cb6416d9c
SHA-256: 3c9bc162be9a29462da18f7f0568ec1d3260476414b76b79ab8679fe743d60ea
RIPEMD-160: 543324de4f508cc4a656b9c85a63db00ee49131d

Acrylic-Portable.zip
MD5: 7b632a2a52b0c73b1a2a017ae90891cf
SHA-1: 7356beb173fd31d1857a196a2b89fabd72a7ed19
SHA-256: 821ba558f5a743e3ad73277c2ded5cc46ffdb63b4d35dad013c6626a6b6ffee5
RIPEMD-160: ab5ae9db49e4c41851a1c0b9c6cf59bab1b45150

Download: Installer | Portable
  24. Attack on NS1 sends 50 million to 60 million lookup packets per second.

Unknown attackers have been directing an ever-changing army of bots in a distributed denial of service (DDoS) attack against NS1, a major DNS and traffic management provider, for over a week. While the company has essentially shunted off much of the attack traffic, NS1 experienced some interruptions in service early last week. And the attackers have also gone after partners of NS1, interrupting service to the company's website and other services not tied to the DNS and traffic-management platform.

While it's clear that the attack is targeting NS1 in particular and not one of the company's customers, there's no indication of who is behind the attacks or why they are being carried out.

NS1 CEO Kris Beevers told Ars that the attacks were yet another escalation of a trend that has been plaguing DNS and content delivery network providers since February of this year. "This varies from the painful-but-boring DDoS attacks we've seen," he said in a phone interview. "We'd seen reflection attacks [also known as DNS amplification attacks] increasing in volumes, as had a few content delivery networks we've talked to, some of whom are our customers."

In February and March, Beevers said, "we saw an alarming rise in the scale and frequency of these attacks—the norm was to get them in the sub-10 gigabit-per-second range, but we started to see five to six per week in the 20 gigabit range. We also started to see in our network—and other friends in the CDN space saw as well—a lot of probing activity," attacks testing for weak spots in NS1's infrastructure in different regions.

But the new attacks have been entirely different. The sources of the attacks shifted over the week, cycling between bots (likely running on compromised systems) in eastern Europe, Russia, China, and the United States. And the volume of the attacks increased to the 30Gbps-to-50Gbps range.
While the attacks rank in the "medium" range in total volume, and are not nearly as large as previous huge amplification attacks, they were tailored specifically to degrade the response of NS1's DNS infrastructure. Rather than dumping raw data on NS1's servers with amplification attacks—where an attacker sends spoofed DNS requests to open DNS servers that result in large blocks of data being sent in the direction of the target—the attackers sent programmatically generated DNS lookup requests to NS1's name servers, sometimes at rates of 50 million to 60 million packets per second. The packets looked superficially like genuine requests, but they asked for resolution of host names that don't actually exist on NS1's customers' networks.

NS1 has shunted off most of the attack traffic by filtering it upstream, using behavior-based rules that differentiate the attacker's requests from actual DNS lookups. Beevers wouldn't go into detail about how that is done, out of concern that the attackers would adapt their methods to overcome the filtering.

But the attacks have also revealed a problem for customers of the major infrastructure providers in the DNS-based traffic management space. While from a client's perspective the DNS specification has largely gone unchanged since it was created, NS1 and other providers have made a lot of proprietary modifications to how DNS works behind the scenes, making it more difficult to use multiple DNS providers for redundancy. "We've moved a bit away from the interoperable nature of DNS," Beevers said. "You can't slave one DNS service to another anymore. You're not seeing DNS zone transfers, because features and functionality of the [DNS provider] networks have diverged so much that you can't transfer that over the zone transfer mechanism."
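NS1 keeps its behavior-based rules secret, but one widely used heuristic for spotting programmatically generated lookups of this kind is to score the character entropy of the queried label: pseudorandom subdomains score far higher than human-chosen names. This is a generic illustration of that class of rule, not NS1's method, and the threshold is an assumption.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    if not label:
        return 0.0
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_generated(qname: str, threshold: float = 3.5) -> bool:
    """Flag queries whose leftmost label (the part random-subdomain
    floods vary) has suspiciously high entropy."""
    return label_entropy(qname.split(".")[0]) > threshold
```

In practice a production filter would combine signals like this with NXDOMAIN rates per source and per zone, since entropy alone misclassifies some legitimate machine-generated names (e.g. CDN hashes).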
To overcome that issue, Beevers said, "people are pulling tools in-house to translate configurations from one provider to another—that did work very well for some of our customers [in shifting DNS during the attack]." NS1, like some of its competitors, also provides a service that allows customers to run the company's DNS technology on dedicated networks, "so if our network gets hit by a big DDoS attack, they can still have access."

Fixing the interoperability problem will become more urgent as attacks like the most recent one become more commonplace. But Beevers said it's not likely that the problem will be solved by a common specification for moving DNS management data. "DNS has not evolved since the '80s, because there's a spec," he said. "But I do believe there's room for collaboration. DNS is done by mostly four or five companies—this is one of those cases where we have a real opportunity, because the community is small enough and because the traffic management that everyone uses needs a level of interoperability."

As companies with big online presences push for better ways to build multi-vendor and multi-network DNS systems to protect themselves from outages caused by these kinds of attacks, he said, the DNS and content delivery network community is going to have to respond.

Article source
  25. A long-time hacker group is using DNS requests as a command-and-control mechanism in a new series of malware attacks, according to researchers at Palo Alto Networks.

The APT group Wekby, which has attacked numerous U.S. targets, usually pounces as soon as exploits are revealed. Palo Alto has dubbed the new malware family "pisloader" and said it is similar to the HTTPBrowser malware family. Additionally, it uses a number of obfuscation strategies to avoid the prying eyes of researchers.

It was delivered via HTTP from a still-active URL, and the initial dropper contained simple code "that is responsible for setting persistence via the Run registry key, and dropping and executing an embedded Windows executable," according to Palo Alto. This delivers the payload.

Another distinguishing characteristic of the pisloader malware family, Palo Alto said, is its use of return-oriented programming and other anti-analysis tactics. The Wekby group is still active, the researchers said.

The Source
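The article doesn't detail how DNS carries the command-and-control traffic, but the general DNS-tunneling technique used by this class of malware packs data into the subdomain labels of lookups that any resolver will forward. The sketch below shows only that encoding, for defensive analysis; it is not pisloader's actual code, and the domain name is hypothetical.

```python
import base64

C2_DOMAIN = "example-c2.test"  # hypothetical attacker-controlled zone

def encode_query(data: bytes) -> str:
    """Pack data into DNS labels: base32 keeps it in the DNS-safe
    charset, and each label must stay within the 63-octet limit."""
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    labels = [b32[i:i + 63] for i in range(0, len(b32), 63)]
    return ".".join(labels + [C2_DOMAIN])

def decode_query(qname: str) -> bytes:
    """Recover the data from a query name (server side)."""
    b32 = "".join(qname[: -len(C2_DOMAIN) - 1].split(".")).upper()
    b32 += "=" * (-len(b32) % 8)  # restore the stripped base32 padding
    return base64.b32decode(b32)
```

Because such lookups reach the attacker's authoritative name server through ordinary recursive resolution, the channel works even when direct outbound connections are blocked — which is why defenders monitor for long, high-entropy query names.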