Welcome to nsane.forums

Search the Community

Showing results for tags 'technology'.



Found 79 results

  1. Heard Of This Man Who Invented Email As a 14-Year-Old Boy?
Forgotten and abandoned: where is the Indian boy who invented email at 14? Do any of you remember Shiva Ayyadurai? A few of you might have heard of him as the person who claimed to have invented email. Now 53 years old, Shiva has been all but forgotten despite making one of the most controversial claims since the advent of the Internet.
V. A. Shiva Ayyadurai is an Indian-born American scientist and entrepreneur, notable for his controversial claim to be the "inventor of email". His claim is based on the electronic mail software he wrote as a New Jersey high school student in the late 1970s, which he called EMAIL. Shiva has since published a book challenging claims that the American computer programmer Ray Tomlinson invented email and asserting his own claim as its creator. According to Shiva, Tomlinson's creation in 1971 was a primitive form of text messaging, while he invented what we know and love as email in 1978, when he was a 14-year-old boy helping out the Newark dental school where his mother worked.
While Shiva has steadfastly held on to his claims, almost no one from the tech world has supported him, apart from Deepak Chopra, the new age spiritual guru, and India's prime minister Narendra Modi, who has posed for photographs with him. Shiva, now a lecturer at the Massachusetts Institute of Technology, copyrighted the term 'email' in 1982 but did not patent his software because, he says, it was not possible to patent software at that time. His creation was an inter-office mail system with an inbox, which he called 'email'.
While Ray Tomlinson went on to hog the limelight for email, Shiva was fighting a lone battle. Tomlinson's email was different from Shiva's: it enabled users on different computers connected to the primitive ARPANET, a precursor to the Internet, to send messages to each other, and it used the @ symbol to identify the computer from which a message was sent.
If you take Shiva for a patent troll, remember that he holds four degrees from MIT, including a Ph.D. in biological engineering, and is a Fulbright Scholar. He also won a $750,000 settlement from the now-defunct, Thiel-sued Gawker for publishing wrong information about him and his invention; it came as part of a broader settlement involving wrestler Hulk Hogan and journalist Ashley Terrill. While Shiva fights his lone battle, we use his invention daily without sparing a thought for its creator. We hope the world acknowledges Shiva's efforts, along with Tomlinson's, in making email a part of our daily lives, and that he receives suitable recognition. Source
  2. Rust Programming Language Takes More Central Role in Firefox Development
Starting with the release of Firefox 54, the Rust programming language will take a bigger role in the Firefox browser, as more and more components are built on top of this technology developed over the past years by the Mozilla Research team. For people unfamiliar with it, Rust is a new programming language created by a Mozilla employee, which the Foundation officially began sponsoring in 2009. In simple terms, Rust is a safer alternative to programming languages like C and C++, the languages at the base of Firefox and most of today's desktop software. Thanks to the way the language was designed, applications written in Rust have fewer memory-related errors and are safer to use.
Mozilla shipped the first Rust component in Firefox 48
After seven years of working on Rust, Mozilla shipped the first Rust component with Firefox in August 2016, when the language was used to rewrite the browser's multimedia stack, the module that deals with rendering audio and video files. At the time, Mozilla reported zero issues during tests. Since then, Mozilla engineers have been slowly replacing more and more Firefox core components with Rust-based alternatives. According to an entry in the Mozilla bug tracker, there is now so much Rust code in the Firefox core that, starting with Firefox 54, Mozilla developers will need the Rust compiler installed on their machines in order to compile a binary version of Firefox.
Mozilla might lose some Firefox users
According to Firefox developer Ted Mielczarek and others, this will lead to some problems, the biggest being that developers won't be able to compile binaries for platforms with smaller userbases, such as IBM's PPC64el and S390X, deployed at various companies around the world. The reason is that there is no Rust compiler for those platforms, which means Firefox builds will fail there.
The only way to fix this is for a Rust compiler to be developed for those platforms. Most Firefox users won't be affected by this change, and Mozilla hopes they'll see a boost in performance in the future. In the upcoming year, Mozilla plans to replace most of Firefox's core engine, called Gecko, with Rust components, through small changes across different versions. Developer Jen Simmons aptly described this very complex process in a blog post called "Replacing the Jet Engine While Still Flying." Source
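The memory-safety guarantees mentioned above come from Rust's ownership and borrowing rules, which the compiler enforces at build time. A minimal sketch of the idea (illustrative only, not Firefox code; the function names are made up):

```rust
fn main() {
    let frames = vec![1u8, 2, 3]; // `frames` owns this heap buffer

    // Shared borrow: `render` gets a read-only reference; no copy,
    // no transfer of ownership, and `frames` stays usable afterwards.
    let total = render(&frames);
    assert_eq!(total, 6);

    // Ownership move: `decode` takes the buffer by value...
    let decoded = decode(frames);
    assert_eq!(decoded.len(), 3);

    // ...so `frames` may no longer be used here. Uncommenting the next
    // line is a *compile-time* error ("borrow of moved value"), rather
    // than the runtime use-after-free it could become in C or C++:
    // println!("{}", frames.len());
}

// Read-only access; the caller keeps ownership of the buffer.
fn render(data: &[u8]) -> u32 {
    data.iter().map(|&b| b as u32).sum()
}

// Takes ownership; the buffer is freed when the new owner goes out of scope.
fn decode(data: Vec<u8>) -> Vec<u8> {
    data
}
```

Whole classes of bugs that plague C++ codebases like Gecko (dangling pointers, double frees, data races) are rejected before the program ever runs, which is why rewriting components in Rust reduces memory-related errors rather than just hiding them.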
  3. Canonical: 2017 Will See a Mir 1.0 Release, Plans to Implement Vulkan Support
2016 was a good year for Mir, says the company behind Ubuntu. As most of you are aware, Canonical develops its own display server for Ubuntu, called Mir, which is, in some ways, similar to the X.Org Server and Wayland technologies. While Ubuntu on the desktop still uses X.Org Server components, Mir is being heavily tested with the Unity 8 user interface that Canonical plans to ship by default in future desktop releases of Ubuntu Linux. Until now, however, Mir has only been successfully deployed on mobile devices, powering the Ubuntu Touch mobile operating system used on various official and unofficial Ubuntu Phone and Tablet devices.
According to Alan Griffiths, Software Engineer at Canonical, 2016 was a great year for Mir, and in 2017 the company plans to release the 1.0 milestone of the display server, which should implement the long-anticipated Vulkan support. "2017 will see a cleanup of our 'toolkit' API and better support for 'platform' plugin modules," said Griffiths. "We will then be working on upstreaming our Mesa patch. That will allow us to release our (currently experimental) Vulkan support."
Canonical is working on reducing latency for Mir
Canonical worked very hard in 2016 to improve its Mir display server by enabling client-side toolkits, applications, and libraries to work on Mir, and by upstreaming Mir support into GTK+ 3, Qt, SDL2, and Kodi. The company also created the Mir Abstraction Layer and released MirAL 1.0. For 2017, it plans to enable Mir on new platforms, upstream its Mesa patch, and support a new graphics API, namely Vulkan. Canonical is now working on reducing latency for Mir, and hopes that 2017 will be the year Mir becomes mature enough to be used on desktops, powering the next-generation Unity 8 user interface.
The company has not yet revealed an exact date for when Mir 1.0 will see the light of day, so we can only guess that it could launch sometime around the release of Ubuntu 17.04 (Zesty Zapus) in mid-April, as preparations begin for Ubuntu 17.10. Source
  4. Nokia Could Launch a Foldable Smartphone of Its Own
The Finnish company could join the innovation race. While Samsung is expected to launch its own foldable handsets, Apple is likely to order them from the companies that own the required technology. LG, on the other hand, will use its foldable panel technology inside products ordered by other companies, Apple for example. If Nokia wants to keep pace, it needs to come up with similar devices to compete with Samsung, Apple, and LG, as well as any other company that might introduce foldable smartphones.
In this regard, it appears that the Finnish company has already filed for a patent that describes a foldable device, which could very well be a smartphone. The patent granted by the US Patent & Trademark Office (USPTO) is not the first of its kind; Nokia has been filing similar patents for this type of technology since 2005, so we might not see a Nokia-branded foldable smartphone anytime soon. The main idea behind foldable technology is to provide customers with larger displays in a pocket-sized form factor. The patent reads: "In this way it is possible to provide a pocket size device with a relatively large display (for example, a 6, 7 or 8 inch display or larger)."
Until the time comes for Nokia to compete with other handset makers in innovation, the Finnish company is expected to reveal its second Android smartphone next month at Mobile World Congress. A flagship smartphone, known as the Nokia 8, will be introduced in late February, but it will probably not arrive on the market until March or April, just like other flagships from rival companies. Source
  5. Meet The GPD Pocket, A 7-inch Ubuntu Laptop
Do you have small hands? Are you a Borrower? Do you consider 10-inch netbooks to be monstrous? If so, the GPD Pocket may be right up your (very miniature) street. The GPD Pocket is a 7-inch laptop that's small enough to slip into a pocket, and it will apparently be available in two versions: with Windows 10, and with Ubuntu 16.04 LTS.
As reported on Liliputing, GPD (the company that makes the device) is currently only showing the device off in a few fancy renders and photos of a prototype unit. But GPD has form for releasing other, similar devices, like the GPD Win and various Android gaming portables, so although a novelty, this latest device is unlikely to be outright vapourware.
The GPD Pocket touts some impressive specifications for its size, including a quad-core Intel Atom x7-Z8700 processor (the same one used in the Microsoft Surface 3), 4GB of RAM and a high-res IPS touch display:
  • 7-inch IPS touch display
  • Intel Atom x7-Z8700 (quad-core @ 1.6GHz)
  • 4GB of RAM
  • 128GB of storage
  • 1x USB Type-C
  • 1x USB 3.0
  • Mini HDMI out
  • MicroSD card slot
  • Courage jack ("headphone port")
  • 7,000 mAh battery
The overall dimensions of the device mean you won't be able to hammer away on a full-sized keyboard, but the chiclet-style QWERTY one included (plus a ThinkPad-like mouse nub, as there's no room for a touchpad) looks perfectly serviceable for tweets, forum posts and some basic web browsing. Since I doubt anyone would use this as their primary device, issues with the keyboard size, lack of palm rest, and so on are unlikely to be primary considerations. No, the GPD Pocket is, as the name suggests, intended as the sort of device you literally slide into your pocket as you head out the door.
The "bad" news is that, like everything these days, GPD plans to crowdfund the GPD Pocket over on Indiegogo sometime in February. Currently there's no indication of pricing or a release date, but provided it's not weighted too heavily at the high end, it could make a nice midrange option for Linux hobbyists. Source
  6. Watch This Terrifying 13ft Robot Walk, Thanks To Ubuntu
The world's first manned robot took its first formative (and no doubt very loud) steps in South Korea last week, but you may be surprised to hear that Ubuntu was there to assist it. Standing an impressive 13 feet high, the bipedal Method-2 robot is referred to as a "robot-powered suit" that responds to and mimics the actions of the person sitting inside the cockpit, Ripley Power Loader style! The machine, which is able to walk like a human, has to haul huge 130 kg arms with each lunge forward, and weighs 1.5 tons in total.
Judging from a short video posted by Ruptly TV, Ubuntu is involved in helping engineers monitor, debug, and process data from the robot as it stomps forward. While there's no suggestion that the robot itself runs on Ubuntu or Linux (something that is not improbable), it's nonetheless great to see open-source software (especially of the flavor we write about) being used in advancements in robotics and engineering.
Around 30 engineers are said to have worked on the mechanical marvel, the design of which is, in part, inspired by films like Terminator, says its (famous) designer Vitaly Bulgarov. R&D spending on the creation has thus far hit $200 million, and news reports say the Method-2 could go on sale by the end of 2017, with an equally giant price tag of $8.3 million! For more details on the robot, including a glimpse at some truly epic sci-fi-esque photos of the machine in action, see this blog post over on Design Boom. And if you're lucky enough to get to try one, please don't run sudo snap install skynet on it! Source
  7. I posted this as a joke, but the sad truth is it has become fact.
  8. Bluetooth 5 Is Here
While Bluetooth technology is not perfect, it has greatly impacted the technology industry. Look no further than headphones and speakers to see that it has made wireless music possible. It is also the technology that links smartphones to smartwatches. Those are just two examples; there are countless more. Today, the Bluetooth Special Interest Group announced the official adoption of the previously-announced Bluetooth 5. In other words, it is officially the next major version of the technology, which will eventually be found in many consumer devices.
"Key feature updates include four times range, two times speed, and eight times broadcast message capacity. Longer range powers whole home and building coverage, for more robust and reliable connections. Higher speed enables more responsive, high-performance devices. Increased broadcast message size increases the data sent for improved and more context relevant solutions. Bluetooth 5 also includes updates that help reduce potential interference with other wireless technologies to ensure Bluetooth devices can coexist within the increasingly complex global IoT environment. Bluetooth 5 delivers all of this while maintaining its low-energy functionality and flexibility for developers to meet the needs of their device or application," says the Bluetooth Special Interest Group.
Mark Powell, executive director of the Bluetooth SIG, explains, "Bluetooth is revolutionizing how people experience the IoT. Bluetooth 5 continues to drive this revolution by delivering reliable IoT connections and mobilizing the adoption of beacons, which in turn will decrease connection barriers and enable a seamless IoT experience. This means whole-home and building coverage, as well as new use cases for outdoor, industrial, and commercial applications will be a reality.
With the launch of Bluetooth 5, we continue to evolve to meet the needs of IoT developers and consumers while staying true to what Bluetooth is at its core: the global wireless standard for simple, secure connectivity." So, will you start to see Bluetooth 5 devices and dongles with faster speeds and longer range in stores tomorrow? Nope -- sorry, folks. Consumers will have to wait until 2017; the Bluetooth SIG says devices should become available between February and June next year. Source | Alternate source: Bluetooth 5.0 Officially Introduced with Longer Range, Faster Speed
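For context on the "two times speed" claim (the specifics below are my reading of the Bluetooth 5 specification, not from the article): the speedup comes from a new LE 2M PHY that doubles the low-energy symbol rate from 1 Mbit/s to 2 Mbit/s, while "four times range" comes from a separate long-range coded PHY that trades speed for reach. A back-of-the-envelope sketch of what doubling the PHY rate means for raw transfer time:

```rust
// Nominal Bluetooth LE PHY rates, ignoring protocol overhead (which is
// substantial in practice, so real throughput is well below these figures).
// Assumed values: LE 1M = 1 Mbit/s (BT 4.x), LE 2M = 2 Mbit/s (new in BT 5).

/// Time in milliseconds to move `bytes` at a nominal `bits_per_sec` rate.
fn tx_time_ms(bytes: u32, bits_per_sec: u32) -> f64 {
    (bytes as f64 * 8.0) / bits_per_sec as f64 * 1000.0
}

fn main() {
    let payload = 100_000; // 100 kB, e.g. a firmware-update chunk
    let le_1m = tx_time_ms(payload, 1_000_000);
    let le_2m = tx_time_ms(payload, 2_000_000);
    println!("LE 1M: {:.0} ms, LE 2M: {:.0} ms", le_1m, le_2m);
    // Doubling the PHY rate halves the raw transmit (radio-on) time,
    // which is also why the faster PHY can reduce energy per byte.
    assert!((le_1m / le_2m - 2.0).abs() < 1e-9);
}
```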
  9. Researchers Turn Nuclear Waste Into Diamond Batteries That Could Last More Than 5,000 Years
Physicists and chemists from the University of Bristol's Cabot Institute have found a way to convert thousands of tons of seemingly worthless nuclear waste into man-made diamond batteries that can last for 5,000 years. The team also claims the "diamond batteries", which are enclosed inside other diamonds for safety, are not dangerous and release less nuclear radiation than a banana. Such long-lived diamond batteries could be used in implants such as pacemakers, in drones, satellites and spacecraft, and in other areas where long battery life is crucial. They could help solve the problems of nuclear waste, clean electricity generation, and battery life, say the researchers.
Professor Stephen Lincoln, an Honorary Visiting Research Fellow at the University of Adelaide, said that a breakthrough of this magnitude, if confirmed, would be revolutionary. "Nuclear waste is a huge issue; there would be something like 150,000 tonnes of high-level nuclear waste stored in various places around the world. It is stored temporarily, and the only permanent storage facility is in Finland, but there is nothing there yet. If you could use nuclear waste for generating power, the significant thing, on top of the fact that you don't need to find somewhere to store it, is it doesn't generate carbon dioxide, so it would be great for the environment."
The scientists found that heating the graphite blocks used to house uranium rods in nuclear reactors drives off much of the radioactive carbon as a gas. This can then be collected and turned into radioactive diamonds using a high-temperature chemical reaction, in which carbon atoms are deposited on the surface as small, dark-colored diamond crystals. When placed near a radioactive source, these man-made diamonds produce a small electrical charge.
The radioactive diamonds are then encased safely within a layer of non-radioactive diamond. The result requires no moving parts or maintenance, and can last for thousands of years without needing to be replaced. The team has already demonstrated a prototype diamond battery using nickel-63, but the researchers say carbon-14 will make a more effective radiation source. "Carbon-14 was chosen as a source material because it emits a short-range radiation, which is quickly absorbed by any solid material," said Dr Neil Fox from Bristol. "This would make it dangerous to ingest or touch with your naked skin, but safely held within diamond, no short-range radiation can escape. In fact, diamond is the hardest substance known to man; there is literally nothing we could use that could offer more protection." Watch the video below for more scientific details: Source
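The 5,000-year figure lines up with carbon-14's half-life of roughly 5,730 years. A quick decay calculation (my arithmetic, not the Bristol team's data) shows such a battery would still retain over half of its original activity after five millennia:

```rust
// Radioactive decay: remaining fraction = 0.5 ^ (elapsed / half_life).
// Carbon-14's ~5,730-year half-life is a standard physical constant;
// applying it to the battery's output is this illustration's assumption.

fn remaining_fraction(elapsed_years: f64, half_life_years: f64) -> f64 {
    0.5f64.powf(elapsed_years / half_life_years)
}

fn main() {
    let c14_half_life = 5730.0;
    let left = remaining_fraction(5000.0, c14_half_life);
    println!("Activity left after 5,000 years: {:.1}%", left * 100.0);
    // About 54-55% of the original activity (and thus roughly that share
    // of the original power output) remains after 5,000 years.
    assert!(left > 0.54 && left < 0.55);
}
```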
  10. SKEYE Nano 2 Is World's Smallest Flying Camera At Just 1.6 Inches Wide
SKEYE Nano 2 Drone: the world's smallest flying camera drone fits on your fingertips, priced at $99 (Rs. 6,600) for Techworm readers. Drones and the Internet of Things are two sectors of technology that are just taking off, with new advances coming every day. Drones are nowadays used for pretty much everything from delivering pizzas to taking selfies, and these small unmanned aerial vehicles are slowly taking the world by storm. They vary in shape, size, and features. While many drones vie for the title of world's smallest, have you come across one that is tiny, has a camera, and still packs a punch?
SKEYE, a manufacturer of quality drones and current holder of the title for the world's smallest drone (the SKEYE Nano), has now added another feather to its cap. The SKEYE Nano 2 is arguably the world's smallest flying camera drone at 1.6 inches wide. Measuring just 1.6 inches on each side and 0.9 inches in height, the 0.59-ounce drone is capable of flying in six directions and performing aerobatic stunts. The SKEYE Nano 2 is priced at $129 but is available to Techworm readers at $99. If you are interested, visit our Techworm Deals page and get the SKEYE Nano 2 for $99 for a limited period.
The Nano 2 is so simple to fly that even a non-technical person can handle it. Just push the button for takeoff and you can take it from there with the included controller, or sync it up to your smartphone via Wi-Fi. The SKEYE Nano 2 comes with its own app, which gives you real-time camera streaming with various controls. Its compact and lightweight design enables precision flying, thanks to a 6-axis flight control system with adjustable gyro sensitivity. Despite its size, the SKEYE Nano 2 houses an HD-capable, Wi-Fi-controlled camera able to feed back first-person-view video in real time.
The drone also has the ability to hover in one spot, allowing the user to adjust its pitch, yaw, and roll without worrying about it shooting off in the wrong direction. The Nano 2 also comes with built-in LED lights for night flying, and clips inside the controller when not in use. The SKEYE Nano 2 FPV Drone ships anywhere for free and is a can't-miss holiday gift idea. Pick one up for $30 off from Techworm Deals for a limited Black Friday promotion period. Source
  11. 10 Best Remote Desktop Software Alternatives For TeamViewer
Ever since TeamViewer got hacked, security experts have been warning users to ditch it or face problems, the reason being that TeamViewer is being used as a vector of attack. The same has happened on other sites that held no critical information: within 48 hours, everyone's logged-in sessions were logged out, and an email went around saying you had to click a link (to verify ownership) and set up two-factor auth, because they knew they were being targeted. TeamViewer must know it is being targeted, and the stakes are high, as the software allows complete access to a trusted machine (it is basically a master key), yet there hasn't been a single response with teeth from TeamViewer. TeamViewer's developers, however, say that the fault lies with the users. It would therefore be wise to look at the alternatives to TeamViewer that we list here.
Remote desktop access is a great way to manage the files on your desktop from any other location, and also to help your associates troubleshoot their problems remotely. In other words, a remote desktop is a program or operating system feature that allows a user to connect to a computer in another location, see that computer's desktop, and interact with it as if it were local. Remote desktops are also an excellent way to expedite deployments for developers; remote desktop applications are used to remotely configure data centers and are standard in industrial applications. While the advantages of remote desktop are too good to be overlooked, the right tools are needed to connect with your friends and family safely and securely. One of the most common and widely used remote desktop programs is TeamViewer, and we will be looking at a few alternatives that improve on this tool.
Why the need for a TeamViewer alternative?
While TeamViewer is a useful tool for getting started with remote desktop, it doesn't provide the simplicity and dependability expected from such a tool. Security is a primary concern that drives many users away from TeamViewer: if the average user doesn't configure the tool's settings correctly, their system is put directly at risk. And although the personal license is free, TeamViewer charges a heavy fee for the business version. Even though TeamViewer offers many useful features like file transfer, collaboration and mobile access, there are better alternatives should you decide to do away with it. Here are the 10 best alternatives to TeamViewer for your remote desktop activities.
Windows Remote Desktop Connection: Windows Remote Desktop Connection is a free feature built into the Ultimate and Business editions of the Microsoft Windows operating system that gives fast and complete access control over a remote PC. Supported on Windows and Mac OS X, the tool is simple, easy to use, and easy to set up, which makes it a great fit for beginners starting out with remote desktop. The setting can be accessed from the computer's System settings, found in the Control Panel. The remote PC's router needs to forward port 3389 to that machine so you can reach it from outside. However, this tool cannot control multiple PCs at a time. Visit website
Join.me: Developed by LogMeIn, Join.me is a premium online conferencing and meeting tool that allows multiple people in multiple locations to connect with each other at the same time; it is supported on Windows and Mac OS X. Join.me offers unlimited audio, which means that anyone can join a call from any device, whether over internet calling (VoIP) or phone lines. It also offers recording, one-click meeting scheduling, and phone numbers in 40 different countries to facilitate worldwide conferencing.
The paid versions allow up to 250 participants to join a meeting, and a presenter swap lets attendees take turns sharing their views. While the service is free for basic VoIP, it is $15/mo for the Pro plan and $19/mo for Enterprise plans with premium meetings and advanced management. Visit website
Splashtop: Splashtop offers free and paid remote desktop solutions for individuals and businesses alike, and is easy to use once you get past the difficulties with installation. Supported on Windows, OS X, Linux, Android and iOS, Splashtop's remote PC access software offers fast connections and multiple levels of security. You can use the tool for free for personal purposes on up to 5 computers. Setting up the tool on your Windows or Mac machine and accessing it remotely from your Android or iOS phone is something Splashtop does remarkably well. There is minimal latency on audio and video streaming, which makes it easy to enjoy your media remotely. It is free for 6 months; after that it is $1.99/month for individual use and $60/year per user for businesses. Visit website
RealVNC: RealVNC provides both free and paid versions of its remote desktop client. The software consists of server and client applications for the Virtual Network Computing (VNC) protocol, used to control another computer's screen remotely. Somewhat more complicated to set up than TeamViewer, RealVNC offers dependability and features like cross-platform remote control, VNC authentication, encryption, file transfer, collaboration tools and chat, to ensure that your remote connection bodes well for the person on the other end of it. The cross-platform utility allows you to connect to a single remote computer or to multiple PCs behind a public IP address. The supported platforms are Windows, Mac OS X, Linux, UNIX, Ubuntu, and Raspberry Pi.
The software is free for private use, while it is $30 per desktop for personal commercial use and $44 per desktop for enterprise use. Visit website
Ammyy Admin: Ammyy Admin is a free, fast and easy remote sharing and remote desktop control solution for individuals and businesses alike. Unlike heavyweight remote desktop software, the tool is a tiny application under 1MB. In addition to remotely connecting to another system, you can also perform actions like file transfers and live chat. Supported on Windows, its secure connections and easy-to-manage software make Ammyy Admin one of the most preferred free remote desktop clients. While it is free for non-commercial use, the Starter, Premium and Corporate licenses are priced at $33.90, $66.90 and $99.90 respectively. Visit website
UltraVNC: UltraVNC is a free tool based on VNC technology, developed for Windows systems to remotely access other machines. It offers a simple setup process that gets you connected in a matter of minutes. Once the connection is established, you can work on the remote system as if it were your own. The tool also supports file transfer, which makes it a useful little utility for quickly establishing a remote connection and getting your work done. Visit website
LogMeIn Pro: LogMeIn offers one of the best remote desktop solutions for Windows and Mac OS X, for individuals and businesses. Even though the free version of LogMeIn was discontinued recently, that doesn't keep it from being listed among the best premium alternatives to TeamViewer, with many key features like file transfer, audio and video streaming, full remote access, printing documents to a local printer, and sharing documents with collaborators.
Although priced somewhat lower than TeamViewer, LogMeIn offers features that are non-existent in many remote desktop applications of its kind. It is $99/year for individuals (access to up to 2 computers), $249/year for small businesses with 5 computers and $449/year for businesses with 10 computers. Visit website
WebEx Free: Cisco's WebEx free and premium tool allows you to remotely connect with people on different systems through free mobile or desktop applications. These remote desktop sessions, however, have to be attended on the other side. The meeting host can share their desktop and choose to pass control of the mouse and keyboard over to other presenters. File sharing, chat and face-to-face live interaction, scheduling in Outlook, and password-protected meetings are also supported. Available for Windows, Mac OS X, Linux and mobile apps, Cisco's WebEx is a great fit if you are looking for a premium business solution, or just for remotely accessing a desktop with an added bit of interaction. While it is free for 3 people, paid plans include Premium 8 ($24 per month for up to eight attendees), Premium 25 ($49 per month for up to 25 attendees) and Premium 100 ($89 per month for up to 100 attendees). Visit website
Chrome Remote Desktop: This free tool is available as an extension for the Google Chrome browser, is accessible on any operating system running Chrome, and is fully secured. Setup is simple, and the add-on costs nothing while providing an easy-to-use way to get your remote desktop connected. The add-on lets users remotely access any desktop and its contents right from their browser. Visit website
Mikogo: Mikogo is a great premium tool for businesses and individuals with commercial purposes, even though it is priced a bit heavily for business users. For meeting attendees the tool is completely browser-based and does not need software or plugin installations.
Mikogo provides its software as native downloads for Windows, Mac OS X, Linux, iOS and Android. The software is cross-platform, allowing a presenter to host an online meeting on a Windows computer while attendees join from a Windows, Mac, or Linux computer, as well as from smartphones or tablets. The interface is multilingual and can be switched between 35 languages. It also has other features such as document sharing, presenter swap, remote desktop control, free mobile apps, video conferencing, a multi-user whiteboard, file transfer, chat and more. It is $13 per month for 3 participants, $19 per month for 25 participants, $39 per month for 25 participants and unlimited hosts, and $78 per month for 25 participants with 3 session channels. Visit website Source
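Several of the tools above need a TCP port reachable before a session can be established (3389 for Windows Remote Desktop as noted earlier; 5900 is the common default for VNC-based tools). A minimal reachability check can be sketched like this (illustrative; the hosts and ports are placeholders, and an open port only proves something is listening, not that the service works):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

/// Returns true if a TCP connection to `host:port` succeeds within `timeout`.
fn port_open(host: &str, port: u16, timeout: Duration) -> bool {
    match (host, port).to_socket_addrs() {
        // Try every resolved address; any successful connect counts as open.
        Ok(addrs) => addrs
            .into_iter()
            .any(|addr| TcpStream::connect_timeout(&addr, timeout).is_ok()),
        Err(_) => false, // name resolution failed
    }
}

fn main() {
    let timeout = Duration::from_secs(2);
    // 3389 = Remote Desktop Protocol, 5900 = VNC default display.
    for (name, port) in [("RDP", 3389u16), ("VNC", 5900u16)] {
        println!(
            "{} on localhost open: {}",
            name,
            port_open("127.0.0.1", port, timeout)
        );
    }
}
```

A check like this is handy before blaming the remote desktop software itself: if the port is closed, the problem is usually a firewall or missing router port-forward rather than the tool.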
  12. Why Real Hackers Prefer Linux Over Windows And Mac Why do hackers prefer Linux over Mac, Windows, and other operating systems? We have published many tutorials for hackers and security researchers. You may have noticed that most tutorials are based on Linux operating systems. Even the hacking tools out there are based on Linux, barring a few which are written for Windows and Mac. The question here is: why do hackers prefer Linux over Mac or Windows? Today we look at the reasons why hackers always prefer Linux over Mac, Windows, and other operating systems. You may have your own reasons for choosing Linux, but what do hackers really look for while working with Linux? Reason #1: Command line interface vs graphical user interface Linux was designed around a strong and highly integrated command line interface. Windows and Mac don't have that. This grants hackers far greater access to and control over their system, along with superior customization. This is the reason most hacking and pentesting tools built for Linux have greater functionality, above and beyond their Windows counterparts. In contrast, Windows was built around the graphical user interface (GUI). This restricts user interaction to point-and-click navigation (slower) and application/system menu options for configuration. Windows does have command line tools, such as Command Prompt and PowerShell; however, these don't give hackers/developers the complete functionality and integration of Linux. This hampers their work, as hacking usually means going beyond the well-defined command lines. This is the reason that though hacking tools like Metasploit or nmap are ported to Windows, they don't have the same capabilities as on Linux. Compared to Windows, Linux is more granular. That means Linux gives users a near-infinite amount of control over the system. In Windows, you can only control what Microsoft allows you to control. 
In Linux, everything can be controlled from the terminal, at the most minuscule to the most macro level. In addition, Linux makes scripting in any of the scripting languages simple and effective. Reason #2: Linux is lighter and more portable This is arguably the best reason for choosing Linux over Mac and Windows. Hackers can easily create customized live boot disks and drives from any Linux distribution that they want. The installation is quick and it's light on resources. From memory, I can only think of one program that lets you create Windows live disks, and it wasn't nearly as light or as quick to install. Linux is made even lighter as many distros are specifically customized as lightweight distros. You can read about the top lightweight Linux distros here. Reason #3: Linux is typically more secure Ask a pro hacker or security researcher which operating system is the most secure of them all, and perhaps 101 out of 100 will unflinchingly swear by Linux. Windows is popular because of its reach among average users, and it is popular with programmers because it is more profitable to write a program for Windows. In more recent years, popularity has grown for UNIX-based operating systems such as Mac OS, Android, and Linux. As a result, these platforms have become more profitable targets for attackers. Still, Linux is a great deal more secure than Windows and Mac out of the box. Reason #4: Linux is pretty much universal Just about everything runs some form of UNIX (Internet of Things devices, routers, web servers, etc.). Doesn't it make sense to target those systems from a device running the same platform? After all, the goal is to make things easier on yourself. You don't want to worry about compatibility problems. Reason #5: Linux Is Open Source Unlike Windows or Mac, Linux is open source. What that means for us is that the source code of the operating system is available to us. As such, we can change and manipulate it as we please. 
If you are trying to make a system operate in ways it was not intended, being able to manipulate the source code is essential. Think of it this way: could you imagine Microsoft giving us a plug-in/MMC or whatever to manipulate or change the kernel of Windows for hacking? Of course NOT! Reason #6: Linux Is Transparent To hack effectively, you must know and understand your operating system and, to a large extent, the operating system you are attacking. Linux is totally transparent, meaning we can see and manipulate all its working parts. Not so with Windows. Actually, the opposite is true. Microsoft engineers work hard to make it impossible for users or hackers to find the inner workings of their operating system. On Windows, you are actually working with what Microsoft has given you rather than what you want. Here Linux differs philosophically from Microsoft. Linux was developed as an operating system to give users more control over it, rather than to make them do what the developers want. Summary: Linux vs Windows and Mac You have to understand that hackers and security researchers are here to make money. Hackers hack platforms that are profitable. Windows has been the preferred choice within enterprise environments and with the average consumer. It has also been the preferred choice for developers (given Apple's licensing costs and restrictions), which is why Windows enjoys such broad software compatibility. Apple has been too expensive for consumers, and Linux is frankly not that user-friendly (buggy, lack of GUI polish, etc.). You don't have an average Joe just switching on a Linux PC/laptop and doing whatever he wants. However, this is changing. With the arrival of Android smartphones, there has been a paradigm shift in users' preferences. As more users switch to Mac/iOS and Android/Linux, attackers will shift to targeting these platforms. With the Internet of Things predicted to be the next game-changer in tech, Linux will emerge as a formidable challenger to Microsoft's Windows and Apple's Mac. 
As of today, most Internet of Things connected devices are powered by Linux, and given the transparency and control available in Linux, that is likely to remain so. Hacking isn't for the uninitiated. Hacking is an elite profession within the IT field. As such, it requires an extensive and detailed understanding of IT concepts and technologies. At the most fundamental level, Linux is a requirement for hackers and security researchers. Source
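The command-line and scripting strengths described above are easy to illustrate. As a minimal sketch (the target host and port range are arbitrary examples, not from the article), a few lines of Python replicate the core idea behind a port scanner like the nmap tool mentioned earlier:

```python
import socket

def scan_ports(host, ports):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 on a successful connection, an errno otherwise
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Scan the well-known ports on the local machine:
print(scan_ports("127.0.0.1", range(1, 1025)))
```

A real scanner adds concurrency, service fingerprinting and raw-socket tricks, but the point stands: on Linux this kind of automation is a few lines in any scripting language.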
  13. Hey guys, I just came across this ad blocking service on Twitter, and it has some crazy witty tweets on its wall, which made me bring it here. Here's their site; it seems to be a product of PureVPN (they refer to it as "pops" on Twitter). What are your thoughts about it? https://www.purevpn.com/badadjohnny
  14. AMD Thinks Beyond Zen Chips As It Prepares For The Future A rendering of AMD's new Zen chip, which will only be supported by Microsoft's Windows 10. It looks like AMD will jump from the 14-nanometer process to the 7-nanometer node to manufacture chips. AMD has set high hopes for its upcoming Zen PC chips, but the company has now offered some clarity on how it will manufacture successor chips. Current Zen chips will be made using the 14-nanometer process, but the next important process for AMD is 7-nm, the company said. AMD has renewed an accord with spin-off GlobalFoundries that may include manufacturing those chips using that process. The 14-nm and 7-nm processes "are the important nodes" for AMD, said a company spokesman in an email. AMD is planning CPU, GPU, and custom chip manufacturing advances on those nodes. That's different from Intel, which is jumping to the 10-nm process next year before taking on 7-nm. GlobalFoundries' roadmap is unclear, though it has been internally developing 10-nm and 7-nm technologies. The 7-nm process is still considered many years out, and AMD didn't say if it was planning chips on the 10-nm process. It's likely the immediate successor to Zen will be on the 14-nm process. Chip planning is heavily dependent on the manufacturing process. Chip makers have to plan processor features based on what a manufacturing process is able to achieve. For example, Intel planned its first 22-nm chips with 3D transistors because the manufacturing process had the capability to etch the related features on chips. AMD planned its latest GPUs, code-named Polaris, and many of their features, like HBM (high-bandwidth memory), for the 14-nm process. The 7-nm process is considered a big advance in chip making. Intel plans to introduce EUV (extreme ultraviolet) lithography at 7-nm, which helps etch finer features on chips. EUV alleviates some of the issues involved in etching smaller and smaller features on chips. 
AMD modified an ongoing chip-manufacturing contract with GlobalFoundries to account for the advance to 7-nanometer technologies. AMD has modified the terms of the contract with GlobalFoundries through 2020, and will take a one-time charge of $335 million in the third quarter of 2016. The contract actually expires in 2024, but AMD has continuously modified it to align with its chip supply needs. Some poor planning has cost AMD millions of dollars in losses on this contract, but the company's chip shipments are on the upswing after years of losing market share to Intel in servers and PCs. Source
  15. Woman Shoots Drone: “It Hovered For A Second And I Blasted It To Smithereens.” Jennifer Youngman, 65, used a .410 bore shotgun like this to take out a drone. Woman used a .410 shotgun against trespassing aircraft thought to be paparazzi. With a single shotgun blast, a 65-year-old woman in rural northern Virginia recently shot down a drone flying over her property. The woman, Jennifer Youngman, has lived in The Plains, Virginia, since 1990. The Fauquier Times first reported the June 2016 incident late last week. It marks the third such shooting that Ars has reported on in the last 15 months—last year, similar drone shootings took place in Kentucky and California. Youngman told Ars that she had just returned from church one Sunday morning and was cleaning her two shotguns—a .410 and a 20 gauge—on her porch. She had a clear view of the Blue Ridge Mountains and neighbor Robert Duvall’s property (yes, the same Robert Duvall from The Godfather). Youngman had seen two men set up a card table on what she described as a “turnaround place” on a country road adjacent to her house. “I go on minding my business, working on my .410 shotgun and the next thing I know I hear ‘bzzzzz,’" she said. "This thing is going down through the field, and they’re buzzing like you would scaring the cows." Youngman explained that she grew up hunting and fishing in Virginia, and she was well-practiced at skeet and deer shooting. “This drone disappeared over the trees and I was cleaning away, there must have been a five- or six-minute lapse, and I heard the ‘bzzzzz,’" she said, noting that she specifically used 7.5 birdshot. “I loaded my shotgun and took the safety off, and this thing came flying over my trees. I don’t know if they lost command or if they didn’t have good command, but the wind had picked up. It came over my airspace, 25 or 30 feet above my trees, and hovered for a second. 
I blasted it to smithereens.” When the men began to walk towards her, she told them squarely: “The police are up here in The Plains and they are on their way and you need to leave.” The men complied. “They got in their fancy ostentatious car—I don’t know if it was a Range Rover or a Hummer—and left,” she said. The Times said many locals believe the drone pilots may have been paparazzi or other celebrity spotters flying near Duvall's property. Youngman said that she recycled the drone but remained irritated by the debris it left behind. "I’ve had two punctures in my lawn tractor," she said. The Fauquier County Sheriff’s Office said it had no record of anyone formally complaining about this incident. When Ars asked if the office had heard of any other similar incidents in the region, Sgt. James Hartman replied: "It's happened around the country but not in this region to my knowledge." A gray zone For now, American law does not recognize the concept of aerial trespass. But as the consumer drone age has taken flight, legal scholars have increasingly wondered about this situation. The best case law on the issue dates back to 1946, long before inexpensive consumer drones were technically feasible. That year, the Supreme Court ruled in a case known as United States v. Causby that a farmer in North Carolina could assert property rights up to 83 feet in the air. In that case, American military aircraft were flying above his farm, disturbing his sleep and upsetting his chickens. As such, the court found he was owed compensation. However, the same decision also specifically mentioned a "minimum safe altitude of flight" at 500 feet—leaving the zone between 83 and 500 feet as a legal gray area. "The landowner owns at least as much of the space above the ground as he can occupy or use in connection with the land," the court concluded. Last year, a pilot in Stanislaus County, California, filed a small claims lawsuit against a neighbor who shot down his drone and won. 
However, it is not clear whether the pilot managed to collect. Similarly, a case ensued in Kentucky after a man shot down a drone that he believed was flying above his property. The shooter in that case, William Merideth, was cleared of local charges, including wanton endangerment. But earlier this year, the Kentucky drone's pilot, David Boggs, filed a lawsuit asking a federal court in Louisville to make a legal determination as to whether his drone’s flight constituted trespassing. Boggs asked the court to rule that there was no trespass and that he is therefore entitled to damages of $1,500 for his destroyed drone. The case is still pending. Youngman said she believed in Second Amendment rights and was also irritated that people would try to disturb Duvall. “The man is a national treasure and they should leave him the fuck alone,” she said. Source My Comments: What a shot! I like it. Drones should stick to the roads; they have no right over private property. They fly low, which disturbs residents and creates fear among them.
  16. Is it safe to charge a Nokia phone with a Samsung charger? Like this, there are many unclear questions and myths in the technology world. There are many myths and common technology misconceptions that people need to know about. Some of them are known worldwide, and every age group believes them. Below we will try to increase your computing knowledge. Stick with us… Through this article, we are trying to debunk some of the biggest misconceptions. The popular technology myths (basic level) are discussed in the following list. Feel free to share your knowledge via comments. 15. Smartphone Battery Saver Apps Are Efficient Never listen to friends who give you trite advice about saving your smartphone's battery. If you are using a phone with lots of features, then it will automatically use battery power (for background processing) to run. It would be strange to avoid using features such as GPS or Bluetooth just to save battery, because these things are what make your phone smart. However, there are some apps that help you save battery by putting unnecessary apps into sleep mode, and most of them are preinstalled, like Stamina mode on Sony phones. So you don't need to download any additional power-saver application. Also, developers have included push notifications in their apps. So never fret about the battery; use your phone with all its features. 14. Web Cookies Are Dangerous Many people think that web cookies are dangerous and should be avoided to keep your computer safe. They are just plain text files that websites store on your computer to remember your browsing history. Cookies never get access to your computer or execute any application. They are generally used to track information about the websites you visited in the past. Cookies are not dangerous, and if you are still worried about your privacy, then install personal firewall software. 13. 
Expensive Cables Mean High Quality Pictures When you go to any electronics store, you will find thousands of different types of cables. A salesperson will always try to convince you to buy the more expensive option, but this doesn't mean that the cheap ones are bad. Most people believe that expensive HDMI cables for audio and video playback are always better, but this is not true. Many popular gadget communities have stated that there is not much difference between cheap and expensive HDMI cables. You don't need a special cable for a 4K television; a standard HDMI cable can also deliver high quality at 4K. 12. Refurbished Products Aren't As Good As New Ones Most of us think that refurbished products, which a company fixes and resells in the market, are usually defective. Returned units may have served as display models or been repaired after a fault; once restored, they can be sold as refurbished products with a low price tag. This is a great opportunity to save money. Many refurbished products are just as good as new ones, but you need to do a little research to find the best offer. 11. Is It Bad to Use a Different Phone Charger? A few official sources have claimed that a standard charger can also charge other devices of the same rating. Some other experts claim that this could stress the battery, but that will only happen if you do it regularly over a year. Now almost every gadget has the same charging port, and most people charge them using another gadget's charger. Overall, I recommend using the default charger for a longer device life. 10. A Bigger Smartphone Screen Is Better If there are two phones with the same features but different screen sizes, most people will choose the bigger one. But what are the benefits of the larger screen? Most smartphone companies focus on ultra-dense displays and high pixel density. A few companies, like Apple, focus on the brightness and clarity of the screen. 
So it totally depends on the nature of usage: if you want to watch movies, go for the bigger one; otherwise, select according to CPU and GPU performance. 9. Leaving a Phone Plugged In Destroys the Battery Most people leave their phones plugged in overnight and forget about them. This might affect your battery. Almost all smartphones now run on lithium-ion batteries, and they are smart enough to stop charging when full. That means charging switches off when the phone hits 100%, and leaving it plugged in would only destroy your smartphone's battery if there were a defect in the hardware. If you are not using the default charger, you should unplug it, because third-party chargers may not be compatible with the phone's shutoff mechanism. Some cheap knockoff chargers may damage your battery because they continue to push charge even after the battery reaches its maximum limit. 8. Computer Capabilities Depend on the Processor's Clock Speed Do you also think that a 3.1 GHz Pentium 4 is more powerful than a 2.9 GHz Core 2 Duo processor? This misconception is known as the Gigahertz Myth: using clock speed alone to compare two processors. A computer's processing speed depends on many factors, including processor architecture, operating system, high-level design and the efficiency with which it executes instructions per second. There are many other factors, such as memory bandwidth, heat dissipation and on-board cache memory. That's the reason people prefer a 2.5 GHz Intel i5 over a 3.1 GHz Core 2 Duo. 7. Deleting Something From the Recycle Bin Means It's Gone Forever If you think that deleting files from the Recycle Bin removes your data permanently from the computer, you might be wrong. A file deleted this way is not gone completely. Deleted files can often be restored using recovery utilities, though not every file can be recovered. These utilities locate the file's data still present on the hard disk and restore it. 
Moreover, a system restore point is a good option for recovering accidentally deleted files. 6. Macs Are Always Safe From Viruses Apple computers are also susceptible to Trojans and other malware. Millions of people think that only Windows PCs can be affected by viruses because they are the most widely used among internet users. But in 2012, thousands of Apple computers were affected by some of the deadliest Mac malware. Below I have mentioned some Mac malware with its effects. Flashback (2012): It gets access to the computer through a malicious link and harvests usernames and passwords used for banking transactions. Scareware (2011): It enters the user's computer through fake Mac Defender utilities and then pushes adult popups, offering to "fix" them. 5. Install Software to Run Your Computer Faster A few years ago, everyone had a PC with 256 MB of RAM and a Pentium processor, and all of us wanted to speed up the computer without upgrading it. Most of those tricks are useless, and some can even degrade the PC's performance. Now everyone has RAM in gigabytes and hard disks in terabytes, but people never let go of this myth. That's the reason they end up installing fancy-looking malicious programs from the Internet. 4. WiFi Networks Are Never Hacked If you have a password on your WiFi network, don't assume it is fully protected from WiFi hackers. Hackers have many ways into a WiFi network, so I recommend securing yours with strong encryption. Note that the old Wired Equivalent Privacy (WEP) standard, which encodes the messages exchanged with other devices, is easily cracked; use WPA2 instead. However, no setup is completely secure, so avoid sharing crucial data over WiFi networks. 3. Is Jailbreaking an iPhone Legal or Not? Jailbreaking an iPhone or any other iOS device is legal in many places, including the US. It is just used to remove restrictions on the iOS device, and this has been officially confirmed by the US Copyright Office. I want to add one more important thing: unlocking a carrier-locked iPhone without permission may be illegal. 
Unlocking an iPhone means making it free to work on any carrier, i.e. a different GSM carrier. 2. Which Runs Faster: 32-bit or 64-bit? Do you think a 64-bit processor is automatically faster than a 32-bit processor? Not necessarily: there is no inherent speed advantage in a 64-bit processor over a 32-bit one. A 32-bit system can use at most 4 GB of RAM. Theoretically, a 64-bit processor can address up to 16 exabytes of RAM, but this is impractical. This would require a 6500 km wide motherboard. Currently, around 8 TB is the practical limit on 64-bit systems. The 64-bit version is mainly useful for handling applications that need more than 4 GB of RAM and for addressing additional RAM. So if you are using less than 4 GB of RAM, you are not really getting any benefit from a 64-bit system. 1. More Megapixels = A Better Camera I put this myth at number one because most people buy a camera or smartphone based on megapixels. This is one of the biggest and most popular myths, as most of us would pick a 16 MP camera over a 10 MP camera. Beyond megapixels there are many factors in a good picture, including the quality of the sensor, the lens and the sharpness of the image. The megapixel count only describes the size of the image. So don't think a 41 MP Nokia phone will take better pictures than a 13 MP Sony device. Source
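Two of the myths above come down to simple arithmetic. A quick sketch (the example sensor resolution is hypothetical, chosen only to illustrate the megapixel math) shows why a 32-bit system tops out at 4 GB and why megapixels measure only image size:

```python
# A 32-bit pointer can address 2**32 distinct bytes:
addressable_bytes = 2 ** 32
print(addressable_bytes / 2 ** 30)  # 4.0 -> exactly 4 GiB

# Megapixels are just width x height; e.g. a hypothetical 4608x3456 sensor:
megapixels = 4608 * 3456 / 1_000_000
print(round(megapixels, 1))  # 15.9 MP -- says nothing about sensor or lens quality
```

The same math explains the 64-bit figure in the text: 2**64 bytes is 16 exabytes, far beyond what any motherboard can physically hold.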
  17. New GDDR6 Memory Could Hit GPUs In 2018 Samsung believes the days of good old GDDR5 are numbered as VR and games demand better graphics. Virtual reality and gaming are changing the way PCs are built and driving the development of new types of memory for GPUs. A successor to the GDDR5 memory used in most GPUs -- called GDDR6 -- will be on its way by 2018, according to a presentation by Samsung executive Jin Kim at the Hot Chips conference this week. GDDR6 will be a faster and more power-efficient form of graphics memory. GDDR6 will provide throughput of around 14Gbps (gigabits per second) per pin, up from 10Gbps with GDDR5. Although Samsung has targeted 2018 for GDDR6, new graphics memory usually takes a long time to reach the market, so the estimate may be aggressive. GPUs will need to be designed for the new memory, and components will need to be validated and tested, all of which takes time. Applications like VR and gaming are putting a heavy load on GPUs, which are under pressure to deliver the best graphics. VR headsets like the Oculus Rift and HTC's Vive only work with premium GPUs. GDDR6 will help GPUs deliver faster performance while drawing less power. The need for more GPU performance is already changing GPUs. New types of memory like HBM (High-Bandwidth Memory) and GDDR5X, which offer faster bandwidth, are already being used in new GPUs from AMD and Nvidia. GPUs with HBM and other new memory are still priced at a premium. But GDDR6 -- like GDDR5 -- could be used in low-priced GPUs. It'll also be easier for GPUs to transition from GDDR5 to GDDR6 or GDDR5X than to HBM, which redefines the memory subsystem. It's clear that Samsung is putting its weight behind GDDR6, while rival Micron is backing GDDR5X. Nvidia's GeForce GTX 1080 GPU has GDDR5X memory. Samsung also backs HBM. GPUs are also getting faster throughput, driving a need for faster memory. 
Faster memory helps GPUs process graphics faster, and the graphics can then be sent to memory, CPU, and storage via quicker interconnects like Nvidia's NVLink or the upcoming PCI-Express 4.0. Advances in manufacturing have also created the need for new GPU memory. Some of the latest GPUs based on Nvidia's Pascal and AMD's Polaris architectures are manufactured with new techniques including FinFET, a 3D transistor structure. New memory like HBM, which stacks memory chips, and GDDR6 are designed for such new chip structures, while GDDR5 memory was designed for older GPUs made using older manufacturing technologies that don't use stacked chips. Source
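The per-pin rates quoted above translate into total bandwidth once multiplied by the memory bus width. A back-of-the-envelope sketch (the 256-bit bus is an assumed example, not a figure from the article):

```python
def peak_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate times bus width, over 8 bits/byte."""
    return per_pin_gbps * bus_width_bits / 8

# On an assumed 256-bit memory bus:
print(peak_bandwidth_gbs(10, 256))  # GDDR5-class at 10Gbps/pin -> 320.0 GB/s
print(peak_bandwidth_gbs(14, 256))  # GDDR6-class at 14Gbps/pin -> 448.0 GB/s
```

So the jump from 10 to 14Gbps per pin is a 40% bandwidth increase on the same bus width, which is what makes GDDR6 attractive without the redesigned memory subsystem HBM requires.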
  18. MIT Creates Mobile Phones Which Assemble Themselves The devices can pull themselves together in less than a minute. Researchers from MIT have created a mobile device which can assemble itself in a matter of moments. The prototype, developed by scientists from the Massachusetts Institute of Technology (MIT)'s Self-Assembly Lab, is composed of six separate parts which assemble into two different mobile devices. Even in unstable environments -- such as being tossed around a tumbler -- the device is able to assemble itself within a few minutes. As reported by Fast Co.Design, the principle behind the self-assembling device is simplicity. To begin with, the tumbler has to be going fast enough that the components meet but do not break. The device's components all have lock-and-key mechanisms which, like puzzle pieces, only allow the proper connections to take place and reject the wrong ones. Finally, the parts need to "stick," and so the team used magnets to make sure the right parts were attracted to each other. If such technology were adopted in the mainstream, it could have serious implications for the manufacturing industry. MIT says the cost of automation could be reduced at scale, removing the need to shift labor overseas -- or to have workers at all. Either way, jobs could eventually be replaced with automation, and assembly-line staff at electronics factories could one day be a thing of the past. However, the researchers say the possibilities for these kinds of designs are limitless and could give vendors more freedom to design and create better, more innovative products. "Right now the phone is predetermined, and we're using this process to assemble that phone," said Skylar Tibbits, one of the researchers working on the project. "But imagine you take a circuit board and you have different logical building blocks and those logical building blocks can be tumbled around -- you can have different functionalities." 
MIT is not the only institution exploring the possibilities of modular consumer products. This year, Google revealed that Project Ara, the tech giant's modular smartphone, will be released in 2017. Source Alternate Source - A Cellphone That Can Self-assemble Itself
  19. ARM Has A New Weapon In Race To Build World's Fastest Computers ARM's new supercomputer chip design with vector extensions will be in Japan's Post-K computer, which will be deployed in 2020. ARM conquered the mobile market starting with Apple's iPhone, and now wants to be in the world's fastest computers. A new ARM chip design being announced on Monday is targeted at supercomputers, a lucrative market in which the company has no presence. ARM's new chip design, which has mobile origins, has extensions and tweaks to boost computing power. The announcement comes a few weeks after Japanese company Softbank said it would buy ARM for a mammoth $32 billion. With the cash, ARM is expected to sharpen its focus on servers and the internet of things. ARM's new chip design will help the company on two fronts. ARM is sending a warning to Intel, IBM, and other chip makers that it too can develop fast supercomputing chips. The company will also join a race among countries and chip makers to build the world's fastest computers. The chip design is being detailed at the Hot Chips conference in Cupertino, Calif., on Monday. Countries like the U.S., Japan, and China want to be the first to reach the exascale computing threshold, at which a supercomputer delivers 1 exaflop of performance (a million trillion calculations per second). Intel, IBM, and Nvidia have also been pushing the limits of chip performance to reach that goal. Following Softbank's agreement to buy ARM, it should come as no surprise that the first supercomputer based on the new chip design will be installed in Japan. The Post-K supercomputer will be developed by Fujitsu, which dropped a bombshell in June when it abandoned its trusty Sparc architecture in favor of ARM for high-performance computers. Fujitsu aided ARM in the development of the new chip. Post-K will be 50 to 100 times speedier than its predecessor, the K Computer, which is currently the fifth fastest computer in the world. 
The K Computer delivers 10.5 petaflops of peak performance with the Fujitsu-designed SPARC64 VIIIfx processor. The new ARM processor design will be based on the 64-bit ARM-v8A architecture and have vector processing extensions called Scalable Vector Extension. Vector processors drove early supercomputers, which then shifted over to less expensive IBM RISC chips in the early 1990s, and on to general-purpose x86 processors, which are in most high-performance servers today. In 2013, researchers said less expensive smartphone chips, like the ones from ARM, would ultimately replace x86 processors in supercomputers. But history has turned, and the growing reliance on vector processing is seeing a resurgence with ARM's new chip design and Intel's Xeon Phi supercomputing chip. The power-efficient chip design from ARM could crank up performance while reducing power consumption. Supercomputing speed is growing at a phenomenal rate, but the power consumption isn't coming down as quickly. ARM's chip design will also be part of an influx of alternative chip architectures outside x86 and IBM's Power entering supercomputing. The world's fastest supercomputer called the Sunway TaihuLight has a homegrown ShenWei processor developed by China. It offers peak performance of 125.4 petaflops. ARM has struggled in servers for half a decade now, and the new chip design could give it a better chance of competing against Intel, which dominates data centers. Large server clusters are being built for machine learning, which could use the low-precision calculations provided by a large congregation of ARM chips with vector extensions. ARM servers are already available, but aren't being widely adopted. Dell and Lenovo are testing ARM servers, and said they would ship products when demand grows, which hasn't happened yet. ARM server chip makers are also struggling and hanging on with the hope the market will take off someday. 
AMD, which once placed its server future on ARM chips, has reverted to x86 chips as it re-enters the server market. Qualcomm is testing its ARM server chip with cloud developers, and won't release a chip until the market is viable. AppliedMicro scored a big win with Hewlett Packard Enterprise, which is using its ARM server chips in storage systems. Other ARM server chip makers include Broadcom and Cavium. Source
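To put the exascale race above in numbers: 1 exaflop is 1,000 petaflops, so the article's own figures can be sanity-checked in a few lines (a back-of-the-envelope sketch using only numbers quoted in the article):

```python
PETAFLOP = 10 ** 15
EXAFLOP = 10 ** 18
print(EXAFLOP // PETAFLOP)  # 1000 -> an exascale machine is 1,000 petaflops

k_computer = 10.5  # K Computer peak, in petaflops
# Post-K is projected at 50 to 100 times its predecessor:
print(50 * k_computer, 100 * k_computer)  # 525.0 1050.0 -> roughly half to one exaflop

sunway = 125.4  # Sunway TaihuLight peak, in petaflops
print(round(sunway / k_computer, 1))  # ~11.9x the K Computer
```

In other words, only the high end of the Post-K projection would cross the exascale threshold, which is why the 2020 deployment date matters in this race.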
  20. China’s ‘Elevated Bus’: The Future Of Public Transport Is Here China demos elevated bus that glides above traffic Back in May, we reported how TEB Technology, a Beijing-based company, was working on a scale model of a road-straddling bus that looks like an overgrown monorail and would allow traffic to pass underneath it. We had also noted that a trial of a full-size version was expected on urban Chinese roads around August this year. True to their word, the Transit Elevated Bus (TEB) took its first test ride on Tuesday in Qinhuangdao, a port city in northeast China, according to China’s Xinhua News. The electrically powered vehicle’s brake and power systems were tested on a 300m-long controlled track. The TEB concept is designed to help alleviate China’s massive traffic problems and reduce congestion and vehicle emissions – a growing problem in China. The TEB has a roomy interior that is over 72 feet long and 25 feet wide. It’s roughly 16 feet tall, and offers about 7 feet of clearance underneath for cars to travel through. The bus can carry up to 300 passengers, rides along tracks embedded in the street, and is supposed to reach 40 to 50 km/h (about 25 to 31 mph). The TEB runs on sixteen rubber-tired wheels and is guided by eight pairs of rail wheels. The bus was first shown in 2010, and was re-proposed at this year’s 19th International High-tech Expo in Beijing. Song You Zhou, the designer and chief engineer of the TEB, says prototypes are being constructed, and that five cities — Qinhuangdao, Nanyang, Tianjin, Shenyang, and Zhoukou — have signed contracts with his TEB Technology Development Company for pilot projects. Back in May, Song had told WCC Daily that if all goes well, it will be only a year to a year and a half before the vehicle enters the market. With the prototype arriving as promised, it will be interesting to see whether the company’s full-size TEB can hit the market on time. Source
  21. 5 Facts Every Computer Programmer Should Know What are the things that a programmer must know (obviously besides programming languages)? The word “computer” was first recorded as being used in 1613, in a book called The Yong Mans Gleanings by English writer Richard Braithwaite, to describe a human who performed calculations or computations. In the book he said, “I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number.” (The spelling is period-accurate, by the way; it was a simpler time back then.)

1. The first “pre-computers” were powered by steam In 1801, a French weaver and merchant, Joseph Marie Jacquard, invented a power loom that could base the design of a fabric upon punched wooden cards. Fast forward to the 1830s, and the world marvelled at a device as large as a house and powered by six steam engines. It was invented by Charles Babbage, the father of computing – and he called it the Analytical Engine. Babbage used punch cards to make the monstrous machine programmable. The machine consisted of four parts: the mill (analogous to a CPU), the store (analogous to memory and storage), the reader (input) and the printer (output). It is the reader that makes the Analytical Engine innovative. Using the card-reading technology of the Jacquard loom, three different kinds of punched cards were used: operation cards, number cards and variable cards. Babbage sadly never managed to build a working version, because of ongoing conflicts with his chief engineer. It seems even back then, CEOs and devs didn’t get along.

2. The first computer programmer was a woman In 1843, Ada Lovelace, a British mathematician, published an English translation of an Analytical Engine article written by Luigi Menabrea, an Italian engineer. To her translation, she added her own extensive notes. In one of her notes, she described an algorithm for the Analytical Engine to compute Bernoulli numbers. 
Since the algorithm was considered to be the first written specifically for implementation on a computer, she has been cited as the first computer programmer. Did Lovelace go on to a life of TED Talks (or whatever the Victorian equivalent was)? Sadly not; she died at the age of 36, but her legacy thankfully lives on.

3. The first computer “bug” was named after a real bug While the term ‘bug’ in the sense of a technical error was first coined by Thomas Edison in 1878, it took nearly another 70 years for the term to be popularized. In 1947, Grace Hopper, an admiral in the US Navy, recorded the first computer ‘bug’ in her log book while working on a Mark II computer. A moth was discovered stuck in a relay, hindering the machine’s operation. The moth was ‘debugged’ from the system and taped into her log book, where she wrote, “First actual case of bug being found.”

4. The first digital computer game never made any money What is considered the forefather of today’s action video games, and the first digital computer game, wasn’t particularly successful. In 1962, a computer programmer from the Massachusetts Institute of Technology (MIT), Steve Russell, and his team took 200 man-hours to create the first version of Spacewar. Using the front-panel test switches, the game allowed two players to take control of two tiny spaceships. Your mission was to destroy your opponent’s spaceship before it destroyed you. In addition to avoiding your opponent’s shots, you also had to avoid the small white dot at the centre of the screen, which represented a star. If you bumped into it, boom! You lost the battle. Russell wrote Spacewar on a PDP-1, an early Digital Equipment Corporation (DEC) interactive minicomputer which used a cathode-ray tube display and keyboard. Significant improvements were made later in the spring of 1962 by Peter Samson, Dan Edwards and Martin Graetz. 
Although the game was a big hit around the MIT campus, Russell and his team never profited from it: they never copyrighted it. They were hackers who had built it to show their friends, so they shared the code with anyone who asked for it.

5. The computer virus was initially designed without any harmful intentions In 1983, Fred Cohen, best known as the inventor of computer virus defense techniques, designed a parasitic application that could ‘infect’ computers. He called it a computer virus. The virus could seize a computer, make copies of itself and spread from one machine to another via a floppy disk. The virus itself was benign, created only to prove that such a thing was possible. Later he created a benign ‘compression virus’, which could find uninfected executables, compress them with the user’s permission and attach itself to them. Thanks for reading the post, hope you liked it. Source
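Fact 2's Bernoulli numbers are still a nice first exercise today. The sketch below is a minimal modern take in Python using the standard recurrence (it is not a reconstruction of Lovelace's actual Note G procedure, which was written for the Analytical Engine's operation cards):

```python
from fractions import Fraction
from math import comb  # Python 3.8+


def bernoulli(n):
    """Return [B_0, B_1, ..., B_n] as exact fractions, using the
    recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0 for m >= 1, B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))  # solve the recurrence for B_m
    return B


# In this convention: B_1 = -1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30
assert bernoulli(4) == [Fraction(1), Fraction(-1, 2), Fraction(1, 6),
                        Fraction(0), Fraction(-1, 30)]
```

Using `Fraction` keeps the results exact; floating point would accumulate error quickly, since the numbers' numerators and denominators grow fast.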
  22. For $800 You Can Buy Internet Engineers' Answer To US Government Spying Open-source CrypTech board launches in Berlin The long-awaited response from internet engineers to Edward Snowden's revelations of mass surveillance by the US government has been launched in Berlin. The CrypTech project launched an alpha prototype of its open-source crypto-vault at the 96th meeting of the Internet Engineering Task Force (IETF), and held a two-day workshop prior to the meeting to walk a closed group of net nerds through it. The prototype will be shown at several encryption sessions at the conference later this week, and the team is selling a small initial batch of the cards – between 25 and 50 of them – online for $800. "Building open-source hardware is expensive," the group notes. The units will be shipped in September. At the time of writing, just two had been sold. CrypTech describes itself as "an open hardware cryptographic engine that meets the needs of high assurance Internet infrastructure systems that rely on cryptography." It was launched in December 2014. Despite some heavy backing from Google, Cisco and Comcast, it put out a request for funds in April last year to keep moving forward. Those funds arrived and the prototype works, the small group of testers and developers announced (having squashed a few bugs). It runs on both open hardware and software designs. Details The CrypTech team is diverse: as well as the United States, it has members in Germany, Japan, Russia and Sweden. It has proposed a $1m-a-year budget and a three-year plan to launch and improve its product. The plans will be open-source and the license will "enable use and reuse," according to the team. The board itself is basically a classic hardware security module (HSM) designed to perform strong cryptography away from prying eyes. It securely stores public/private key pairs used in digital certificates. Applications running on other computers talk to the board via PKCS#11 over USB. 
The board performs the necessary operations, such as digital signing for DNSSEC, for the applications without the secret private keys leaving the CrypTech hardware. Thus, the keys are physically kept separate from the software using them, so if the app code is compromised, the protected digital certificates are not: they remain exclusively inside the HSM. The alpha board comprises software and configurable hardware that can carry out a range of cryptographic operations. It contains an ARM Cortex-M4 processor and a programmable chip – an Artix-7 field-programmable gate array – that can support applications with high-security signing requirements. In keeping with its open approach, the CrypTech team has put the presentation from its workshop online, and continues to outline its progress in some depth on its website and wiki. Those at the workshop spent most of their time installing and configuring the device and testing the DNSSEC security protocol. The team also has a list of improvements it is working on, including the addition of a battery backup for the device. Source
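The key-isolation idea behind an HSM like CrypTech's board is easy to sketch: the application hands data across a boundary and gets back a signature, but the private key never crosses that boundary. The class below is a purely illustrative toy (textbook RSA with tiny primes, invented names like `ToyHSM`); real applications talk to CrypTech via PKCS#11, and this is not that API.

```python
import hashlib


class ToyHSM:
    """Toy hardware security module: the private exponent lives inside
    this object and is never returned to callers. Textbook RSA with
    tiny primes -- for illustration only, never for real cryptography."""

    def __init__(self):
        p, q = 61, 53                    # toy primes (far too small for real use)
        self._n, self._e = p * q, 17     # public modulus and exponent
        # Private exponent: modular inverse of e mod phi(n). Never exported.
        self._d = pow(self._e, -1, (p - 1) * (q - 1))

    @property
    def public_key(self):
        return (self._n, self._e)        # only public material leaves the "HSM"

    def sign(self, data: bytes) -> int:
        digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % self._n
        return pow(digest, self._d, self._n)


def verify(public_key, data: bytes, signature: int) -> bool:
    """Anyone holding only the public key can check a signature."""
    n, e = public_key
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == digest


hsm = ToyHSM()
sig = hsm.sign(b"zone data to be DNSSEC-signed")
assert verify(hsm.public_key, b"zone data to be DNSSEC-signed", sig)
```

Even in this toy, the point the article makes holds: compromising the application code that calls `sign()` yields signatures, but not the key itself, which is exactly why the CrypTech board keeps keys on the far side of a USB/PKCS#11 boundary.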
  24. Here Is How You Can Say ‘Hello World’ In 26 Different Coding Languages Say ‘Hello World’ in 26 Different Programming Languages Programming has always fascinated people, and “Hello, world!” is the first phrase we try out when learning a new coding language. A “Hello, world!” program simply outputs “Hello, World!” on a display device. It is normally tried first because it is one of the simplest programs possible in almost any computer language. As such, it can be used to quickly compare syntax differences between programming languages, and to verify that a language or system is operating correctly. The following is a list of “Hello, world” programs in 26 of the most commonly used programming languages.

Bash

echo "Hello World"

Basic

PRINT "Hello, world!"

C

#include <stdio.h>

int main(void) {
    puts("Hello, world!");
}

C++

#include <iostream>

int main() {
    std::cout << "Hello, world!";
    return 0;
}

C#

using System;

class Program {
    public static void Main(string[] args) {
        Console.WriteLine("Hello, world!");
    }
}

Clipper

? "Hello World"

CoffeeScript

console.log 'Hello, world!'

Delphi

program HelloWorld;
begin
  Writeln('Hello, world!');
end.

HTML

<!DOCTYPE html>
<html>
<body>Hello World!</body>
</html>

Java

import javax.swing.JFrame; // Importing class JFrame
import javax.swing.JLabel; // Importing class JLabel

public class HelloWorld {
    public static void main(String[] args) {
        JFrame frame = new JFrame();            // Creating frame
        frame.setTitle("Hi!");                  // Setting frame title
        frame.add(new JLabel("Hello, world!")); // Adding text to frame
        frame.pack();                           // Setting size to smallest
        frame.setLocationRelativeTo(null);      // Centering frame
        frame.setVisible(true);                 // Showing frame
    }
}

JavaScript

document.write('Hello, world!');

jQuery

$("body").append("Hello world!");

Julia

println("Hello world!")

Logo

print [Hello, world!]

MatLab

disp('Hello, world!')

Objective-C

#import <Foundation/Foundation.h>

int main(void) {
    NSLog(@"Hello, world!");
    return 0;
}

Pascal

program HelloWorld;
begin
  WriteLn('Hello, world!');
end.

Perl 5

print "Hello, world!\n";

Processing

void setup() {
    println("Hello, world!");
}

Python

print "Hello, world!"

R

cat('Hello, world!\n')

Ruby

puts "Hello, world!"

Swift

print("Hello, world!")

VBScript

MsgBox "Hello, World!"

Visual Basic .NET

Module Module1
    Sub Main()
        Console.WriteLine("Hello, world!")
    End Sub
End Module

XSLT

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">Hello World</xsl:template>
</xsl:stylesheet>

Source
  23. This X-Shaped Sensor Will Alert You To Incoming Drones, So You Can Freak Out Starting at $10,000, DroneTracker will help you find where those sneaky UAVs are. As I stand along the shores of Lake Merritt on a breezy summer day, I try to imagine I’m someone with either enough fame or money to worry about over-zealous paparazzi and over-curious neighbors. And because it’s 2016, when anyone can buy an inexpensive drone and fly it over my wall, I need some sort of countermeasure, like Dedrone’s DroneTracker. I peer into a video feed and see an approaching object, marked as a green dot. It’s unclear at this distance whether it’s a bird, a plane, Superman, or, indeed, a drone. As the drone approaches my DroneTracker, the green dot morphs into a red dot and then a red square. DroneTracker uses a combination of microphones, wireless sensors, video cameras, and infrared sensors to detect incoming drones. As was shown during a recent demonstration, DroneTracker knows how to distinguish drones from planes and overhead birds. Within seconds, I get an e-mail, complete with a picture of the offending drone and even a description of what kind of drone the tracker thinks is coming toward me. For now, the DroneTracker follows drones (within about 1,600 feet), records their movements, and gives an alert. It has no built-in countermeasures, such as jamming signals, firing nets, or triggering shotgun blasts into the sky – though Dedrone does sell a jammer as an add-on. Such detection systems don’t come cheap: DroneTracker starts at $10,000 "per installation" and goes up into the hundreds of thousands of dollars. In real life, I’m probably not going to be putting a drone sensor on my house anytime soon.

Protecting and serving prisons and baseball stadiums Of course, it’s not just rich people who want to protect themselves against drones. 
Lee Jones, a Dedrone manager, explains that the German startup wants to sell to jails, prisons, embassies, police stations, military facilities, data centers, and even sports arenas. Earlier this year, the company announced a deal with the New York Mets baseball team to install 11 trackers along the outside of Citi Field in Queens. The company also has a deal with the Suffolk County Prison to help keep out unwanted contraband. For now, DroneTracker's capabilities are detection-only. "We absolutely understand that countermeasures are a key part of the equation," Jones told Ars. "But before you can apply any countermeasures, you have to find [what you're looking for]." Dedrone is one of a handful of companies that have staked their future on defensive mechanisms. In 2013, Ars reported on Drone Shield, a DIY-style project that has since grown into a full-fledged business venture and been acquired by an Australian company. As of April 2016, Drone Shield has about 200 installations, while Dedrone has about 150. These companies focus exclusively on detection, while other larger, more military-minded firms do both detection and neutralization. Israeli defense contractor Rafael Advanced Defense Systems, for example, is capable of jamming a drone’s radio signals. Other larger models made by military contractors can detect drones at much greater range than DroneTracker. "It's quite early to tell who is definitively rising to the top in this sector, and with the level of interest we have seen so far from larger companies, it's going to become a difficult environment for small startups," Arthur Holland Michel, co-director of the Center for the Study of the Drone at Bard College, told Ars. "The large companies can leverage their current products—for example, their anti-mortar systems, or their lasers—and simply retool them for counter-drone use. Startups, on the other hand, need to build systems from the ground up." 
For now, the drone detection sector of the broader drone market is still relatively small: so far, Dedrone has raised about $13 million in investment, while the consumer drone market is expected to reach $4.19 billion by 2024, according to Grand View Research. Even the federal government is taking drone detection seriously. In May 2016, the Federal Aviation Administration tested the FBI’s own system at John F. Kennedy International Airport in New York City. "We face many difficult challenges as we integrate rapidly evolving UAS technology into our complex and highly regulated airspace," said Marke "Hoot" Gibson, an FAA Senior Advisor, in a statement. "This effort at JFK reflects everyone’s commitment to safety." In August 2016, Dedrone will compete against seven other finalists in the Mitre Challenge, a competition offering $100,000 in prize money for the best "solution to detect and interdict" drones that weigh less than five pounds. "Drones are here to stay," said Jones, "and we need technology that can control and detect them." Source
  25. Does Your Smartphone Have An LCD Or AMOLED Display? Find Out Why It Matters! Find out how, and why, it is worth knowing whether your Android smartphone has an LCD or an AMOLED display. You may never have paid attention to your Android smartphone’s screen specs after buying it. Nearly 90 percent of Android smartphone buyers simply don’t know the difference an LCD or an AMOLED screen will make to their viewing experience. As of today, two principal technologies dominate smartphone screens: the traditional LCD panel and the newer AMOLED display. Most smartphone makers prefer to ship their phones with LCD screens as they are more cost-effective. However, bigger manufacturers like Samsung, LG and HTC are now going for AMOLED screens, as the technology has matured and become cheaper and more consistent in recent years. The benefit of an AMOLED display is that each pixel produces its own light, which means a separate backlight is not needed. This makes AMOLED screens more energy-efficient and gives them higher contrast levels and deeper blacks than LCD displays. However, AMOLED screens are more vulnerable to screen burn-in than their LCD counterparts. While most smartphone manufacturers advertise the display resolution of their devices, not many advertise whether the display is AMOLED or LCD. Apple is also said to be joining the LCD vs. AMOLED race, with its next iPhone 7 rumored to feature an AMOLED screen. In this article, we will show you how to find out whether your smartphone has an LCD or an AMOLED screen.

Method 1: An all-black image displayed on an AMOLED screen shouldn’t emit any light at all The first method makes it extremely simple to find out whether your phone has an AMOLED or an LCD screen. The pixels in AMOLED screens produce their own light, which means that black portions of the screen are areas where the pixels simply aren’t lit. Therefore, an all-black image displayed on an AMOLED screen shouldn’t emit any light at all. 
To try this method, start by downloading the all-black AMOLED test image above. Then, once you have saved the picture to your device, open it in a full-screen image viewer with the status bar and navigation bar hidden. Next, turn your smartphone’s brightness all the way up and take the device into a dark room. If you see any light coming from the smartphone—any light at all—your device has an LCD screen. On the other hand, you’ve got an AMOLED screen if the display stays completely dark while showing the test image at full brightness.

Method 2: Check the Spec Sheet To begin with, just go to GSMArena and search for your smartphone model. On the smartphone’s spec page, check for the keywords “AMOLED” or “LCD” under the Display category. Source
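If you can't find a suitable all-black test image, it's easy to generate one yourself. The sketch below is a stdlib-only Python script that writes a 24-bit BMP filled entirely with black pixels (the function name and output filename are just examples for this sketch):

```python
import struct


def black_bmp(width: int, height: int) -> bytes:
    """Build the bytes of a 24-bit BMP image that is entirely black.

    Black pixels are simply all-zero bytes, so no per-pixel work is
    needed; rows are padded to 4-byte boundaries as BMP requires.
    """
    row = width * 3                       # 3 bytes (BGR) per pixel
    padded_row = row + (-row) % 4         # pad each row to a multiple of 4
    pixel_bytes = padded_row * height
    # BITMAPINFOHEADER: size, width, height, planes, bits-per-pixel,
    # compression, image size, x/y pixels-per-meter, palette counts.
    dib = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24, 0,
                      pixel_bytes, 2835, 2835, 0, 0)
    # File header: 'BM' magic, total file size, two reserved shorts,
    # and the offset (14 + 40 = 54) where pixel data begins.
    header = struct.pack("<2sIHHI", b"BM", 54 + pixel_bytes, 0, 0, 54)
    return header + dib + bytes(pixel_bytes)  # zero-filled = pure black


# Example: a full-HD portrait test image for a 1080x1920 phone screen.
with open("amoled_test.bmp", "wb") as f:
    f.write(black_bmp(1080, 1920))
```

Copy the resulting file to the phone and view it full-screen as described above; since every pixel is (0, 0, 0), an AMOLED panel should show no light at all.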