Showing results for tags 'linux'.

Found 409 results

  1. Using Linux on a laptop used to be so tricky and tedious: that's clearly no longer the case. It's been a month since I wrote about getting a new HP Pavilion 14 laptop and loading Linux on it. My experience with it so far has been extremely good – it has done exactly what I wanted, I haven't had any trouble with it, I have used it, traveled with it, updated all of the various Linux distributions I loaded on it, and even added another distribution to it. First, I broke one of my own basic rules – never travel with only a new and untested laptop. I left for a three-week-plus vacation in the US the day after my previous posting. I used the laptop pretty much every day during the trip, and never had a problem of any kind. It was fast and reliable, and suspend/resume on closing/opening the lid worked perfectly. Battery life is extremely good – I've never actually managed to run the batteries completely out, but I can certainly say that they are good for 6-8 hours depending on your use. Durability has been good as well; although I never actually "drop-tested" it, I did carry it in my usual travel bag, which got tossed in and out of overhead bins, under seats, and in and out of cars, and suffered other normal everyday abuse. The screen has been bright and easy to read in all sorts of different light conditions, and the brightness up/down and volume up/down F-keys worked on all of the distributions I have loaded on it. I kept it up to date as I was traveling (to be honest, that also breaks one of my personal rules – don't risk updates on your only laptop while traveling). That means openSUSE Tumbleweed got hundreds of updates; Debian, Fedora and Manjaro got a fair number as well, and I updated Linux Mint from 19.2 Beta to the final 19.2 release, all without problems. I also decided to install Ubuntu 19.04 on it one evening when I had a bit of extra time. 
That turned out to be just as easy as the other distributions I had already installed – download the ISO, dump it to a USB stick, then boot that and run the installer. As with the other distributions, it didn't recognize the Realtek Wi-Fi card, but I was able to correct that the same way, and using the same downloads, as I had already done with Linux Mint and Debian. The one small problem that I ran into was one I already knew about: Ubuntu and Linux Mint have a directory name conflict in the EFI boot directory. I avoided that by creating a tiny EFI partition specifically for the Ubuntu installation. Oh, one thing that I am starting to see in a slightly different light is the UEFI firmware and boot configuration on this HP laptop. I've had a lot of negative things to say about HP laptops in this regard before, the most serious of which was that the UEFI boot configuration was difficult to understand and manage. Maybe that has improved since the last time I tried an HP, and maybe I have learned a bit more about managing UEFI boot, but for whatever reason(s), I am starting to appreciate the predictability of the HP configuration. The boot sequence doesn't change no matter what any installed distribution does – it only changes when you go into BIOS setup and change it. This works out just fine for me, because I want Tumbleweed to be my default boot no matter what other distributions I install. For example, even though Ubuntu tried to make itself the default boot when I installed it, the laptop still brought up Tumbleweed when I rebooted. While I was traveling I was asked by several friends who keep up with my blog if I regretted having wiped Windows 10 from this laptop unnecessarily (see the comments on my previous post for details). My answer was a very clear 'no': there was not a single situation where I needed or wanted to boot Windows, and I was happy to have the additional disk space. So, what's next for this system? 
Well, I return to Amsterdam on Monday, and I will be taking it with me there. I will be using it with the usual assortment of beamers and presenter controls, and using the browsers and application software that I need for that environment, and I don't expect to have any difficulty with that either. That's about all there is to report at this time. I'd love to pass along some really juicy 'tips and tricks' about keeping the laptop working properly with Linux, but there is honestly nothing to say. I bought the laptop, I wiped Windows, I loaded Linux, and it has been smooth sailing ever since. I guess that in itself is a pretty good 'tip', since using Linux on a laptop used to be so tricky and tedious that there was a dedicated website with model-specific information, advice and configuration tips. This is clearly no longer necessary. Source
  2. Last month I made a very unsubtle yawning sound in the direction of Skype due to a lack of updates to the official Skype Snap app. The popular VoIP app sat unloved, with no stable updates, for six whole months. Fast forward a few weeks from calling them out and I’m pleased to report that whatever blockage was lodged in the build machine pipe-work has been well and truly flushed out. Not only is the Skype Snap app once again up to date on the Snapcraft store — hurrah! — but some freshly prepared ‘insider’ builds are available for the more adventurous to play with — double hurrah! The latest Skype Snap build is the same version as that available to download from their website. Among a bunch of general improvements, bug fixes, etc, this build debuts a brand new icon (left: old icon; right: new icon). The new Skype icon is dressed in Microsoft’s new ‘fluent’ design language, a style it has been slowly rolling out across its core products, like Microsoft Office, since last November. The new Skype icon eschews the flat design of the former for a layered look using gradients and shadows. It’s not revolutionary, but I think it’s a subtle improvement over the original. Keep the updates coming, Skype! Skype and Spotify (which also had a recent hiccup to its update frequency) are two of the best known apps on the Snap store. Their lacklustre Snap maintenance (despite new Linux builds being released through other methods) was both disappointing and discouraging — and not just to Snap app users! Indie app developers debating support for the format may have been put off, inferring that supporting the format is more work than (I’m told) it actually is. Hopefully things will continue to tick along nicely from here on. Snaps might not be the packaging format tickling everyone’s tastebuds, but they’ve proven to be hugely successful so far — frequent updates to tentpole software like Skype will help ensure that this success continues. Source
  3. Two privacy-first, open-source platforms want to give consumers what the tech giants won’t. But starting from scratch isn’t easy. For years, the devices and services we use have ever more aggressively monitored our activities and mined our data. But as consumers have grown increasingly attuned to privacy concerns, solutions have been appearing to help them evade tracking. Browsers such as Brave and search engines such as DuckDuckGo play up their privacy-first design. When it comes to the dominant mobile operating systems, Google has talked about preserving privacy by providing more transparency and exposing opt-out controls. Apple, on the other hand, has sought to create services that remove the opt-out requirement by not collecting data in the first place, turning privacy preservation into a key differentiator. But many users aren’t comfortable even with Apple’s approach. Recently, two groups have created new platforms that avoid sharing data with Google, Apple, or any other entity behind the scenes. Nevertheless, their product-development approaches parallel the market strategies of Google and Apple, with some striking differences. One of these is the e Foundation. Its eOS aspires to be a Google-free version of Android that has a wide range of device support. It’s not a new idea: One existing alternative to Google’s flavor of Android is LineageOS, a fork of what had been the leading Google Android alternative, CyanogenMod. However, according to Gaël Duval, head of e Foundation, producing a version of Android that is completely Google-free requires far more effort than just stripping out Google apps such as Gmail; even LineageOS sends some data through Google’s servers or relies on its services. 20 years ago, Duval created Mandrake Linux, a more approachable distribution of the open-source operating system. 
Drawing on this experience, he wants to make replacing Google’s Android with the foundation’s eOS version as simple as clicking a button on an installer app. The software’s current beta version supports about 75 different smartphone models. For now, though, the process is similar to installing any custom ROM on an Android phone—that is, not very convenient. To bridge the gap, e Foundation is gearing up to sell a number of refurbished Android phones with the current version of eOS preinstalled. Next year, it intends to offer its own new, optimized smartphones with the OS preinstalled. Until e Foundation can offer its own hardware designed from scratch, it will have to rely on third-party hardware drivers that it doesn’t control. Avoiding that liability is one of the main goals of Purism and its forthcoming smartphone, the Librem 5. A social purpose corporation with a charter to consider goals beyond profit maximization, Purism has been shipping laptops with a strong focus on security and privacy since 2015. It’s used the revenue from its laptops to fund development of its first smartphone. Like its previous devices, the phone runs Purism’s own version of Linux, giving it even more distance from the Google ecosystem than e Foundation’s Android-based system. (Image: LineageOS on a Samsung Galaxy Note 3.) With eOS, e Foundation is taking a Google-like approach, trying to get its software on as many smartphones as possible in order to reach ubiquity. Purism, by contrast, is pursuing Apple-like vertical integration by developing its own operating system, optimizing hardware to run on it, and even launching a group of services under the banner of Librem One. While Purism’s product development approach has similarities to Apple’s, there are some critical differences. Unlike Apple, Purism makes software that’s open and free to be used by other developers. 
The company’s devices are endorsed by the Free Software Foundation, and it will only bundle apps on its smartphone that are similarly endorsed. Purism also has very different design goals than Apple. While Apple is obsessed with integration and sleek design, Purism’s smartphone will include dedicated hardware switches for the camera and microphone, allowing users to swiftly and definitively turn off those features in the interest of privacy. Instead of integrating as many functions as possible onto its CPU, the phone will err on the side of security with distinct CPU, GPU, and modem modules. It will also have a removable battery, a feature that Apple long ago abandoned in the interest of svelte devices. Purism’s design decisions help contribute to the Librem 5’s 14-mm profile, which is thick for a modern smartphone. Dissatisfied with the level of openness from leading smartphone chip vendors, Purism is using a processor from NXP Semiconductors. The Dutch company, which was long an acquisition target of Qualcomm, is generally known for automotive processors and sensors. (Image: the Librem 5 Linux smartphone.) Purism plans to start with the basics of phone calls and texting and add functionality from there. One advantage it has is that its smartphone runs the same PureOS Linux distribution that its laptops use, so a pipeline of existing apps could be adapted to run on the smartphone once they’ve been rejiggered to work on a smaller display. The company seems unconcerned that its devices’ slow gestation and relatively high prices—it’s taking Librem 5 preorders for $699—will faze consumers. It believes other manufacturers will eventually adopt its open-source platform, but only after it has proven its viability. Both of these startups’ efforts are ambitious and thoughtful, but they’re taking on one of the most daunting challenges in all of consumer technology. 
From Windows Phone to the enthusiast-backed Sailfish OS, alternative platforms have failed to gain a foothold in the era of the Apple-Google smartphone OS duopoly. Even if Purism and e Foundation achieve all of their platform goals, they will still have to make their case for a mobile experience that lacks virtually all of the most popular apps that consumers use today. While both camps consider Apple an enemy, it’s done more than any other mainstream tech company to advocate for privacy, a move that could help these new entrants. On the other hand, apps that mine our data, such as Facebook and YouTube, remain some of the most popular offerings on iOS. Apple recognizes that it must balance the services consumers know they want with the privacy Apple believes they need. One way or another, these smartphone upstarts will also need to strike that balance, in a way that makes sense to a critical mass of consumers. Source
  4. So, this happened quite some time ago, when I installed Linux Mint onto a laptop which had a legacy BIOS (yes, quite an old system). Since then, I don't have access to the system BIOS. All I can do is select the boot device, and it boots into the OS. The BIOS setup screen simply won't show up. My understanding is that the installation went the UEFI way and messed something up. I tried to get it working by taking the hard drive out to see if that made a difference, but it does not work that way either. So, any way to get this thing fixed up? It's a Lenovo G580 with an Intel Pentium dual-core processor. What went wrong, and what can be done to get it back how it was? *I did successfully re-flash the BIOS from a BIOS file via USB, but I'm still getting the same thing. Now the Linux installation has also gone haywire by booting into initramfs, adding insult to injury, and I want to get rid of this whole Linux sh*t and install Windows back again with a normal, accessible BIOS.
  5. Operating systems are dwindling towards irrelevance, and that’s no bad thing. When PC Pro was born nearly 25 years ago, it didn't start life under that name: It entered the world as Windows Magazine. Magazines gathered in little tribes. There was PC Pro, PC Magazine, Computer Shopper and several others all vying for the Windows users, and then there were MacUser and MacFormat trying to tempt the Macolytes. Later on, the Linux mags came along, once the writers had managed to unjam their beards from the printer. There wasn't – with the possible exception of the ultra-snobby Wired – one magazine that served all those audiences, because why would there be? What would a Mac owner want to know about the new advances in Windows 98? It just didn't compute. A quarter of a century later, the operating system is on the brink of irrelevance. Nothing much is defined by the OS that you use. You could be running macOS, Windows, Android or iOS, even desktop Linux, and to a large extent your day-to-day work would be unaffected. Files flow freely from one OS to another with compatibility rarely raising its ugly head. Computing's tribes have never rubbed along so harmoniously. This outbreak of peace has had a dramatic effect on the computing landscape, and nowhere more so than at Microsoft. The company's mantra used to be "Windows everywhere"; now it's getting harder to find mention of Windows anywhere. New Windows releases used to be huge staging posts; now they're little more than blog posts. The recent Build conference, once the place where we tech journalists flocked to get a full day's advance briefing on all the new features in the next version of Windows, barely made mention of the W word, according to those who were there. Microsoft's embrace of Linux and its conversion to the Chromium engine for the Edge browser are based on a realisation that Microsoft failed to grasp for too long: despite those billion or so users, the world doesn't revolve around Windows anymore. 
It's hard to think of anything but niche software packages that could survive by chaining themselves to a single OS anymore. In the process of researching and writing this column, I've gone from Word on my Windows laptop to finishing it off on the train using Word on my iPad Pro. I read the background articles using Chrome on my Android phone, clipped quotes and notes to OneNote mobile, which I've accessed on the other platforms, and saved the copy itself in Dropbox. Had any of these applications or services been tied to a particular OS, I wouldn't be using them. Twenty years ago, Sun boss Scott McNealy used to lose his rag at every press conference when asked about Windows. "Who cares about operating systems?" he would bellow. "Nobody knows what operating system is running inside their car or their mobile phone," he would argue, in the days before iOS and Android were even conceived. They were, to his mind, an irrelevance. He was wrong at the time, but he would be entitled to say "I told you so" if he were still around to swagger into press conferences now. The OS is dwindling in importance. Like a good football referee, you barely notice it's there at all. Even Microsoft has sussed that the operating system just has to get out of the way, which is why it's worked hard to reduce unwanted interruptions from security software and the dreaded Windows Update. To use the favourite phrase of a former editor, Windows has learned to "just deal with it". While a small part of me misses the tribalism and the pub banter with the smug Mac brigade (they probably had reason to be smug, truth be told), the "anything for an easy life" part of me is relieved. I can pick up almost any device and be confident that it will let me get on with the day job. Only a few specialist apps are tied to a particular machine. Windows doesn't really matter any more – it's a good job we changed PC Pro's name all those years ago. Source
  6. A brand new version of Latte Dock, an application launcher and task switcher for the KDE Plasma desktop, is now available for download. Latte Dock v0.9 is the first update to the app this year, and follows a succession of alpha and beta releases. As such, all of the new features Latte Dock 0.9 boasts have been thoroughly tested by the dock’s die-hard user base, including:
    • Dock colour can change based on the active window
    • A selection of new ‘Running Indicator’ styles, including Unity
    • Support for multiple layouts in different Plasma Activities
    • Support for sharing dock layouts
    • Various ‘settings’ usability tweaks
    • Live editing mode
    • Improved dock badges, including a 3D design
    • The ability to enable persistent shortcut badges
    • Bug fixes
    Do you want a better look at all of the above? Of course you do — so the Latte Dock developers have duly obliged by creating a slick promo video to pimp the new release. You can download Latte Dock 0.9 from the KDE website, Phabricator, and the KDE Store. Do note that Latte Dock 0.9 requires Qt 5.9 or later, and Plasma 5.12 and up. Source
  7. Heard of the Jade desktop environment? I’ll admit that, until this week, I hadn’t — but I like what I see! The Jade desktop (the ‘Jade’ standing for ‘Just Another Desktop Environment’) is a Linux desktop shell based (primarily) on web technologies (eek!). Currently the shell is only readily available on Manjaro Linux. But since it’s built using a mix of WebKit2, GTK, HTML, CSS, JavaScript, and Python, it is (theoretically at least) easily transferable to other Linux distros, including Ubuntu. The Jade desktop shell is far from being a finished product. Indeed, its sole developer, Vitor Lopes, prefers to pitch it as a “fully functional prototype […] subject to changes [at] anytime”. Prototype or not, the Jade desktop explores some interesting workflow dynamics that are (somewhat) hard to describe. You’ll get a better feel for how the Jade desktop environment works if you watch this video demo: An interesting, novel approach, isn’t it? Admittedly the task-orientated workflow demoed above won’t suit everyone’s tastes — but that’s precisely why it’s cool! I love how easy open source makes it for folks to explore and experiment with daring desktop differentiations like this. Not that this effort is itching to become the Next Major Linux Desktop Environment™. The developer behind Jade says he built it just to “learn Python …and keep my coding skills sharp.” Somewhat satisfied with his effort, he chose to adapt it for desktop use and make it freely available for others to use, hack on, or ignore at their leisure. If you’re keen to play more (but don’t fancy building it from source) you can grab an ISO of the alpha-quality “Manjaro WebDad” spin (which also features other web-centric technologies). If you like the look of this DE alternative you should check out the tiling Material Shell GNOME extension. It’s equally rough around the edges, but is doing some interesting things. Source
  8. The Windows operating system (OS) has a 79.46 per cent market share of the desktop computer sector. However, in every other computing sector, the complete opposite is true, with Linux holding the majority of the market share. Here Florian Froschermeier, technical sales manager of industrial router manufacturer INSYS icom, explains why Linux is in such a dominant position. Technology giants like CERN, Amazon and Google, and the majority of server-side and embedded technology, all use Linux for their systems and applications. There are many reasons for the OS’s popularity: Linux is a licence-free, open-source OS that is highly flexible, customisable and portable, and of course comes at no cost to the user. Free is great, unless the product is not. As Linux is open-source, its source code is available to be redistributed and modified freely and with ease. With over 80 per cent of industrial applications using Linux systems, the OS is constantly being patched and upgraded by a legion of highly technical users, ensuring it remains secure and reliable, without costing you a penny. The large variety of applications that use Linux means that the people updating the software come from a diverse range of industrial backgrounds. Linux has, therefore, been and can be customised to work in many varied applications. This has allowed the OS to become incredibly flexible due to its constantly growing range of use cases. The main reasons for Linux’s popularity are its reliability and the lightweight nature of the OS. However, equally as critical is its portability. Many operating systems are not compatible with different versions of themselves and require modifications to the system or the software before they can work. 
Linux distributions, by contrast, are manufacturer-neutral by design, meaning they have been programmed, following portable operating system interface (POSIX) standards, to reduce or remove the issues of porting software between different Linux environments.
Why businesses choose Linux
Another major benefit for businesses that use Linux is that they can create their own version of Linux for specific use cases. For example, most major technology businesses have their own version of Linux, such as Amazon Linux and gLinux from Google. At INSYS, we designed our own version, icom OS, which was developed to fulfil our need for a router-specific OS. This has allowed us to create a secure, portable and lightweight OS, specifically made for use with routers. Other industrial routers use off-the-shelf OSes that, while still functional, can become saddled with extra parts that slow down operations and leave them vulnerable to attack.
Linux into the future
The current and future potential for Linux-based systems is limitless. The system’s flexibility allows the hardware that uses it to be endlessly updated. Functionality can, therefore, be maintained even as the technology around the devices changes. This flexibility also means that the function of the hardware can be modified to suit an ever-changing workplace. For example, because the INSYS icom OS has been specifically designed for use in routers, it has been optimised to be lightweight and hardened to increase its security. Multipurpose OSes have large libraries of applications for a diverse range of purposes. That is great for designing new uses, but these libraries can also be exploited by actors with malicious intent. Stripping these libraries down to just what is necessary through a hardening process can drastically improve security by reducing the attackable surface. Overall, Windows may have won the desktop OS battle, with only a minority of desktops running Linux. 
However, desktops are only a minute part of the computing world. The servers, mobile systems and embedded technology that make up the majority are predominantly running Linux. Linux has gained this position by being more adaptable, lightweight and portable than its competitors. Source
  9. Darling is the long-standing (albeit for some years idling) effort to allow macOS binaries to run on Linux; it is akin to Wine, but focused on an Apple macOS layer rather than Windows. This summer it's been moving along and seeing some new developer contributions. The Darling project just published their Q2 highlights, with new contributors on board and progress being made at varying levels of the stack. They have begun stubbing out more frameworks, including AGL, Carbon, AddressBook, CoreServices, and ApplicationServices. Darling's AppKit implementation has also seen a number of improvements, as well as work on support for nested frameworks. There was also a fix in 32-bit application support, and various other low-level bugs were worked out. In fact, they continue to plan to support 32-bit Mac apps on Linux under Darling as a "selling point" for those wanting to still run 32-bit Mac software, with Apple moving macOS officially into a 64-bit-only world. More details on Darling's development via their Q2 summary. Source
  10. Support for ZFS, eCryptFS, XFS and Btrfs is back. Dropbox has walked back its November 2018 decision to stop working with filesystems popular among Linux users. That decision saw the sync ‘n’ share giant decide not to support “uncommon” filesystems, leaving it happy to work with just NTFS for Windows, HFS+ or APFS for Mac, and Ext4 for Linux. Developers and Linux users were not happy. But their frowns can now turn upside-down, as a support note for the forthcoming Dropbox version 77 client update published today says it will “add support for ZFS (on 64-bit systems only), eCryptFS, XFS (on 64-bit systems only), and Btrfs filesystems in Linux.” The post doesn’t explain Dropbox’s reasons for the change, but they’re not hard to guess. Dropbox’s ambition is to have integrations with its platform become part of enterprise workflows. Doing that means offering tools that developers favour. And as the majority of servers now run Linux, that means supporting filesystems common among Linux users. Source
  11. New Delhi: After the International Business Machines Corp. (IBM) completed the acquisition of Red Hat for $34 billion earlier this month, a top executive from the iconic software company with an open source development model has said that it was a "match made in heaven" that will help it accelerate growth globally, including in India. In India, Red Hat, which specialises in Linux operating systems, has engineering facilities in Pune and Bengaluru. "We do think that there is an opportunity to accelerate growth (after the acquisition) -- not just in the India market, but also in the broader market. But it does exist for sure in India. I think we have an opportunity to scale much more quickly," John Allessio, Senior Vice President and General Manager, Global Services, Red Hat Inc, told IANS in a freewheeling chat here. The acquisition by Big Blue won't change Red Hat's "mission", nor will it curtail its contribution to the open source community, Allessio stressed. Raleigh, North Carolina-headquartered Red Hat reported $3.4 billion in revenue in the 2019 financial year -- up 15 per cent year-on-year. "Red Hat has experienced 69 quarters of continuous growth, which means every quarter you are hiring people across the world, and that is going to continue for sure," Allessio said. "After the acquisition, we are still an independent software company. What changed is that instead of having an external shareholder, we now have an internal shareholder," he explained. Founded in 1993, Red Hat is credited with bringing open source -- including technologies like Linux, Kubernetes, Ansible, Java and Ceph, among others -- into the mainstream for enterprises. Today, Red Hat products and services are widely used by government agencies as well as emerging companies in technology, finance, healthcare, civil aviation and other industries. 
Armonk, New York-headquartered IBM particularly hopes that Red Hat's open hybrid Cloud technologies would help it position itself as a leading hybrid Cloud provider. "At the core of what we do is turning projects in the open source communities into products because at the end of the day, our customer is an enterprise software customer," Allessio said. IBM also participates in open source communities, but it does not "productise" the open source projects. Instead, it develops its own proprietary products. "I think it is an opportunity for IBM to learn this DNA of Red Hat -- how do we do it -- and decide -- they have not yet decided -- if this applies to the IBM software development process. IBM has already stated that they are very much interested in our services division," Allessio added. What has fundamentally changed after the acquisition is that IBM is no longer just a business partner for Red Hat, it is now a strategic business partner. "From a go to market to operations, development and support, IBM does not help Red Hat. They help us by being a new strategic business partner," he said, adding that Red Hat will continue to operate with its other business partners. Source
  12. Former SAP COO moves to the top job at SUSE, the oldest Linux business and, more recently, an open-source cloud power. SUSE and SAP have been close partners for 20 years. Indeed, SUSE's single biggest customer is SAP. So, it comes as no surprise that Melissa Di Donato, SAP's former COO, has just been named SUSE's CEO. London-based Di Donato is a well-known technology leader. In particular, she has a proven track record in sales and business operations. Besides being SAP's COO, she was also the company's chief revenue officer. In its latest quarter, SAP saw revenues increase 11% year-over-year. Much of that came from the cloud -- where SAP saw 40% year-over-year growth. SAP's cloud is built on SUSE's Linux servers and OpenStack cloud. In a statement, Di Donato said: "There is no greater honor than to lead SUSE into its next chapter of accelerated growth and corporate development. SUSE is at the cusp of a historic shift as open-source software is now a critical part of any thriving enterprise's core business strategy." Although the IBM-Red Hat acquisition has seen the most attention, Di Donato optimistically stated: "We are well-positioned to emerge as the clear leader of this shift, with our ability to power digital transformation for our customers at their own pace and with agile, enterprise-grade open-source solutions, edge to core to cloud. What is unmistakable is our unlimited ability to deliver value to our community, customers, partners, and shareholders -- all of whom have been the bedrock of SUSE's success. As exciting as SUSE's growth and innovation have been over the past several years, we are just getting started." Di Donato succeeds Nils Brauckmann. While officially Brauckmann is retiring, there seems to be more to the story. On LinkedIn, Brauckmann wrote: "I care very deeply for the SUSE business and its employees, and this difficult decision is based entirely on personal reasons. 
I am pleased to be handing over the reins to such a talented and accomplished leader as Melissa Di Donato." In his SUSE statement, Brauckmann added: "She is a proven and dynamic change agent, and many of her achievements have occurred in subscription businesses that exist in high-growth cloud environments." Since SUSE's generally hands-off owner EQT demands growth above all, perhaps EQT felt Di Donato was a better choice to move SUSE forward in an increasingly competitive Linux-based cloud market. During his tenure, Brauckmann delivered eight years of continuous expansion, including record-breaking revenues in FY18. He also oversaw SUSE's return to being an independent Linux company. Jonas Persson, chairman of SUSE's board and EQT advisor, recognized Brauckmann's accomplishments as well: "Nils has successfully led the company to what it is today, and he hands over the business in good shape, with 2018 in many ways marking a top year in the company's history." Under Di Donato's leadership, SUSE will continue to focus on growth and expansion. That means she's expected to advance SUSE's core business and emerging technologies, both organically and through add-on acquisitions. According to SUSE, "SUSE's independence will continue to be aligned with a single-minded focus on delivering what is best for customers and partners, coupled with full control over its own destiny." Di Donato will be SUSE's first female CEO. She takes charge of SUSE on Aug. 5. Source
  13. The Linux kernel team has released stable branch updates as well as the first release candidate for Linux 5.3. There are some interesting IRQ-related fixes in 5.2.2, 5.1.19 and 4.19.60, and not much of interest fixed in 4.14.134 and 4.9.186. Support for AMD Navi GPUs is the biggest highlight in 5.3rc1. Commit f29cd95ca0b3, which was added to 5.2.2, 5.1.19 and 4.19.60, stood out as particularly interesting. It relates to the handling of spurious interrupts. "Quite some time ago the interrupt entry stubs for unused vectors in the system vector range got removed and directly mapped to the spurious interrupt vector entry point. Sounds reasonable, but it's subtly broken. The spurious interrupt vector entry point pushes vector number 0xFF on the stack which makes the whole logic in __smp_spurious_interrupt() pointless. As a consequence any spurious interrupt which comes from a vector != 0xFF is treated as a real spurious interrupt (vector 0xFF) and not acknowledged. That subsequently stalls all interrupt vectors of equal and lower priority, which brings the system to a grinding halt. This can happen because even on 64-bit the system vector space is not guaranteed to be fully populated. A full compile time handling of the unused vectors is not possible because quite some of them are conditonally populated at runtime. (..) Fixup the pr_warn so the actual spurious vector (0xff) is clearly distiguished from the other vectors and also note for the vectored case whether it was pending in the ISR or not." Nobody wants their system brought to "a grinding halt". The patch is not present in the change-logs for the new stable 4.14.134 and 4.9.186 kernels, which indicates that the problem introduced "quite some time ago" was due to changes made sometime between 4.14 and 4.19. We never experienced the grinding halt while using 4.19+ kernels, but it is still nice to know that we won't. 
A fix from Russian developers for Intel networking cards using the e1000e driver is the only other fix that really stood out in these new kernels. Commit d17ba0f616a08f597d9348c372d89b8c0405ccf3 was added to all the newly released stable kernel branches. "Driver does not want to keep packets in Tx queue when link is lost. But present code only reset NIC to flush them, but does not prevent queuing new packets. Moreover reset sequence itself could generate new packets via netconsole and NIC falls into endless reset loop. This patch wakes Tx queue only when NIC is ready to send packets." Linux 5.3 release candidate 1 There is not yet a new code-name for Linux 5.3; the makefile still says "Bobtail Squid", which is the code-name used by 5.2 kernels. Support for AMD's new "Navi" GPUs is among the bigger highlights in the next Linux kernel. These cards are apparently equipped with two graphics rings per graphics pipe, one primary and one for async. There are also some rather gigantic changes to the Intel i915 graphics driver. A lot of the changes appear to be files that are moved around; display-related files like intel_display.c, which used to live in drivers/gpu/drm/i915/, have moved to drivers/gpu/drm/i915/display/. This makes it a bit difficult to tell whether there is anything new for the Intel Iris driver or just old furniture being rearranged. The kernel scheduler has gained a new feature which can track the clamped utilization of each CPU based on the RUNNABLE tasks scheduled on it. The new option is called CONFIG_UCLAMP_TASK. It makes it possible to specify a minimum and maximum range of CPU utilization allowed for RUNNABLE tasks. Support for loading compressed firmware has also been added. A new network class called CONFIG_NET_VENDOR_GOOGLE has been added. Google is apparently planning to make network cards or switches or other networking hardware. Perhaps they already do and only use it internally. 
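As a quick aside, whether a given kernel was built with the new clamping option can be checked from a shell; a minimal sketch, assuming the distribution exposes its kernel config in one of the two usual places:

```shell
# Look for the new CONFIG_UCLAMP_TASK option in the running kernel's config.
# Kernels older than 5.3 will simply not have it.
cfg=""
if [ -r /proc/config.gz ]; then
    cfg=$(zcat /proc/config.gz | grep UCLAMP || true)
elif [ -r "/boot/config-$(uname -r)" ]; then
    cfg=$(grep UCLAMP "/boot/config-$(uname -r)" || true)
fi
echo "${cfg:-CONFIG_UCLAMP_* not found (pre-5.3 kernel or config not exposed)}"
```

Either path may be absent depending on how the distribution packages its kernels, which is why the sketch falls back to a plain message rather than failing.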
Sensors on power supplies can now be exposed to the kernel using the HWMON interface (POWER_SUPPLY_HWMON). Intel is up to something on the CPU side: 5.3rc1 has support for "CometLake-H Platforms" and "CometLake-LP Platforms". There's also a new Intel "Speed Select Technology interface" driver. The Chinese smartphone and laptop manufacturer Xiaomi has added a WMI key driver for the special function keys on its laptops. On the security side there's now support for enabling heap memory zeroing both on allocation and on free. The rest of the changes between 5.2 and 5.3 are dull and not at all interesting. There are some new sound chip drivers and a lot of small changes to existing drivers. The change-set between 5.1 and 5.2rc1 had a whole lot more in terms of significant changes. The non-graphics changes are so small it's like reading a change-log between minor versions like 5.2.1 and 5.2.2; there are oddly few large or important changes in 5.3rc1 outside the GPU tree. Kernel creator and chief architect Linus Torvalds had this to say about the 5.3rc1 release: "This is a pretty big release, judging by the commit count. Not the biggest ever (that honor still goes to 4.9-rc1, which was exceptionally big), and we've had a couple of comparable ones (4.12, 4.15 and 4.19 were also big merge windows), but it's definitely up there. The merge window also started out pretty painfully, with me hitting a couple of bugs in the first couple of days. That's never a good sign, since I don't tend to do anything particularly odd, and if I hit bugs it means code wasn't tested well enough. In one case it was due to me using a simplified configuration that hadn't been tested, and caused an odd issue to show up - it happens. But in the other case, it really was code that was too recent and too rough and hadn't baked enough. The first got fixed, the second just got reverted. Anyway, despite the rocky start, and the big size, things mostly smoothed out towards the end of the merge window. 
And there's a lot to like in 5.3. Too much to do the shortlog with individual commits, of course, so appended is the usual "mergelog" of people I merged from and a one-liner very high-level "what got merged". For more detail, you should go check the git tree. As always: the people credited below are just the people I pull from, there's about 1600 individual developers (for 12500+ non-merge commits) in this merge window." Source
  14. Best Linux Distro for Windows 7 Refugees: Manjaro KDE With the impending destruction of Windows 7 (read: loss of official support) looming on the horizon, many users may find themselves debating a move to Windows 10 or a jump to an alternative such as macOS or Linux. There are hundreds of Linux distributions to choose from, but I’d like to personally throw my two-copper in and suggest Manjaro KDE. What is Manjaro? And KDE? Manjaro is based on Arch Linux, but I like to describe it to people as the “Ubuntu of Arch” for its user-friendly design choices and its particular attention to helping new Linux users learn what they are doing. Another great perk of the Arch foundation underneath Manjaro is the use of the Arch Linux Wiki. The Arch wiki is easily one of the largest resources of help, information, and know-how for all Linux users — regardless of distribution, many of its articles can be applied. Back in the spring of 2017 I wrote a series of articles discussing various desktop environments for Linux systems, such as Cinnamon and KDE just to name a couple, and overall, for Windows users who have decided to take the plunge, I’m recommending KDE. Regardless of distribution, KDE is filled with eye candy, is highly customizable, ships one of the most powerful file browsers available (Dolphin), and is deeply documented with a long-standing history (KDE was created in 1996). However, KDE is not without its downsides: Arguably the most resource-intensive desktop environment Very in-your-face customization access that can be jarring to Windows users not used to having such broad control over their UI setup Arguably more geared towards power users than some other environments Some feel that KDE is too cluttered Looking at this list, with the exception of the increased resource usage compared to most other desktop environments, I personally find the other downsides to really be ‘benefits’, but to each their own. 
Looking back at Manjaro as the choice of environment, I want to note that I really looked at: Ubuntu Linux Mint Debian Fedora OpenSUSE In the end, I felt that Manjaro held the best combination of user-friendliness and raw power. Yes, any of the above-listed systems will work and run pretty much the same software; but Manjaro makes everything simple, easy, organized, and smooth, as well as featuring some great built-in tools such as Pamac or Octopi depending on the environment chosen (though I always uninstall Octopi and install Pamac). The kernel-changing option in the Settings Manager is simply spectacular, and I’ve enjoyed many of the software choices by the Manjaro team (including Steam by default, Firefox, Thunderbird, and Yakuake). Source: Best Linux Distro for Windows 7 Refugees: Manjaro KDE (gHacks)
  15. Hi All, Just wondering, if Nokia with Canonical makes Ubuntu Touch devices, would people love them and buy them to help support Ubuntu Touch development? My wish is that Nokia should join hands with Canonical to make Ubuntu devices. If that happens, all the lazy s/w app giants will create apps supporting the Ubuntu Touch platform. I'm calling the s/w app giants lazy because if they had supported Ubuntu Touch earlier, the OS could have been overtaking Android & Windows Phone (or Windows 10 Mobile) by now. All Nokia & Ubuntu/Linux fans (incl. myself) and devs out there, please suggest that Nokia create Ubuntu devices in the future ASAP. Please vote and provide feedback in the comments (if any). Members, please note that I'm referring to the future and not now. I'm not a fool to ask for/suggest a change in the first year of the re-emerged Nokia. @steven36 & @teodz1984: Please read the description carefully before providing comments.
  16. brolly

    Any Linux Users Here?

    Hi guys, I know there are a lot of Windows users in this forum. Are there any Linux users around? I use a Linux distro as my host machine and I run Windows in guest machines. It would be nice to have a Linux chat and share views and ideas here
  17. It's July 2019. It's been roughly 11 months since Valve introduced the world to a new version of Steam Play and a compatibility layer called Proton. Today, nearly 5700 Windows-only Steam games run great on Linux with just a click of the "install" button, and more become compatible every month as Valve and the Proton team work their development magic. To say gaming on Linux has come a long way in a brief time is an understatement. But I just wrapped up some game benchmarks on the System76 Oryx Pro laptop that downright surprised me, and I wanted to share them with you. The System76 Oryx Pro laptop running Pop!_OS 19.04 Having a game run on Linux that isn't built for Linux? That's certainly a cool thing. Performance is another thing entirely. It's not a compelling enough argument for Linux enthusiasts to tell their Windows-using friends that "hey, but the games you play run on Linux!" They have to run well. Maybe the notion of switching to Linux is an enticing one for the stability and increased privacy control, but you can't show me an enthusiast gamer who'll willingly trade that for a 20% drop in the framerates they're used to on their hardware, right? That 20% is an important number, albeit not a scientific one. When I got into Linux last year, that's the figure I kept seeing thrown around. "Sure, it runs on Linux, but at about 15% to 20% lower FPS." With constant improvements to the kernel, Vulkan drivers and Steam Proton, however, I think the situation has changed. Enough of my rambling. Here's what we're looking at today: I have an Oryx Pro laptop from Linux PC manufacturer System76 (the company behind Thelio and Pop!_OS). It packs an Nvidia RTX 2070 Max-Q graphics card, a 6-core Intel Core i7-8750H CPU and 16GB of RAM. It's a beautiful and powerful machine. It also happily runs Windows 10 because it's basically a Clevo, and System76 linked me to a couple of missing drivers to get stuff like the RGB keyboard lit up. So I figured, what the heck? 
Let's do a head-to-head, Linux versus Windows 10 benchmark run and see what shakes out. I tested 6 games total (2 that are native to each platform, and 4 that run on Linux via Steam Proton) with Windows 10 version 1903 and the Ubuntu-based Pop!_OS 19.04 with Linux kernel 5.0. On the Nvidia driver side, Windows 10 is rocking version 431.36, and Pop!_OS utilizes the latest stable Nvidia proprietary driver, which is 430.34. The latest Steam beta is used on both installations, and all games are running from the 500GB NVMe drive. To avoid any run-to-run variations, only built-in benchmark tools were used. Game Quality Settings: F1 2018: Ultra Preset with TAA, benchmark track is Australia with heavy rain Dirt Rally: Ultra Preset, 8x MSAA Shadow of the Tomb Raider: High Preset with TAA Total War: Three Kingdoms: High Preset, "Battle" benchmark Strange Brigade: Ultra Preset, Async Compute off Hitman 2: High Detail & Textures, all other settings maximum, Miami benchmark scene System76 Oryx Pro: 1080p Gaming on Steam with Windows 10 and Pop!_OS 19.04 Let's pick these results apart. Dirt Rally and Total War: Three Kingdoms are both native to each operating system (with the Linux ports handled by Feral Interactive). The former uses OpenGL and the latter utilizes Vulkan. In both results, Pop!_OS comes out 14% ahead of Windows 10 in average framerate. The remaining 4 games are obviously native to Windows, and run on Linux thanks to Valve's Proton layer. In a very brief nutshell, this translates DirectX API calls on the fly to Vulkan (a graphics API that Linux understands). Crucially, these aren't even titles that Valve has "whitelisted" for Steam Play on Linux. The very demanding Hitman 2 and Shadow of the Tomb Raider on Pop!_OS can't quite catch Windows 10, and Shadow of the Tomb Raider did exhibit a much lower minimum framerate via Proton. But Strange Brigade sees a straight-up tie at 107 average FPS on both operating systems. 
F1 2018 via Proton does pull ahead of Windows, though, by about 6%. Takeaways Does Linux win, here? Not really, but that's not the point. There are scattered victories, but without testing literally thousands of games across hundreds of hardware configurations, we can't make that statement. And I'm fairly sure Windows would get the overall performance crown as the number of games tested increased. For now. We're seeing the open source graphics community at Valve, Red Hat, Google and elsewhere make serious strides. Linux has less overhead to begin with, and that advantage is primarily seen in productivity and compute workloads. So as the Vulkan drivers improve, Linux gaming improves. And just this month Valve published a new compiler focused strictly on gaming that improves upon the existing LLVM-based solution, boosting both average and minimum framerates on AMD Radeon hardware. (More on that here). What we're seeing today is a level of gaming performance on Linux that simply wasn't happening a year ago. Or even 6 months ago. Parity is being achieved more frequently, and while you still can't play every Windows game under the sun, the landscape looks brighter as each day rolls on. Plus, this little test was leveraging Nvidia GPUs. I suspect that once Valve's ACO compiler matures, a similar experiment on Radeon GPUs may show even stronger performance when compared to Windows. Speaking of the AMD side of things, Valve developer Pierre-Loup Griffais told me that the performance work being done in Proton and DXVK has gotten to a point where the CPU overhead of translating DX11 through DXVK can be lower than the overhead of the native AMD DX11 driver on Windows. That's impressive to begin with, and he told me that months ago... Stay tuned for more along this path. It's something we should definitely be keeping an eye on. Source
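For readers curious what Proton actually looks like on disk, it is easy to inspect; a hedged sketch assuming Steam's default install location under the home directory (paths vary between distributions and Steam packaging):

```shell
# Proton builds live alongside ordinary games in steamapps/common, and each
# Windows title run through Proton gets its own Wine prefix under
# steamapps/compatdata, keyed by its Steam app id.
steam_dir="$HOME/.steam/steam/steamapps"
found=0
for d in "$steam_dir"/common/Proton*; do
    if [ -e "$d" ]; then
        echo "Proton build: $(basename "$d")"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "no Proton builds found under $steam_dir"
ls "$steam_dir/compatdata" 2>/dev/null || echo "no Proton game prefixes found"
```

Both checks are read-only, so they are safe to run even on a machine without Steam installed.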
  18. With the recent release of Mozilla Firefox 68, there are some nice WebRender performance improvements that Linux users can enjoy. But with Firefox 69 now in beta there is even better performance, including when enabling WebRender on Linux. Given the recent Firefox 68.0 release and Firefox 69.0 being promoted to beta, I ran some fresh browser benchmarks to check the current state of Mozilla's Linux performance from the Ubuntu desktop. The official Mozilla Firefox binaries for Linux x86_64 67.0.4, 68.0, and 69.0b3 were tested on the same system in a variety of browser benchmarks. This round of testing was done on an AMD Ryzen 7 3700X with a Radeon RX 580 graphics card on Ubuntu 18.04 LTS with the Linux 5.2 kernel and Mesa 18.2. With Firefox 67/68/69, the performance was tested out-of-the-box with a clean profile and no extra plug-ins, and then again when enabling WebRender via the MOZ_WEBRENDER=1 environment variable. All of these browser benchmarks were carried out using the Phoronix Test Suite. The ARES-6 benchmark shows some improvements with the new Firefox 68 stable compared to Firefox 67, but with Firefox 69 the performance has improved even more. Since synthetic JavaScript tests like ARES-6 don't really change the layout of the web page, WebRender doesn't yield any performance difference when just exercising the JavaScript engine. Octane was one of the few benchmarks showing Firefox 69.0's performance pulling back compared to the 68/67 releases. The WebXPRT performance meanwhile was flat, along with Basemark. In JetStream the performance improved under Firefox 68.0 but pulled back to Firefox 67 levels with the current Firefox 69.0 beta. CanvasMark is where we can finally see WebRender taking off thanks to its DOM interactions. WebRender on Firefox 69 yields a 9% performance increase compared to out-of-the-box Firefox currently on Linux. 
The MotionMark benchmark is where WebRender performance has really exploded since Firefox 68, and it continued to rise with the 69.0 beta. The Speedometer browser benchmark performance only increased slightly on the newer releases. For those curious what the geometric mean looks like for averaging all of these Firefox web browser benchmarks, here is that data. Firefox 69.0 is expected to be released on 3 September and brings WebRender enhancements, disables the Adobe Flash plug-in by default, adds a password generator, and includes various other improvements for web developers. Source
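Trying WebRender yourself takes nothing more than the environment variable used in the benchmarks above; a minimal sketch (the variable name comes from the article, the rest is ordinary shell mechanics):

```shell
# Enable WebRender for any Firefox started from this shell session.
export MOZ_WEBRENDER=1

# Confirm the variable is set before launching; Firefox itself reports the
# active renderer on its about:support page.
echo "MOZ_WEBRENDER=$MOZ_WEBRENDER"

# Then start the browser as usual:
#   firefox &
```

Because it is an environment variable, the setting only applies to processes launched from that shell, which makes before/after comparisons on the same profile straightforward.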
  19. A Linux kernel developer working with Microsoft has let slip that Linux-based operating systems have a larger presence on Microsoft’s Azure cloud platform than Windows-based ones. The revelation appeared on an Openwall open-source security list in an application for Microsoft developers to join the list, and was apparently part of an evidently credible argument that Microsoft plays an active-enough role in Linux development to merit including the company in security groups. The overwhelming prevalence of Linux on Microsoft’s cloud platform may come as a surprise when viewed in isolation, but it makes complete sense from a business perspective. To start with, it’s simply cheaper to run Linux on Azure, as Microsoft’s own price calculator illustrates as clear as day. In this respect, Microsoft basically forced its own hand in terms of monetizing OS licensing into a consistent revenue stream, since Windows 10 Home is essentially free (if you don’t count the “Windows tax“) and Windows 10 Pro works out to a one-and-done revenue opportunity with many enterprise customers. The fact that Linux conforms closely (enough) to the Unix structure and philosophy also makes Linux instances easier to manage. Because Unix is so prolific, basically any system administrator will instantly be at home in the Linux file system, and the saved time and headaches translate pretty quickly into saved dollars and cents, not to mention fewer complications posed by downtime. Linux’s dominance also fits perfectly in the context of its gradual, deliberate integration into Microsoft’s long-term development and innovation vision. When Microsoft first proclaimed its love for Linux in 2014, many industry professionals, especially in the open-source sphere, were skeptical, but from that point on, Linux has been rolling steadily ahead at Microsoft. Initially, Microsoft’s embrace of Linux manifested as the Windows Subsystem for Linux, a curiosity mostly aimed at developers. 
Last year, though, the company announced Azure Sphere, a cloud-connected platform for internet of things (IoT) devices which includes Azure Sphere OS, an in-house headless Linux-based operating system. This was a masterstroke for Microsoft — even a stripped-down Windows OS is far too bloated to run on practically any IoT device, but most IoT manufacturers could benefit from a secure, off-the-shelf IoT solution to replace their own ill-conceived attempts. Azure Sphere was designed specifically to fill this void. Taken together, it's easy to see how the numerous Linux options Microsoft offers on Azure alone — to say nothing of the deeper integration Linux is getting on the Windows 10 desktop — outflank the comparatively limited options and higher cost associated with running Windows on Azure. At the rate at which the company finds new and inventive applications for Linux, this trend looks set to continue, and Microsoft seems just fine with that. Updated on July 15, 2019: Revised with additional information from Microsoft regarding Azure Sphere. Source
  20. A top life tip, there, from the Linux kernel chieftain OSLS Linus Torvalds believes the technology industry's celebration of innovation is smug, self-congratulatory, and self-serving. The term of art he used was more blunt: "The innovation the industry talks about so much is bullshit," he said. "Anybody can innovate. Don't do this big 'think different'... screw that. It's meaningless. Ninety-nine per cent of it is get the work done." In a deferential interview at the Open Source Leadership Summit in California on Wednesday, conducted by Jim Zemlin, executive director of the Linux Foundation, Torvalds discussed how he has managed the development of the Linux kernel and his attitude toward work. "All that hype is not where the real work is," said Torvalds. "The real work is in the details." Torvalds said he subscribes to the view that successful projects are 99 per cent perspiration, and one per cent innovation. As the creator and benevolent dictator of the open-source Linux kernel, not to mention the inventor of the Git distributed version control system, Torvalds has demonstrated that his approach produces results. It's difficult to overstate the impact that Linux has had on the technology industry. Linux is the dominant operating system for servers. Almost all high-performance computing runs on Linux. And the majority of mobile devices and embedded devices rely on Linux under the hood. The Linux kernel is perhaps the most successful collaborative technology project of the PC era. Kernel contributors, totaling more than 13,500 since 2005, are adding about 10,000 lines of code, removing 8,000, and modifying between 1,500 and 1,800 daily, according to Zemlin. And this has been going on – though not at the current pace – for more than two and a half decades. "We've been doing this for 25 years and one of the constant issues we've had is people stepping on each other's toes," said Torvalds. 
"So for all of that history what we've done is organize the code, organize the flow of code, [and] organize our maintainership so the pain point – which is people disagreeing about a piece of code – basically goes away." The project is structured so people can work independently, Torvalds explained. "We've been able to really modularize the code and development model so we can do a lot in parallel," he said. Technology plays an obvious role but process is at least as important, according to Torvalds. "It's a social project," said Torvalds. "It's about technology and the technology is what makes people able to agree on issues, because ... there's usually a fairly clear right and wrong." But now that Torvalds isn't personally reviewing every change as he did 20 years ago, he relies on a social network of contributors. "It's the social network and the trust," he said. "...and we have a very strong network. That's why we can have a thousand people involved in every release." The emphasis on trust explains the difficulty of becoming involved in kernel development, because people can't sign on, submit code, and disappear. "You shoot off a lot of small patches until the point where the maintainers trust you, and at that point you become more than just a guy who sends patches, you become part of the network of trust," said Torvalds. Ten years ago, Torvalds said he told other kernel contributors that he wanted to have an eight-week release schedule, instead of a release cycle that could drag on for years. The kernel developers managed to reduce their release cycle to around two and a half months. And since then, development has continued without much fuss. "It's almost boring how well our process works," Torvalds said. "All the really stressful times for me have been about process. They haven't been about code. When code doesn't work, that can actually be exciting ... Process problems are a pain in the ass. You never, ever want to have process problems ... 
That's when people start getting really angry at each other." Source
  21. Graphics driver development moves briskly, always adding some level of uplifted performance, bug fixes or crucially, support for newer graphics cards like Nvidia's RTX 2000 series. This makes using "stable" distros like Ubuntu 18.04.2 LTS (Long-Term Support) a bit of a mixed bag for certain users. Thankfully, Ubuntu has clearly been listening to community and critical feedback over the past several months and is making a sweeping change to the way it handles updating the Nvidia proprietary driver in its LTS versions. The Linux Experiment dropped the news on YouTube, reporting that Ubuntu LTS installs will now automatically include the latest proprietary Nvidia graphics driver in its standard system updates. The newest stable Nvidia driver, version 430, is already in the bionic-proposed repository for testing and should land on your Ubuntu 18.04 system soon. That's right, no need to manually add a repository or even journey into the Software & Updates settings manager. As you would expect, this change also benefits Linux users rocking LTS-backed distributions like KDE Neon, Linux Mint, Zorin OS and elementary OS. Ubuntu will also backport this driver to 16.04 "in the near future." This is nothing short of very welcome news, and it's nice to see them following the example set by System76's Pop!_OS. In fact, in what I believe is a rare shift, this may put Ubuntu ahead of Pop!_OS when it comes to the latest stable Nvidia driver, as a pull request is open to add version 430 but it still hasn't found its way to the System76-backed distro. The only oddity surrounding this announcement? Well, it's a very impactful change but was seeded through a community YouTuber (albeit an excellent one), and not via a Canonical-penned blog or press release. The company responded to this on Twitter, saying "We decided it better to share an awesome video from a member of the wider community. Ubuntu is all about community, after all." Source
  22. The launch of AMD's Ryzen 3000 series has been undeniably successful thus far, but early Zen 2 buyers have run up against two curious and vastly different bugs: not being able to play Destiny 2 on Windows 10, and not being able to boot up Linux machines using more recent kernels. Good news for both camps is incoming, as AMD just sent word that a fix is coming within the next few days. An AMD representative just provided this statement via email: "AMD has identified the root cause and implemented a BIOS fix for an issue impacting the ability to run certain Linux distributions and Destiny 2 on Ryzen 3000 processors. We have distributed an updated BIOS to our motherboard partners, and we expect consumers to have access to the new BIOS over the coming days." AMD says it was able to root cause and resolve both issues fairly quickly in its BIOS code with a patch, and the company expects motherboard vendors to distribute the patch (potentially in beta BIOS form) by next week. Earlier this week a growing number of complaints amassed from Windows gamers concerning the inability to launch Activision's Destiny 2 with various Ryzen 3000 CPUs. On the Linux side of the fence, a fairly critical bug emerged that straight up prevented a system from booting with 5.0 or newer Linux kernels. It's nice to have these both addressed and resolved within the first week of launch, and hopefully the motherboard vendors will act quickly to seed this patch to their users. Keep an eye on those BIOS updates! Source
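When the updated BIOS appears on a vendor's download page, knowing your current firmware version tells you whether you already have the fix; a minimal sketch using the standard sysfs DMI entries, which are readable without root on most x86 Linux systems:

```shell
# Print the firmware details most motherboard vendors list next to their
# BIOS downloads.
for f in bios_version bios_date board_name; do
    p="/sys/class/dmi/id/$f"
    if [ -r "$p" ]; then
        echo "$f: $(cat "$p")"
    else
        echo "$f: not available"
    fi
done
```

The same values are what `dmidecode` reports, but reading sysfs directly avoids needing root or an extra package.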
  23. The modern Linux desktop is one where "everything just works," and "you're able to use the applications that you've come to rely on in your day-to-day life," says Canonical's Will Cooke. Following Canonical's pivot away from its internally-developed Unity user interface and Mir display server, Ubuntu has enjoyed two relatively low-drama years, as the Linux desktop market homogenized during its transition back to a customized GNOME desktop. In a review of the most recent release, TechRepublic's Jack Wallen declared that "Ubuntu 19.04 should seriously impress anyone looking for a fast and reliable Linux desktop platform." Largely, it's been a slow-and-steady pace for Ubuntu since the pivot from Unity to GNOME, though the distribution made headlines for plans to end 32-bit support. This prompted Valve, operators of the games marketplace Steam, to re-think its approach toward Ubuntu, which it previously characterized as "the best-supported path for desktop users." TechRepublic's James Sanders interviewed Will Cooke, director of engineering for Ubuntu Desktop at Canonical, about the distribution's long-term plans for legacy 32-bit support, shipping a desktop in a post-Unity-era Ubuntu, and why Linux should be the first choice for users migrating from Windows 7 prior to the end of support. (This interview was lightly edited for clarity.) How support for 32-bit programs will be handled in Ubuntu In June, Canonical announced plans to stop providing new 32-bit x86 packages starting with Ubuntu 19.10, sparking a firestorm of controversy among users of WINE and the Steam games platform, among others. Following public outcry, the company announced that a subset of 32-bit x86 packages will be maintained to support legacy software. For comparison, the first x86-64 processors will be 16 years old when Ubuntu 19.10 is released. 
Fedora is likely to drop the 32-bit kernel with the release of Fedora 31, though it will continue to provide packages for application compatibility. MacOS 10.15 (Catalina), expected this fall, is dropping support for 32-bit applications outright. Eventually, the engineering time needed to prolong legacy support will tip the cost-benefit analysis into the negative, making this a difficult decision for Linux distribution maintainers. TechRepublic: Canonical—like any other company—has constraints on resources. You have to budget your development time. What's the decision-making process like for balancing legacy compatibility with maintainability over the lifespan of a release? Will Cooke: The decision-making process is the same as any other decision-making process in Ubuntu. It's led by engineers who are doing the work, and have to keep this thing going. Generally speaking, Ubuntu is a community project. It happens that Canonical is the commercial entity behind it, and we put the vast majority of the manpower behind it. But, we are built of a community of people who happen to work for Canonical, and people who don't work for Canonical but are still very interested in shaping the Ubuntu product. We congregate in mailing lists, on IRC, and increasingly now on more modern communications schemes like Discourse, for example. And that's where we raised these ideas. People are free to come along with their ideas, speak to other engineers about those ideas, and then discuss it, engineer to engineer. If a decision is reached, then we move ahead with that plan. In the case of the 32-bit stuff, this is something that we've been talking about for a long, long time. We started gathering some more information about hardware with a bunch of reports in [Ubuntu] 18.04 and that told us very clearly that—statistically—nobody is running 32-bit anymore.
So the conversation was, we could save a significant amount of time and energy if we were to not do this anymore… we had a few discussions around it, but there were no objections raised. And so that's what happened... we made the announcement and lots of people said, "I've got my specific use case—be that gaming, or legacy applications, or printer drivers—what can we do about that?" We foresaw some of these problems. The solution we had was around containerization, or packaging things as Snaps, and that—technically speaking—would have been, and still is, a very viable option. People have, for example, Steam running in that container, and they can run their games just fine. The feedback we heard from the community was that this container system is not what they wanted. So it was relatively easy for us to change our plans there, so that's what we've done. We've committed to maintain those 32-bit libraries, so that people don't have to concern themselves with containerizing their apps, or finding 64-bit equivalents. So, 32-bit will continue to work, and we will speak again about it in probably a couple of years. By then, the state of containerization will have moved on, and the plan will be—if we do go down the containerization route—then it will be entirely transparent to the user, and everything will still work. We've got some really good feedback from people about things that are important to them—Steam, legacy games, legacy software—we know the sorts of things that people are using 32-bit for now, and we can make sure that we focus our efforts on a really solid solution for those use cases. TR: How different, in engineering terms, is maintaining the plumbing to compile a subset of 32-bit packages to maintain compatibility, as opposed to packaging 32-bit binaries from Ubuntu 18.04 in Snap, for software compatibility? Cooke: Generally speaking, there's not a whole lot of difference. Either we build those 32-bit libraries, or we don't. 
They're already built on 64-bit hardware and compiled in 32-bit mode, so we don't have to maintain extra hardware going forward. The problem with 32-bit is that a lot of important security fixes… are only available for 64-bit software. It's not really about how technically difficult it is, it's that the 32-bit software doesn't get the same exposure. Nobody—statistically speaking—is running it, and a lot of the security fixes simply don't exist for those architectures. So, it's not that it's necessarily more complex or more difficult. It's that the quality is not there and can't be there.

Keeping Ubuntu's identity while shipping the GNOME desktop

TR: Ubuntu is just over two years into its transition away from the Unity desktop environment to GNOME 3. How has that transition worked, in terms of balancing GNOME 3's design choices with your requirements for Ubuntu with things like keeping desktop icons? Cooke: It's been pretty straightforward. We work with GNOME, we have people who are GNOME members who work in the GNOME community. We have a good relationship with decision makers and with engineers in GNOME. Of course, sometimes we have differences of opinion about the way that we think things should work. We're a distribution and we distribute GNOME. But we also are Ubuntu, we're a recognized brand. We want to… ensure that what we provide our users is what they want. When we did the switch to GNOME Shell from Unity, we did a survey [asking] people straightforward questions like, "What sort of features do you want to see continue in Ubuntu Desktop?" The answer came through very, very clearly that people liked having the launcher on the left, and they wanted to keep that feature there. They liked having desktop icons and they wanted to keep that feature there. We've made decisions based on data from our user base, from our community. They have provided that feedback and we've done what the majority of people want.
Sometimes that doesn't go with the ideals of GNOME design, but we're comfortable with delivering what we see as value on top of GNOME. That's delivering a product which gives people consistency between the old days of Unity 7, and the new days of GNOME Shell. That transition was as easy as possible, everybody had a chance to have a say in it, and the answers were pretty clear.

What the future holds for the Linux desktop

The first stable release of GTK4 is anticipated later this year. Naturally, future versions of the GTK-powered GNOME desktop environment will utilize this major version update. Concurrently, low-level changes are coming for multimedia handling, while Wayland is primed to replace X11 across major distributions—including Ubuntu. When fully realized, these changes will make rich media applications more performant. TR: What's the biggest thing you're looking forward to in GTK4, and how will that impact Ubuntu on the Desktop? Cooke: There's a lot of lower-level architecture changes going on, and there are things like PipeWire being developed which will give us the next generation of audio routing, which will be very exciting. I think this will give us options for professional audio production, low-latency audio, all sorts of clever routing of audio devices and handling of audio devices. When things like PulseAudio were originally designed, [these] were never foreseen. Having that sort of architectural low-level rework of significant pieces of the desktop stack is very important and it's going to be really cool. The other thing that I'm really looking forward to is the potential change in architecture such that, when the shell itself crashes, it won't take your entire session down with it.
This was a big sticking point for us, in the move to Wayland… we took the decision that we weren't going to risk having users lose work in that way, especially when they've been used to, for example, Unity 7 crashing, and then coming back with all of their applications still loaded. We wanted to maintain that feature, if you like. We fixed a lot of those bugs upstream and… generally speaking the Wayland session is extremely stable now. We are looking forward to being able to move over to Wayland as soon as we can, and I think that [the release of] GNOME 4 could be the right time to do that. TR: What release of Ubuntu would you forecast shipping Wayland as the default? Cooke: I can tell you it won't be for 20.04. We're too close to the release now. We're only one cycle away from the release. The cycle before the LTS release is a final fit-and-finish. We should be going into that cycle, which starts in October this year, with these decisions already made. So we haven't got time, in six months, to debug and fully test a change to the display server. In order to try and get it in for the next LTS—Ubuntu 22.04—we will be moving pretty quickly to get Wayland as the default again and shake the rest of the bugs out. So I think we'll see it move in 20.10, and then we'll have to see how that goes, and then we'll make a decision from there.

Why Linux is compelling for users switching from Windows 7

Support for Windows 7 is coming to an end in just half a year, though Windows 7 still holds a 36% market share. Considering the relatively high price tag associated with Microsoft's extended support subscriptions for Windows 7, many organizations—including potentially the South Korean government—are turning toward Linux in an effort to prolong the lifespan of relatively modern hardware. TR: What would you want people with not particularly old hardware who are looking at migrating away from Windows 7 to know about Ubuntu?
Cooke: I would be interested to learn what it is that they're doing with their computer, because I would hazard a guess that the majority of them are web browsing. If 90% of your computer time is spent in front of a web browser, using… Gmail or Office 365, those sorts of products, then you need to know that Linux is there for you and will allow you to do exactly the same stuff that you're doing in your web browser. You won't be plagued with continual updates and you will be protected from the web-based vulnerabilities that affect Windows. So, you need to know that Linux is a secure place, that you can get your work done in just the same way that you're currently doing it. But with all of the added protection that comes from having Linux. TR: Over the last five years, what is the biggest innovation that eased a pain point for using Linux on the desktop? Cooke: There's millions of things, really. I don't think I could put my finger on a single one. I think the summary would be that you don't have to drop down into a text editor and fiddle with config files anymore. The auto-detection that happens in Linux now—it could be from USB devices being hotplugged, it could be external monitors, it could be all of the hardware, the sound card, the network card, all of that stuff that's inside the computer. All of that now gets detected automatically. Whereas, five years ago—maybe a little bit more than five years ago—you would have a relatively new piece of hardware, and then you'd have to be compiling kernel drivers yourself, or editing code to try and work around bugs in things that didn't quite work yet. So that maturity, and the fact that Linux is now taken so seriously by the likes of Intel—which means that drivers come along very, very early in the development process for that hardware—means that the overall desktop experience these days is painless by comparison. Things do just work these days.
Then you couple that with the likes of Skype, Spotify, and Google with Chrome, for example, who have been bringing these very critical applications. Critical because that's what users want. So you combine those two things, and you've got a very powerful story—not only will you be able to install Linux onto your hardware and there's a very good chance that everything will just work, but when you do install it and it just works, you're able to use the applications that you've come to rely on in your day-to-day life. Source
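The 32-bit library commitment discussed above builds on Ubuntu's existing multiarch support, which lets i386 packages install alongside their amd64 counterparts from the same archive. A minimal sketch of how a user enables it today (assumes a Debian/Ubuntu system and root access; the library packages shown are examples):

```shell
# Register i386 as an additional (foreign) architecture,
# then refresh the package indexes so i386 packages appear.
sudo dpkg --add-architecture i386
sudo apt update

# Install 32-bit libraries alongside their 64-bit counterparts;
# the ":i386" suffix selects the foreign-architecture build.
sudo apt install libc6:i386 libstdc++6:i386

# Confirm which foreign architectures are currently enabled.
dpkg --print-foreign-architectures
```

This is the mechanism that 32-bit Steam and WINE setups rely on, and it is what Canonical has committed to keep working for the selected library subset.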
  24. Linux developers recognize Microsoft's contributions to Linux and security -- by admitting the company's Linux developers to the closed linux-distros security list. Most open-source development work, like the name says, is done in the open. The exception is the first stages of security work: unpatched security holes are discussed and fixed behind closed doors. Now, Microsoft has been admitted to the closed linux-distros list. Microsoft wanted in because, while Windows sure isn't Linux, the company is, in fact, a Linux distributor. Sasha Levin, a Microsoft Linux kernel developer, pointed out Microsoft has several distro-like builds -- which are not derivative of an existing distribution -- that are based on open-source components. These are: Azure Sphere: This Linux-based IoT platform provides, among other things, security updates to deployed IoT devices. As the project is about to step out of public preview into the GA stage, we expect millions of these devices to be publicly used. Windows Subsystem for Linux v2: A Linux-based distro that runs as a virtual machine on top of Windows hosts. WSL2 is currently available for public preview and scheduled for GA in early 2020. Products such as Azure HDInsight and the Azure Kubernetes Service provide public access to a Linux-based distribution. In addition, Levin made the case for membership because: "Microsoft has a decades-long history of addressing security issues via [the Microsoft Security Response Center] MSRC. While we are able to quickly (<1-2 hours) create a build to address disclosed security issues, we require extensive testing and validation before we make these builds public. Being members of this mailing list would provide us the additional time we need for extensive testing." All of which makes good sense.
Besides, Levin revealed in a follow-up note to the discussion: "The Linux usage on our cloud has surpassed Windows, as a by-product of that MSRC has started receiving security reports of issues with Linux code both from users and vendors. It's also the case that issues that are common for Windows and Linux (like those speculative hardware bugs)." As David A Wheeler, an open-source security expert, pointed out, the purpose of the list is to enable "everyone to coordinate so that users get fixes." That includes Linux users on Windows and Azure. So, he supported Microsoft being allowed into the private list. Greg Kroah-Hartman, the Linux stable branch kernel maintainer, supported Levin. "He is a long-time kernel developer and has been helping with the stable kernel releases for a few years now, with full write permissions to the stable kernel trees," he said. Indeed, Kroah-Hartman had "suggested that Microsoft join linux-distros a year or so ago -- when it became evident that they were becoming a Linux distro." Alexander "Solar Designer" Peslyak, security developer and founder of the open-source Openwall security website, announced Microsoft would be subscribed to the list. While some people -- almost all outside the list -- hated this idea because, in their minds, Microsoft is still The Evil Empire, Peslyak wrote that was "irrelevant per our currently specified membership criteria." Source
  25. ARMONK, N.Y. and RALEIGH, N.C. — July 9, 2019 —

  - Acquisition positions IBM as the leading hybrid cloud provider and accelerates IBM’s high-value business model, extending Red Hat’s open source innovation to a broader range of clients
  - IBM preserves Red Hat’s independence and neutrality; Red Hat will strengthen its existing partnerships to give customers freedom, choice and flexibility
  - Red Hat’s unwavering commitment to open source remains unchanged
  - Together, IBM and Red Hat will deliver a next-generation hybrid multicloud platform

IBM (NYSE:IBM) and Red Hat announced today that they have closed the transaction under which IBM acquired all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total equity value of approximately $34 billion. The acquisition redefines the cloud market for business. Red Hat’s open hybrid cloud technologies are now paired with the unmatched scale and depth of IBM’s innovation and industry expertise, and sales leadership in more than 175 countries. Together, IBM and Red Hat will accelerate innovation by offering a next-generation hybrid multicloud platform. Based on open source technologies, such as Linux and Kubernetes, the platform will allow businesses to securely deploy, run and manage data and applications on-premises and on private and multiple public clouds. "Businesses are starting the next chapter of their digital reinventions, modernizing infrastructure and moving mission-critical workloads across private clouds and multiple clouds from multiple vendors," said Ginni Rometty, IBM chairman, president and CEO. "They need open, flexible technology to manage these hybrid multicloud environments. And they need partners they can trust to manage and secure these systems. IBM and Red Hat are uniquely suited to meet these needs. As the leading hybrid cloud provider, we will help clients forge the technology foundations of their business for decades to come."
"When we talk to customers, their challenges are clear: They need to move faster and differentiate through technology. They want to build more collaborative cultures, and they need solutions that give them the flexibility to build and deploy any app or workload, anywhere," said Jim Whitehurst, president and CEO, Red Hat. "We think open source has become the de facto standard in technology because it enables these solutions. Joining forces with IBM gives Red Hat the opportunity to bring more open source innovation to an even broader range of organizations and will enable us to scale to meet the need for hybrid cloud solutions that deliver true choice and agility." Red Hat will continue to be led by Jim Whitehurst and its current management team. Whitehurst is joining IBM’s senior management team, reporting to Ginni Rometty. IBM will maintain Red Hat’s headquarters in Raleigh, North Carolina, its facilities, brands and practices. Red Hat will operate as a distinct unit within IBM and will be reported as part of IBM’s Cloud and Cognitive Software segment. Both companies have already built leading enterprise cloud businesses with consistent strong revenue growth by helping customers transition their business models to the cloud. IBM’s cloud revenue has grown from 4 percent of total revenue in 2013 to 25 percent today. This growth comes through a comprehensive range of as-a-service offerings and software, services and hardware that enable IBM to advise, build, move and manage cloud solutions across public, private and on-premises environments for customers. IBM cloud revenue for the 12-month period through the first quarter of this year grew to over $19 billion. The Red Hat acquisition is expected to contribute approximately two points of compound annual revenue growth to IBM over a five-year period. Red Hat’s fiscal year 2019 revenue was $3.4 billion, up 15 percent year-over-year. 
Fiscal first quarter 2020 revenue, reported in June, was $934 million, up 15 percent year-over-year. In that quarter, subscription revenue was up 15 percent year-over-year, including revenue from application development-related and other emerging technology offerings, up 24 percent year-over-year. Services revenue also grew 17 percent.

The Hybrid Cloud Opportunity

Digital reinvention is at an inflection point as businesses enter the next chapter of their cloud journey. Most enterprises today are approximately 20 percent into their transition to the cloud. In this first chapter of their cloud journey, businesses made great strides in reducing costs, boosting productivity and revitalizing their customer-facing innovation programs. Chapter two, however, is about shifting mission-critical workloads to the cloud and optimizing everything from supply chains to core banking systems. To succeed in the next chapter of the cloud, businesses need to manage their entire IT infrastructure, on and off-premises and across different clouds — private and public — in a way that is simple, consistent and integrated. Businesses are seeking one common environment they can build once and deploy in any one of the appropriate footprints to be faster and more agile. IBM’s offerings have evolved to reflect new customer needs and drive greater growth. The acquisition of Red Hat further strengthens IBM as the leader in hybrid cloud for the enterprise. "As organizations seek to increase their pace of innovation to stay competitive, they are looking to open source and a distributed cloud environment to enable a new wave of digital innovation that wasn’t possible before. Over the next five years, IDC expects enterprises to invest heavily in their journeys to the cloud, and innovation on it.
A large and increasing portion of this investment will be on open hybrid and multicloud environments that enable them to move apps, data and workloads across different environments," said Frank Gens, Senior Vice President and Chief Analyst, IDC. "With the acquisition of Red Hat, and IBM’s commitment to Red Hat’s independence, IBM is well positioned to help enterprises differentiate themselves in their industry by capitalizing on open source in this emerging hybrid and multicloud world." The collective ability of IBM and Red Hat to unlock the true value of hybrid cloud for businesses is already resonating among customers moving to the next chapter of digital reinvention. "Delta is constantly exploring current and emerging technology as we transform the air travel experience," said Ed Bastian, Delta CEO. "We’ve been working with both IBM and Red Hat for years to deliver on that goal, and as they together build the next generation IT company, they will be an essential part of our digital transformation." "As a long-standing partner of Red Hat and IBM, we look forward to the capabilities that these two companies will bring together," said Michael Poser, Managing Director and Chief Information Officer, Enterprise Technology & Services, Morgan Stanley. "We know first-hand how important and impactful cloud technology is in unlocking business value."

IBM Reinforces Commitment to Open Source and Red Hat Neutrality

IBM and Red Hat have deep open source values and experience. The two companies have worked together for more than 20 years to make open source the default choice for modern IT solutions. This includes the importance of open governance and helping open source projects and communities flourish through continued contribution. With Red Hat, IBM has acquired one of the most important software companies in the IT industry.
Red Hat’s pioneering business model helped bring open source – including technologies like Linux, Kubernetes, Ansible, Java, Ceph and many more – into the mainstream for enterprises. Today, Linux is the most used platform for development. Red Hat Enterprise Linux alone is expected to contribute to more than $10 trillion worth of global business revenues in 2019. By 2023, an additional 640,000 people are expected to work in Red Hat-related jobs. IBM has committed to scaling and accelerating open source and hybrid cloud for businesses across industries, as well as preserving the independence and neutrality of Red Hat’s open source heritage. This includes its open source community leadership, contributions and development model; product portfolio, services, and go-to-market strategy; robust developer and partner ecosystems, and unique culture. Red Hat’s mission and unwavering commitment to open source will remain unchanged, and Red Hat will continue to offer the choice and flexibility inherent to open source and hybrid IT environments. Red Hat also will continue to build and expand its partnerships, including those with major cloud providers, such as Amazon Web Services, Microsoft Azure, Google Cloud and Alibaba. IBM and Red Hat also share a strong commitment to social responsibility and a sense of purpose for applying technology and expertise to help address some of the world’s most significant societal challenges. Together, the two companies have committed to expanding this longstanding commitment through new joint initiatives, addressing education and skills, civic and societal needs and Science, Technology, Engineering, and Math (STEM) workforce development. For more information visit: https://ibm.com/blogs/corporate-social-responsibility/2019/07/be-open-and-change-the-world/ For more information on today’s news, visit: https://newsroom.ibm.com/ and https://www.ibm.com/redhat About IBM For more information about IBM, visit https://www.ibm.com. Source