Search the Community

Showing results for tags 'gpu'.
Found 21 results

  1. AMD has sold more GPUs than Nvidia, according to this analyst report

     Team Green is falling behind, though not by much.

     Team red is on fire. It seems AMD's winning streak won't end anytime soon. After leaked figures from Mindfactory revealed that AMD's Ryzen CPU sales are trouncing Intel's, the latest report from Jon Peddie Research shows that the Santa Clara company is winning in the GPU market as well.

     According to JPR's Market Watch Q4 2019 report, AMD saw a 22.6% increase in overall GPU shipments in Q4 2019. AMD now holds 19% of the GPU market, a three-point increase from Q3, while rivals Nvidia and Intel saw 0.97% and 2% drops respectively. That leaves Nvidia with only an 18% share, putting AMD in the lead between the two.

     That said, Intel still dominates the market with its integrated and discrete GPUs, taking 63% of the market share in Q4. And Nvidia is still king of the discrete GPU game, taking 73% of discrete GPU shipments in 2019 over AMD's 27%. However, the fact that AMD's GPU sales are steadily going up is still great news for the company. AMD's shipments of discrete graphics in particular progressed to 27% of the market total, up from 26% in 2018 and 24% in Q3 2019. With the highly anticipated "Nvidia killer" Radeon RX 5950 XT just around the corner, those numbers are likely to go higher in 2020. Of course, it's also entirely possible that Intel's promising Xe discrete graphics will only perpetuate Team Blue's dominance, especially in the laptop market.

     Good news for the GPU market in general

     It's not just AMD that's enjoying the fruits of its labor, however. According to Market Watch, overall GPU shipments increased 3.4% from Q3 2019. The overall attach rate of GPUs to PCs was up 1.8%, and the number of desktop graphics add-in boards (AIBs) that use discrete GPUs saw a 12.17% increase in Q4. Considering that GPU shipments have historically been flat in the fourth quarter, this is excellent news for the graphics card industry. JPR president Jon Peddie notes that this is "the third consecutive quarter of increased GPU shipments."

     It's not all good news, though. With the coronavirus epidemic crippling many of China's factories and thus interrupting the supply chain, Q1 2020 "may show an unusual dip," says Peddie. However, with "Intel's entry into the discrete GPU market and a possible fourth entry by an IP company," 2020 is still going to be an exciting year in the graphics card game.

     Source: AMD has sold more GPUs than Nvidia, according to this analyst report (TechRadar)
  2. Nvidia's next Tesla GPUs could be 75% faster – good news for future GeForce cards

     Big Red 200 supercomputer upgrade hints at equally big things for consumer GPUs.

     Big Red 200, a new supercomputer at Indiana University, is now up and running, and later this year it will be upgraded to make use of Nvidia's next-gen GPUs, which could be up to 75% faster than current Tesla graphics solutions.

     This is according to a report from The Next Platform, which spoke to Brad Wheeler, VP for IT and CIO at Indiana University, airing the claim that Nvidia's next-gen Tesla graphics solutions – to be deployed as a 'phase two' upgrade for Big Red 200 in the summer – will be around 70% to 75% faster than current offerings.

     That's a huge leap in performance, of course, and while you might think it's not particularly relevant to the average PC user – these being heavyweight GPUs in a massive supercomputer – remember that the technology Nvidia uses here could trickle down to its consumer GeForce offerings. An (up to) 75% performance increase in Tesla also lends more credence to the (admittedly fairly wild) previous rumor that Nvidia's next-gen GeForce graphics cards could benefit from a 50% performance uplift (albeit potentially only in ray tracing scenarios – this is all up-in-the-air theorizing, of course).

     Speculation has it that Nvidia's next-gen Tesla GPUs might be unveiled at the firm's GPU Technology Conference in March (this isn't the first time we've heard that Ampere graphics cards will be revealed at GTC in San Jose – although other corners of the rumor mill seem to believe this could mean a consumer GeForce card rather than a data center offering). An unveiling at GTC in March would be ahead of a summer launch for the new heavyweight cards, which would line up with the proposed Big Red 200 upgrade time frame. As ever, we have to treat any speculation with a great deal of caution, but this represents a potentially exciting glimpse of how powerful Nvidia's next-gen graphics tech could be in heavyweight computing – hinting at similar things for consumer GPUs.

     Epyc beast

     Big Red 200 is a Cray Shasta supercomputer, and it launched with 672 dual-socket nodes carrying AMD's Epyc 7742 (2nd-gen server) 64-core processors. In the phase two upgrade, further Epyc chips will be added to the machine, along with the aforementioned next-gen Tesla GPUs. The university decided on this two-stage deployment when it discovered that by waiting a bit longer, it could benefit from Nvidia's next-gen products rather than going with the Nvidia V100 GPUs originally planned. With those V100 cards, Big Red 200 would have been capable of peak performance on the order of 5.9 petaflops; using the newer GPUs, the supercomputer should instead see performance of up to 8 petaflops (a quick comparison of those two figures follows below).

     Source: Nvidia's next Tesla GPUs could be 75% faster – good news for future GeForce cards (TechRadar)
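     A note on those two system-level figures: the jump from 5.9 to 8 petaflops is about 36%, well short of the quoted 70-75% per-GPU uplift – consistent with the GPUs being only one part of the system's peak and the two plans possibly differing in GPU count. A quick sanity check on the arithmetic (our illustration, not from the article):

     ```python
     # Sanity check on the quoted Big Red 200 figures: peak petaflops with
     # the originally planned V100s vs. the next-gen GPUs. This is the
     # system-level gain only; the 70-75% per-GPU uplift is a separate claim.
     v100_plan_pf = 5.9
     next_gen_pf = 8.0
     gain = next_gen_pf / v100_plan_pf - 1
     print(f"System-level peak gain: {gain:.1%}")  # ~35.6%
     ```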
  3. AMD confirms 'Nvidia killer' graphics card will be out in 2020

     Big Navi could show up sooner rather than later this year.

     AMD's chief executive has confirmed that a high-end Navi graphics card will be released this year. In a video interview entitled 'The Bring Up' posted on YouTube by AMD, Lisa Su noted that people were wondering about Big Navi – said high-end GPU, which has previously been referred to as the 'Nvidia killer' in terms of how it will take on the top-end RTX cards. The CEO then said: "I can say you're going to see Big Navi in 2020."

     This is the first concrete confirmation that AMD will definitely be unleashing its big graphics firepower this year, although rumors have long pointed that way, as did comments Su made in a recent roundtable Q&A session at CES 2020. At CES, the CEO stressed how important a top-end GPU was to AMD, saying that "you should expect that we will have a high-end Navi, although I don't usually comment on unannounced products". The hint was certainly that this GPU would arrive in 2020, but she didn't actually say so. At least now we have confirmation, even if it isn't a surprise to anyone who's been following AMD's rumored progress in the graphics card arena lately.

     Battle of the flagships

     There has been no shortage of speculation around all this, including that the high-end graphics card could be 30% faster than Nvidia's RTX 2080 Ti (if the unknown GPU in that leak is indeed Big Navi, and that's a fairly sizeable if). Of course, AMD needs to move quickly enough with the release to make sure it isn't competing against the RTX 3080 Ti (which the rumor mill reckons might be up to 50% faster than its Turing predecessor – although that might be just with ray tracing). Nvidia's next-gen Ampere GPUs are expected to launch in the latter half of 2020, in case you were wondering.

     Another potential sign that we might see the high-end Navi graphics cards sooner rather than later is that an EEC filing has just been made for the Radeon RX 5950XT. A GPU with the same name was filed previously (back in June 2019), indicating that the 5950XT could be the flagship model for 2020. As ever, we need to take such speculation with a good degree of caution.

     Source: AMD confirms 'Nvidia killer' graphics card will be out in 2020 (TechRadar)
  4. Leak shows Intel's DG1 Xe discrete GPU dev kits may be ready to be sampled soon

     At CES 2020 a couple of days ago, Intel demoed its upcoming 10nm+ Tiger Lake CPUs and also teased its Xe DG1 discrete graphics running Destiny 2. With the chip up and running, Intel appears ready to ship development kits of its Xe GPU, according to a leaked press deck. The company has named the kit a 'Software Development Vehicle' (SDV), and these will be sampled to independent software vendors (ISVs) worldwide.

     The design of the SDV is aesthetically pleasing, with stylish grooves on the top and an Xe-branded backplate at the bottom. It is a single-fan card with no apparent external power connector, hinting at low power requirements for this particular design.

     It's been known since Supercomputing 2019 that Intel plans to scale its Xe architecture across the entire spectrum of the graphics market, from high-end HPC needs down to low-power (LP) mobile use cases, starting with Tiger Lake. Intel has reiterated that plan, adding a nomenclature that denotes each tier of performance.

     To sum up, Intel's plans for its Xe architecture are grand as the company looks to take on two behemoths in the GPU market. Time will tell how Intel measures up.

     via Videocardz

     Source: Leak shows Intel's DG1 Xe discrete GPU dev kits may be ready to be sampled soon (Neowin)
  5. Windows 10 20H1 will allow users to monitor GPU temperature with ease

     After the release of the Windows 10 November Update last month, Microsoft went back to work on 20H1, which is scheduled for release in the spring of 2020. Since Microsoft made it clear that the November Update would be an incremental one, Windows 10 users have high hopes for 20H1, as it's slated to bring a plethora of new features. While we expect some major feature additions to the OS, Microsoft is also working on small but important changes. One of those is the ability to monitor GPU temperatures with ease.

     While Windows 10 shows detailed GPU usage, it doesn't show the actual temperature, so you need third-party apps to monitor it. However, with the release of Windows 10 Build 18963, Microsoft has added a GPU temperature monitor to the Task Manager. The feature is targeted at people using dedicated graphics cards, which tend to heat up a lot more than integrated GPUs from Intel and AMD.

     Microsoft released Build 18963 quite a while back, and the feature has been working fine on the Windows 10 Insider Preview, so it should be included in the final 20H1 update. We tested the feature on one of our test machines and, unfortunately, it only records the current temperature. If you're after historical data, you will have to rely on third-party tools (or a small script like the sketch below). That said, if you just want to check the temperature at regular intervals, you will soon be able to do so from the Task Manager.

     Source: Windows 10 20H1 will allow users to monitor GPU temperature with ease (MSPoweruser)
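     For Nvidia cards, the history gap is easy to plug yourself. Here's a minimal Python sketch (our illustration, not from the article) that polls the nvidia-smi command-line tool, which ships with Nvidia's drivers; it assumes nvidia-smi is on your PATH and won't work for AMD or Intel GPUs:

     ```python
     # Poll GPU temperature at intervals and keep a history -- something the
     # Task Manager readout doesn't do. Assumes an Nvidia GPU with the
     # nvidia-smi CLI available on PATH.
     import subprocess
     import time

     def gpu_temperature_c() -> int:
         out = subprocess.run(
             ["nvidia-smi", "--query-gpu=temperature.gpu",
              "--format=csv,noheader,nounits"],
             capture_output=True, text=True, check=True,
         )
         return int(out.stdout.strip().splitlines()[0])  # first GPU only

     history = []
     for _ in range(5):  # five samples, 10 seconds apart
         history.append((time.time(), gpu_temperature_c()))
         time.sleep(10)

     print(history)
     ```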
  6. AMD launches Radeon Pro W5700, the first 7nm GPU for workstations

     Today, in addition to launching the Athlon 3000G processor, AMD has announced the world's first 7nm GPU for workstations, the Radeon Pro W5700. This new GPU is the first in the Radeon Pro W5000 series, and it's based on the company's new RDNA architecture, which promises up to 25% more performance per clock than the previous GCN architecture.

     The Radeon Pro W5700 also promises up to 41% more performance per watt than the GCN-based Radeon Pro WX 8200, and AMD claims it is 18% more power-efficient than Nvidia's Quadro RTX 4000. AMD also boasts better multitasking when the CPU is under load, promising up to 5.6 times the workflow performance of the same Nvidia card. The Radeon Pro W5700 is also the first workstation GPU to support PCIe 4.0 for additional bandwidth, and it comes with 8GB of GDDR6 memory. Additionally, it's the first GPU of its kind with a USB Type-C port, to support the growing number of monitors that use it for video input.

     Here's a quick rundown of the specs (sanity-checked in the sketch below):

     GPU: Radeon Pro W5700
     Compute units: 36
     TFLOPS: up to 8.89
     Memory (bandwidth): 8GB GDDR6 (448GB/s)
     Memory interface: 256-bit
     Display outputs: 6

     The AMD Radeon Pro W5700 is available today in North America, EMEA, and Asia Pacific, starting at $799.

     Source: AMD launches Radeon Pro W5700, the first 7nm GPU for workstations (Neowin)
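     As a sanity check on those specs, peak FP32 throughput is 2 FLOPs (one fused multiply-add) per shader per cycle. Assuming the usual 64 stream processors per RDNA compute unit and a peak clock of about 1.93GHz – our assumptions, not figures from the article – the quoted TFLOPS number falls right out:

     ```python
     # Back-of-the-envelope check of the spec rundown above. The 64
     # shaders-per-CU figure is standard for RDNA; the ~1.93 GHz peak
     # clock is an assumption, not stated in the article.
     compute_units = 36
     shaders = compute_units * 64                   # 2304 stream processors
     peak_clock_ghz = 1.93
     tflops = 2 * shaders * peak_clock_ghz / 1000   # 2 FLOPs/shader/cycle (FMA)
     print(f"{tflops:.2f} TFLOPS")                  # ~8.89, matching the spec
     ```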
  7. Over the weekend, Nvidia announced that it has trained an AI – which you can test out yourself – to put the expression of one animal realistically onto the face of another. With Nvidia's new GANimal app, you can put the smile of your pooch onto the face of a lion, tiger, or bear. In fact, this app can recreate the expression of any animal on the face of any other creature.

     To accomplish this, the technology company trained an AI using generative adversarial networks, "an emerging AI technique that pits one neural network against another". The network can translate the image onto a slew of target animals – even those it has never seen before. Instead of having to feed the network several images of your dog, it can perform this task with just one input image, which makes the process simpler than ever. Users can try it themselves and put their pet's expression on animals like an Arctic fox, American black bear or lynx.

     According to the company, this type of technology could potentially be used in filmmaking, not only to alter animals' expressions but also to map their movements and recreate them on leopards or jaguars. The GANimal app is the most recent step in lead computer-vision researcher Ming-yu Liu's goal to "code human-like imagination into neural networks". The tool uses the same kind of network as GauGAN, a technology that turned simple doodles into photorealistic landscapes, which users can also try out for themselves.

     Source: Nvidia trained AI to put your pup's smile on a lion (via The Star Online)
  8. Deep learning and its applications have grown in recent years; recently, researchers from ETH Zurich used the technique to study dark matter in a first for the field. Now, a team working with the University of California, Berkeley and the University of California, San Francisco (UCSF) School of Medicine has trained a convolutional neural network dubbed "PatchFCN" that detects brain hemorrhages with remarkable accuracy.

     In a paper titled "Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning", the team reports an accuracy of 99 percent – the highest recorded to date for detecting brain hemorrhages. In some cases, the neural network's performance eclipsed even that of seasoned radiologists.

     PatchFCN was trained on a dataset of more than 4,000 CT scans from UCSF-affiliated hospitals using Nvidia V100 Tensor Core GPUs and Amazon Web Services. The training and analysis were done in a novel way: the team divided the CT scans into segments, each of which was analyzed by the model separately, and then experimented with the segment size to maximize the model's accuracy (the sketch below illustrates the idea). Furthermore, according to the researchers, their trained model can analyze each picture within seconds. After analysis, the model not only passes a verdict on the existence of a brain hemorrhage but also provides a detailed tracing and measurement of each one. In a hospital, this can be a vital asset: the team believes PatchFCN will improve throughput and take pressure off radiologists, improving their efficiency and productivity.

     For more information and the specifics of the study, you can refer to the published paper.

     Sources:
     1. Neural network system has achieved remarkable accuracy in detecting brain hemorrhages (via Neowin) – main article
     2. Deep Learning Detects Brain Hemorrhages with Accuracy Rivaling Experts (via Nvidia Blog) – supporting reference
     3. Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning (via PNAS) – academic research paper
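     To make the patch-based approach concrete, here is an illustrative Python sketch (our own, not the authors' code): split a CT slice into fixed-size patches, score each patch with a model, and stitch the per-patch scores into a coarse heatmap. The patch size and the stand-in model are hypothetical.

     ```python
     # Illustrative patch-based scoring, not the PatchFCN implementation.
     import numpy as np

     PATCH = 64  # assumed patch size in pixels

     def stub_model(patch: np.ndarray) -> float:
         """Stand-in for a trained network; returns a hemorrhage score."""
         return float(patch.mean())  # placeholder logic only

     def score_slice(ct_slice: np.ndarray) -> np.ndarray:
         h, w = ct_slice.shape
         heatmap = np.zeros((h // PATCH, w // PATCH))
         for i in range(h // PATCH):
             for j in range(w // PATCH):
                 tile = ct_slice[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH]
                 heatmap[i, j] = stub_model(tile)
         return heatmap

     # A 512x512 slice yields an 8x8 grid of patch scores.
     print(score_slice(np.random.rand(512, 512)).shape)
     ```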
  9. Today, at the 5G Mobile World Conference, Nvidia co-founder and CEO Jensen Huang announced Nvidia Jarvis, a multi-modal AI software development kit that combines speech, vision, and other sensors in one AI system.

     Jarvis is the company's attempt to process inputs from multiple sensors simultaneously. The wisdom behind this approach is that it helps build context for accurately predicting and generating responses in conversation-based AI applications, and Nvidia's blog post gives examples of situations where this helps.

     Jarvis includes modules that can be tweaked to the user's requirements. For vision, there are modules for person detection and tracking, and for detecting gestures, lip activity, gaze, and body pose. For speech, the system offers sentiment analysis, dialog modeling, domain and intent classification, and entity classification. Fusion algorithms synchronize these models into one system (a generic illustration of the idea follows below). Moreover, the firm says Jarvis-based applications work best in conjunction with Nvidia Neural Modules (NeMo), a framework-agnostic toolkit for creating AI applications built around neural modules.

     For cloud-based applications, services developed using Jarvis can be deployed on the EGX platform, which Nvidia is touting as the world's first edge supercomputer. For edge and Internet of Things use cases, Jarvis runs on the Nvidia EGX stack, which is compatible with a large swath of the Kubernetes infrastructure available today. Jarvis is now open for early access; if you are interested, you can log in to your Nvidia account and sign up.

     Source: Nvidia Jarvis—a multi-modal AI SDK—fuses speech, vision, and other sensors into one system (via Neowin)
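     To give a flavor of what "fusing" modalities means, here is a deliberately generic Python sketch – emphatically not the Jarvis API, whose interfaces the article doesn't detail – that aligns vision events with utterances by timestamp so a dialog system can build context:

     ```python
     # Hypothetical multi-modal fusion by timestamp. This is NOT Nvidia
     # Jarvis code; it only illustrates the general concept of combining
     # vision and speech streams to add context to a conversation.
     from dataclasses import dataclass

     @dataclass
     class Event:
         t: float      # seconds since start
         kind: str     # "lip_activity", "gaze", "utterance", ...
         payload: str

     def fuse(vision, speech, window=0.5):
         """Pair each utterance with vision events within `window` seconds."""
         return [
             (u, [v for v in vision if abs(v.t - u.t) <= window])
             for u in speech
         ]

     vision = [Event(1.0, "lip_activity", "speaker_A"),
               Event(1.1, "gaze", "toward_camera")]
     speech = [Event(1.2, "utterance", "what's the weather?")]
     print(fuse(vision, speech))  # utterance paired with nearby vision cues
     ```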
  10. The AchieVer

    AMD Has a New Very Fancy GPU

     Photo: Alex Cranz (Gizmodo)

     Usually, the biggest announcements at CES are over by the end of the first day, but during its second-day keynote AMD CEO Lisa Su announced a new GPU, the AMD Radeon VII. According to Su, it is the very first 7nm graphics card available to consumers. While Nvidia is leaning on the eye candy of ray tracing, AMD is banking on the hype of a GPU with a smaller die process.

     The last-generation Vega GPU was based on a 14nm process; this is half that size, and a smaller process almost always means an increase in performance, usually while maintaining or improving power efficiency. AMD is bragging about its process because die sizes have been in the news a lot lately, with Intel promising a 10nm CPU (and repeatedly failing to deliver) and Apple crowing about its 7nm processor for what felt like half of the iPhone XS keynote. Nvidia's latest GPUs, the RTX 20-series, are based on a 12nm process. So theoretically the AMD GPU could be faster in games (provided you can do without ray tracing), but GPU performance is also heavily informed by the software it operates with, and AMD's software has frequently lagged behind Nvidia's. Which is why Su took time to talk up AMD's investment in better software.

     She also talked up the Radeon VII's memory, presumably to better set it apart from Nvidia. The Radeon VII will come with 16GB of second-generation high-bandwidth memory (HBM2) with a claimed bandwidth of 1TB/s. Nvidia's 2080 has 8GB of GDDR6 memory with a bandwidth of approximately 448GB per second – half the memory at less than half the potential speed – and the previous-generation AMD GPU, the RX Vega 64, had 8GB of HBM2 with a bandwidth of 483.8GB a second. (A rough check of that 1TB/s claim follows below.)

     What does that all actually mean? It means, according to Su, better performance at the same power draw as the previous top-of-the-line Vega GPU (she made no mention of the 2080). She cited about 25 percent improved performance on average, claiming the Radeon VII saw 35 percent better performance in Battlefield V at 4K on the highest settings, and a 25 percent improvement in Fortnite. She said the gains extend to non-gaming applications as well, with approximately 30 percent better performance in apps like Photoshop and Blender, and a whopping 62 percent improvement across other OpenCL apps.

     Notably absent from the Radeon VII announcement is any mention of ray tracing, the slick feature Nvidia is touting in its GPUs. The Radeon VII will cost $700 when it's available February 7. That's at least $300 less than Nvidia's top GPU, though twice the price of Nvidia's cheapest ray-tracing card, the just-announced RTX 2060. Can it possibly be worth it? We'll know more when we review our own Radeon VII card in the coming weeks.

     Source
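     For context on that 1TB/s figure: HBM2 bandwidth scales with total bus width and per-pin data rate. Assuming the Radeon VII's widely reported four-stack, 4,096-bit configuration at 2.0 Gbps per pin – figures not stated in the article itself – the claimed number falls right out:

     ```python
     # Rough check of the claimed 1TB/s, assuming four HBM2 stacks with a
     # 1024-bit bus each (4096 bits total) at 2.0 Gbps per pin. These are
     # widely reported Radeon VII specs, not figures from the article.
     bus_width_bits = 4 * 1024
     pin_speed_gbps = 2.0
     bandwidth_gbs = bus_width_bits * pin_speed_gbps / 8
     print(f"{bandwidth_gbs:.0f} GB/s")  # 1024 GB/s, i.e. ~1 TB/s
     ```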
  11. Intel looking to tackle Ryzen 3 with cheaper, GPU-less chips?

     With the launch of AMD's hotly anticipated Ryzen 3rd Generation processors just around the corner – the new CPUs are expected to be officially unveiled this Wednesday, January 9, at AMD's CES 2019 conference – Intel has today used its CES event to announce that it's adding six more 9th-gen Core processors, ranging from Core i3 to Core i9, set to release soon.

     The new processors join the company's three existing 'flagship' 9th-generation desktop chips, which launched in October last year – the Core i5-9600K, i7-9700K and i9-9900K – as well as the 9th-generation X-series for HEDT systems. Intel didn't officially announce full details of the new processors, but we've been able to dig up information on all six of them via some URL experimentation in Intel's ARK product database:

     Intel Core i3-9350KF: 4 cores, 4 threads, no integrated graphics, clocked at 4.0GHz to 4.6GHz
     Intel Core i5-9400: 6 cores, 6 threads, Intel UHD Graphics 630, clocked at 2.9GHz to 4.1GHz
     Intel Core i5-9400F: 6 cores, 6 threads, no integrated graphics, clocked at 2.9GHz to 4.1GHz
     Intel Core i5-9600KF: 6 cores, 6 threads, no integrated graphics, clocked at 3.7GHz to 4.6GHz
     Intel Core i7-9700KF: 8 cores, 8 threads, no integrated graphics, clocked at 3.6GHz to 4.9GHz
     Intel Core i9-9900KF: 8 cores, 16 threads, no integrated graphics, clocked at 3.6GHz to 5.0GHz

     What's perhaps most intriguing is that five of the six new chips appear to be part of a brand-new F-series of processors, which have removed (or, most likely, disabled) the integrated graphics that almost every mainstream Intel processor now includes. That may be an attempt to reduce costs (as it will allow the chipmaker to sell CPUs with non-functional GPUs), but it will likely also mean these processors run cooler and use less power – and they could be better for overclocking as a result. Somewhat surprisingly, Intel didn't expressly mention the new F-series at its press conference.

     If these new GPU-less processors do come at a reduced price, they may also be aimed at tackling AMD's Ryzen family of processors, which largely offer better bang for the buck than their Intel equivalents. With many mid-range and higher-end PCs still coming equipped with a dedicated graphics card, Intel's integrated graphics often go to waste, so offering a range of cheaper, GPU-less processors may help Intel win back some of the value-oriented market segment it has recently been losing to AMD.

     Source
  12. The teasing starts early, as the GPU arrives in 2020

     Intel has now officially started to tease its upcoming dedicated GPU, codenamed Arctic Sound, which is now confirmed for 2020. The teaser doesn't come as a surprise considering that Chris Hook, an ex-AMD marketing veteran, is now at Intel, setting its marketing machinery in motion. The video teaser is also the first piece of information published on the new Intel Graphics Twitter account, and we suspect it won't be the last.

     In a response to Charlie Demerjian, editor-in-chief of SemiAccurate – a site that has been a thorn in the side of many companies, especially Intel – Hook noted that it "will take time and effort to be the first successful entrant into the dGPU segment in 25 years, but we have some incredible talent at Intel, and above all, a passion for discrete graphics".

     In case you somehow missed it, Hook was not the only one switching to the blue camp: Intel also managed to snatch Raja Koduri, formerly senior VP and chief architect at AMD, who is behind Intel's future push into the graphics market. Earlier rumors suggested that Intel is working on some sort of scalable GPU able to cover the desktop, mobile, and even workstation markets. The video confirms that we won't see it before 2020, but competition is always good for the consumer, so we are looking forward to it.

     View: Original Article
  13. Intel has been rumored to be working on a discrete graphics card for some time now, and we may well see it announced at CES 2019. At least, that's the word on the Internet according to Anthony Garreffa from TweakTown. Citing an industry source within Intel, Garreffa alleges that Intel's GPU team has "reached the end of this first step, and are now preparing for the big [GPU] launch." Supposedly, after hiring Athlon and Ryzen CPU architect Jim Keller, Intel is now pushing full steam ahead with its next-generation graphics core.

     Graphic ambitions

     Previously, Intel poached AMD's Radeon Technologies Group leader, Raja Koduri, and its head of global product marketing, Chris Hook. On top of this, we previously reported that Intel is building a graphics-focused team of at least 102 personnel. With all of that in mind, it seems certain that Intel is working on something big in the GPU world. Of course, we'll have to take this rumor with a grain of salt – especially since TweakTown's last source claimed that Nvidia would launch a new GPU in late March at the company's GPU Technology Conference. Obviously, that didn't happen.

     View: Original Article.
  14. A tiny application that checks for NVIDIA GeForce GPU driver updates, written in C# for Windows. Visit the wiki for more information about the application! The concept is simple: when launched, it checks for new driver updates for your NVIDIA GPU, so you no longer need to waste time searching for whether there's anything new to get (the sketch below illustrates the basic idea).

     Homepage | Download | Changelog
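     The core check is easy to sketch in a few lines. Here's a rough Python illustration (not the app's actual implementation, which fetches the real latest version from Nvidia's servers); the "latest" version below is hard-coded purely for the demo:

     ```python
     # Rough sketch of a driver-update check. Not TinyNvidiaUpdateChecker's
     # logic: the app queries Nvidia's site for the latest version, while
     # this demo hard-codes a hypothetical value.
     import subprocess

     def installed_driver_version() -> str:
         out = subprocess.run(
             ["nvidia-smi", "--query-gpu=driver_version",
              "--format=csv,noheader"],
             capture_output=True, text=True, check=True,
         )
         return out.stdout.strip().splitlines()[0]

     latest = "442.19"  # hypothetical latest version for illustration
     current = installed_driver_version()
     if tuple(map(int, current.split("."))) < tuple(map(int, latest.split("."))):
         print(f"Update available: {current} -> {latest}")
     else:
         print(f"Driver {current} is up to date")
     ```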
  15. Back in November, we heard that SUMCO, one of the largest silicon wafer producers in the world, was planning to increase prices by 20 percent this year, with another increase planned for 2019. Now it looks like other silicon wafer manufacturers are joining in, with Taiwan's GlobalWafers confirming that its prices will also rise by 20 percent throughout this year.

     SUMCO is a Japan-based company responsible for a large share of the world's silicon wafer supply. Faced with a 20 percent price increase, CPU, GPU, DRAM and flash makers might have taken their business elsewhere, but it looks like they won't necessarily have that opportunity. This week, GlobalWafers chairwoman Doris Hsu informed shareholders that the company would raise silicon wafer prices by 20 percent this year. Apparently, the biggest reason for the increase is a shortage of 12-inch (300mm) wafers, which are traditionally used to build processors, graphics chips and RAM. Given that GPU prices have already risen to ridiculous heights due to crypto-mining, this is an additional blow to DIY PC builders.

     Last year, when SUMCO revealed its own price hikes, the company estimated that global wafer demand would rise to 6.6 million wafers per month by 2020. Hopefully we'll see production increase by then, but for the time being it looks like we'll be stuck with price hikes as demand outstrips supply.

     View: Original Article
  16. npo33770

    Tiny Nvidia Update Checker 1.8.0

     Source: PC Gamer
     Site info: https://github.com/ElPumpo/TinyNvidiaUpdateChecker
     Download: https://github.com/ElPumpo/TinyNvidiaUpdateChecker/releases
  17. Green team reckons that MCM architecture can counteract the Moore's Law slowdown.

     Researchers from Arizona State University, Nvidia, the University of Texas, and the Barcelona Supercomputing Centre have published a paper (PDF) that looks at improving GPU performance using Multi-Chip-Module (MCM) GPUs. The team sees MCM GPUs as one way to sidestep the deceleration of Moore's Law and the performance plateau predicted for single monolithic GPUs. Transistor scaling cannot continue at historical rates, and chipmakers are staying with particular manufacturing processes longer while optimising performance in other ways.

     As "the performance curve of single monolithic GPUs will ultimately plateau," the researchers are looking at how to build better-performing GPUs from package-level integration of multiple GPU modules. They propose that easily manufacturable basic GPU Modules (GPMs) be integrated on a package "using high bandwidth and power efficient signalling technologies" to create multi-chip-module GPU designs. To see whether the proposal can bear fruit, the research team evaluated designs using Nvidia's in-house GPU simulator, and also made theoretical performance comparisons against multi-GPU solutions.

     MCM GPUs could do wonders for increasing the SM count, and many GPU applications "scale very well with increasing number of SMs," the researchers observe. The paper examines the possibilities of a 256-SM MCM-GPU, and the team is pleased by its potential. Using the simpler GPM building blocks and advanced interconnects, this 256-SM chip "achieves 45.5% speedup over the largest possible monolithic GPU with 128 SMs," the researchers assert. In further tests, the 256-SM MCM-GPU "performs 26.8% better than an equally equipped discrete multi-GPU, and its performance is within 10% of that of a hypothetical monolithic GPU that cannot be built based on today's technology roadmap," the paper concludes. (A quick read on those numbers follows below.)

     Research-to-reality delays mean we shouldn't expect MCM GPU graphics cards for enthusiasts from Nvidia for a couple of hardware generations.

     View: Original Article
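     To put those percentages in perspective, here's a quick calculation using only the figures quoted above: doubling the SM count from 128 to 256 would ideally double performance, so the simulated MCM-GPU captures roughly three quarters of the ideal gain.

     ```python
     # Scaling arithmetic from the quoted results: a 256-SM MCM-GPU is
     # 45.5% faster than the largest buildable 128-SM monolithic GPU.
     speedup = 1.455            # 45.5% over the 128-SM monolithic baseline
     ideal = 256 / 128          # doubling SMs -> 2x in the ideal case
     print(f"{speedup / ideal:.1%} of ideal scaling")  # ~72.8%

     # The paper also puts the MCM-GPU within 10% of a hypothetical
     # (unbuildable) 256-SM monolithic chip, i.e. >= ~0.9x its performance.
     ```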
  18. The Windows Task Manager is probably one of the most helpful and most used tools in the entire operating system. Surely everyone can recall at least a couple of times when they were in a jam and called on good ol' Task Manager for help. Some use it to manage running apps, set permissions, or close them; others use it to get quick, useful data about their computer's parts and performance.

     Tracking computer performance might not be a priority for casual users, but power users often do it to stay informed about how their PC operates. Knowing more about a computer's performance levels lets a user make modifications and push the machine harder, because they know its limits and capabilities better. It also helps them keep the machine in check and head off bad situations: keeping up with performance levels in the Task Manager can, for example, reveal that the processor is far more stressed than it should be at a given time, so the user can investigate and fix a potential problem.

     GPU tracking is finally here

     One of the biggest complaints people have had about the Task Manager is that it didn't provide GPU tracking. With the other major components tracked, users were always wondering when they would be able to see how their GPUs are performing. Those benefits are finally coming with the new Windows 10 update: the change has been spotted in build 16226 of Windows 10, which falls under the Fall Creators Update.

     A lot of info

     From displaying nothing at all about GPU performance, Microsoft is pulling a quick 180: the Task Manager will now display a plethora of stats and useful information. There are many categories, and users can see everything from GPU performance to GPU memory usage. Users can even see stats for each individual GPU engine, which is pretty cool, especially for those using their GPUs for really intense workloads where every last drop of power counts.

     Changing to multi-engine

     The Task Manager's GPU view won't immediately display all information, as it comes preloaded in single-engine mode. Users can right-click and change the graph properties to show multiple engines instead. That's one of the things users will have to get used to once the update ships, but it will be worth the slight learning curve, especially considering all the new possibilities the GPU tracking feature opens up.

     Article source

     Windows Task Manager can now track GPU performance

     Every Windows user can recall at least one instance where they didn't know what to do and the Windows Task Manager saved the day. One thing that always bugged people about it, however, is that it didn't have any GPU performance tracking features.

     GPU tracking is finally coming

     That's no longer the case, as Microsoft has finally decided to implement such a feature. GPU performance tracking is part of the Fall Creators Update for Windows 10, and the first glimpses of it can be seen in the Insider build 16226 currently being tested on Microsoft's preview platform. The integration is seamless: the feature lives under the Performance section, where users have so far been able to check CPU performance information. Microsoft must have wanted to make the wait worthwhile, as it went the extra mile with the new GPU tracking capabilities. Users can not only track overall GPU performance but also separately track individual parts of the GPU. For those who don't know the name of their own GPU, the feature is extra helpful, since it shows that information alongside the driver the GPU is using – good information to have, if nothing else, "just in case". (For the curious, the sketch below samples the same counters outside Task Manager.)

     Work in progress

     As mentioned earlier, these features are currently being tested in the Windows 10 Insider Preview build, which means they are still under development. Things can be added or removed until the build officially ships, so there is still plenty of time for the developers to enhance them. The Preview status also means there may be occasional errors and bugs the team hasn't had a chance to fix yet; all of this is under review and on the crew's to-do list. Even arriving with quite a bit of delay, the GPU tracking function will be most welcome to Windows 10 users old and new. Being able to immediately read important information about the computer's GPU and how it's performing is useful enough to make people wonder why Microsoft didn't implement it earlier.

     Article source
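     Task Manager's GPU view is built on the WDDM "GPU Engine" performance counters that Windows exposes (on the Fall Creators Update and later). As a rough illustration – our sketch, not from the article – you can sample the same counters yourself from Python via the built-in typeperf tool:

     ```python
     # Sample the per-engine GPU utilization counters that Task Manager's
     # GPU view reads (Windows 10 1709+). Instance names vary by machine,
     # and parsing of the CSV output is left out of this sketch.
     import subprocess

     result = subprocess.run(
         ["typeperf", r"\GPU Engine(*)\Utilization Percentage", "-sc", "1"],
         capture_output=True, text=True,
     )
     print(result.stdout)  # one sample per engine instance (3D, Copy, Video, ...)
     ```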
  19. AMD Radeon RX Vega Release Date Confirmed For Early August: AMD To Ship Vega To AIB Partners

     The AMD Radeon RX Vega is the high-end, gaming-oriented graphics card everyone is waiting for, and the latest reports suggest that AMD will start shipping Radeon RX Vega chips to its board partners as early as this week so they can finalize their custom card designs. According to HWBattle, the custom AIB RX Vega cards would then ship between late July and early August, shortly after the reference cards.

     AMD CEO Lisa Su previously confirmed that the Vega GPUs will be revealed at SIGGRAPH and available to the masses by the end of July or early August. However, other sources claim that AMD Vega will not release until later this year, hence the confusion around the release window.

     The gaming-oriented AMD Radeon RX Vega will probably be based on the same Vega 64 and Vega 56 silicon, with these parts particularly optimized for games. According to Radeon graphics guru Raja Koduri, they will be faster than the latest Vega Frontier Edition card, as reported by Segment Next. There have been lots of leaks regarding AMD Vega GPUs, and while nothing has been confirmed yet, one thing that seems sure is that AMD's GPUs will be cheaper than Nvidia's. If that's the case, AMD doesn't need to be exactly on par with Nvidia's GPUs.

     AMD's first Vega-based graphics card, the Radeon Vega Frontier Edition, is scheduled to be available in late June. More details on the Radeon RX Vega should emerge from AIB partners before the official launch at SIGGRAPH 2017, which kicks off on July 30.

     Article source
  20. Just weeks after launching the GTX 1080 Ti, Nvidia has now released the TITAN Xp. The graphics card is essentially a beefed-up version of last year's TITAN X, and all signs point to the new card taking the crown as the fastest gaming GPU available.

     Based on the Pascal architecture, the TITAN Xp comes with 12GB of GDDR5X memory and 3,840 CUDA cores clocked at 1,582MHz. It also boasts a whopping 12 teraflops of processing power, a full teraflop more than the TITAN X. The key specifications from the card's store page:

     CUDA cores: 3,840
     Boost clock: 1,582MHz
     Memory: 12GB GDDR5X
     Compute: 12 teraflops

     As with all cards in Nvidia's TITAN lineup, the new addition isn't cheap, coming in at a cool $1,200. The TITAN Xp is available directly from Nvidia's online store, with a limit of two per customer.

     Source
  21. As everybody knows, the Surface series lacks a powerful GPU. I wonder if anybody has tried to attach an external GPU via, e.g., Thunderbolt, etc.