
AMD or Nvidia? Which one supports DirectX 12 better?


Sylence

Recommended Posts

 

Early DirectX 12 games show a distinct split between AMD, Nvidia performance

 

 


One of the major questions readers have raised over the past year is which company’s graphics cards would perform better in DirectX 12. It always takes a certain amount of time to answer questions like this, and DX12 is still in the early stages of deployment, with only a handful of titles currently available: Hitman, Rise of the Tomb Raider, Ashes of the Singularity, and Gears of War. Of these four, one of them (Gears of War) is DX12-only and available solely through the Windows Store; the other titles can run in DX11 or DX12 and support multiple operating systems.

Tweaktown recently put Hitman through its paces in both APIs. In 1080p DirectX 11, Nvidia wins top overall honors with the Titan X squeezing out the Fury X. Switch to DirectX 12, however, and AMD’s Fury X pulls ahead. The gap between the AMD and Nvidia cards continues to widen as the resolution rises; AMD wins 4K in both DX11 and DX12 and the gap in 4K DX12 is large enough that the R9 390X is able to surpass the GTX Titan X, as shown below:

 

 

Hitman’s 4K DX12 performance. Image by Tweaktown

 

 

These results are broadly similar to the benchmark results we saw in Ashes of the Singularity a few weeks before that game shipped. As in that title, Nvidia gains nothing in DirectX 12 and suffers some small performance regressions.

DirectX 12: A bifurcated story

Of the four DirectX 12 games currently in-market, Ashes and Hitman are wins for AMD and show a marked advantage in that API. Rise of the Tomb Raider, on the other hand, is a major Nvidia win. Benchmarks performed in that title show that AMD still lags Nvidia, even when testing in DX12 and even at 4K.

 

 

Rise of the Tomb Raider. Data by Overclock3D.net

 

 

We can’t really draw many conclusions from Gears of War; the game appears to have been a terrible port with unplayable performance on AMD hardware, and is less than stellar even on Nvidia. The developer has released several patches, but it’s not clear if the game’s been truly fixed yet. With Fable Legends now cancelled, our early performance tests in that title don’t tell us much, either. Still, three games is enough to point to at least the beginnings of a trend.

First, we see AMD picking up performance relative to Nvidia in two of the three titles here. Both Hitman and Ashes use asynchronous compute, but Hitman’s lead render programmer, Jonas Meyer, noted at GDC 2016 that doing so only improved AMD’s performance by 5-10%, while Nvidia gained nothing from the feature.
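For context, "asynchronous compute" simply means submitting compute work on a queue separate from the graphics queue so the GPU can overlap the two. The snippet below is a minimal, illustrative D3D12 sketch of that setup; it is not taken from Hitman or Ashes, and error handling, command lists and fence synchronization are omitted.

```cpp
// Illustrative D3D12 sketch of "asynchronous compute": a compute queue created
// alongside the normal graphics (direct) queue. Not taken from any shipping
// title; error handling and the actual workloads are omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // Create a device on the default adapter.
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics (direct) queue: accepts draw, compute and copy commands.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: work submitted here may overlap with rendering.
    // On GCN parts this maps onto the asynchronous compute engines; whether it
    // actually executes concurrently is up to the hardware and driver.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // A real engine would record command lists for both queues and synchronize
    // them with ID3D12Fence objects; that plumbing is omitted here.
    return 0;
}
```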

 

 

Slide from the Hitman presentation at GDC 2016.

 

 

One reason AMD GPUs do better in DX12 than their Nvidia counterparts is that the new API eliminates a great deal of driver overhead, and AMD’s drivers were never as adroit as Nvidia’s at handling these workloads in the first place. AMD’s 4K performance in DX12 is roughly 10% faster than in DX11, which jibes with Jonas Meyer’s comments at GDC 2016.
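To make the "driver overhead" point concrete, here is a minimal D3D12 sketch (again illustrative only, with error handling omitted): the application records its own command list and hands it to the queue, work a DX11 driver would otherwise be doing behind the scenes, with per-draw state tracking and validation, on the CPU's critical path.

```cpp
// Illustrative D3D12 sketch of why the API carries less driver overhead: the
// application itself records a command list and submits it; the driver no
// longer performs the heavy per-draw-call state tracking and validation a
// DX11 driver does. Error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // In a real engine, each worker thread owns an allocator and records its
    // own command list in parallel.
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));

    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));

    // ... resource barriers, SetPipelineState and Draw* calls get recorded
    // here, on whatever thread the application chooses ...
    cmdList->Close();

    // Submit the pre-built list; the driver's remaining job is mostly
    // translation to hardware commands.
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);
    return 0;
}
```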

What’s less clear is why Nvidia consistently loses performance in every DirectX 12 game published to date. The GTX 980 Ti is faster than the Fury X in Rise of the Tomb Raider when using DirectX 11 or DirectX 12, but it leads AMD by roughly 9% in DX11 and by just 2.4% in DX12. These performance drops aren’t large in and of themselves, but if moving to DirectX 12 makes AMD 8% faster and Nvidia 6% slower, you’ve got a net performance shift of 14% in favor of Team Red.

DirectX 12 appears to help AMD by both reducing driver overhead and allowing developers to leverage GCN’s formidable asynchronous compute capabilities. It’s less clear why Nvidia continues to struggle with delivering absolute performance improvements in DirectX 12, even in titles that otherwise favor the company’s products.
It’s still too early in the DirectX 12 / Windows 10 product cycle to draw absolute conclusions about which architecture will prove definitively better, and the imminent arrival of new GPUs from both companies will render the question at least somewhat moot. So far, it looks as though AMD gamers are generally better off using DirectX 12 when it’s available, while Nvidia owners may want to stick with DX11, even when gaming in Windows 10.

 

We’ll continue monitoring the situation as new titles arrive and will update you accordingly.

 

ExtremeTech

 

 


4 minutes ago, pc71520 said:

AMD or NVIDIA supports DirectX 12 better? 

:wub:AMD, too.

 

Me too, the price-to-performance ratio is awesome



at this point in time it's meaningless to compare performance as everything is pretty much in beta stage.

 

the same happened when dx10 came around ....and dx11

 

a couple years down the road they will have something worth looking at.



5 hours ago, VileTouch said:

at this point in time it's meaningless to compare performance as everything is pretty much in beta stage.

 

the same happened when dx10 came around ....and dx11

 

a couple years down the road they will have something worth looking at.

 

They are well aware of this fact; both of them had the same amount of time. And you're just calling it beta; if it were that experimental we wouldn't see big titles employing it ;) 

So far AMD is doing great, and will be even better when the 490X comes out in 2-3 months.

4 hours ago, info999 said:

nvidia & fuc* denuvo, ROTR is still not cracked ! :pos:

 

That's why they created Steam



20 minutes ago, saeed_dc said:

 

They are well aware of this fact; both of them had the same amount of time. And you're just calling it beta; if it were that experimental we wouldn't see big titles employing it ;) 

So far AMD is doing great, and will be even better when the 490X comes out in 2-3 months.

 

That's why they created Steam

well, i hope so. but game devs jump on the new bandwagon to gain experience with it...and release a new product while doing so. not necessarily because they are comfortable with it. (nowadays their motto is: "release first, fix it later")

other companies that wait for the waters to calm, might benefit from a more stable framework to work with.

me, IDGAF any more. i'll just wait for UE 4.9 or even 5.0 to be released before even thinking about the whole thing.

out of steam much? more like:




4 minutes ago, VileTouch said:

well, i hope so. but game devs jump on the new bandwagon to gain experience with it...and release a new product while doing so. not necessarily because they are comfortable with it. (nowadays their motto is: "release first, fix it later")

other companies that wait for the waters to calm, might benefit from a more stable framework to work with.

me, IDGAF any more. i'll just wait for UE 4.9 or even 5.0 to be released before even thinking about the whole thing.

out of steam much? more like:


 

 

Yeah I know but people also don't seem to be bothered with this so much.

 

the Steam part wasn't meant for you but anyway, UE engine isn't the best engine, actually it's not that good. big companies like Avalanche (Avalanche engine) or Rockstar (RAGE) built their own engines that's why their games are always on a higher level than others. 

 

 

 

 

 

 



6 minutes ago, saeed_dc said:

Yeah I know but people also don't seem to be bothered with this so much.

 

the Steam part wasn't meant for you but anyway,

I know... it's a different kind of steam

 

6 minutes ago, saeed_dc said:

UE engine isn't the best engine, actually it's not that good. big companies like Avalanche (Avalanche engine) or Rockstar (RAGE) built their own engines that's why their games are always on a higher level than others. 

I know. but, i'm not about to create yet another game engine just for one or two projects.

you want your game, showcase, whatever, you either do it in Unreal, Cryengine or Unity

it's not like it makes any difference for players anyway.



1 hour ago, VileTouch said:

I know. but, i'm not about to create yet another game engine just for one or two projects.

you want your game, showcase, whatever, you either do it in Unreal, Cryengine or Unity

it's not like it makes any difference for players anyway.

 

Well, of course it makes a difference for players. If R* had used Unity for its game engine instead of RAGE, GTA V would be unplayable. Just imagine how shitty the physics and everything else would look, full of bugs and poorly optimized too.



  • Administrator

The big problem is the async compute thing that makes the new DirectX faster. AMD worked on it for years. nVidia says it has implemented it, but the reality seems different here. nVidia says it's a driver-based problem which they are trying to fix. But there are high chances that it is not so. Maybe the nVidia 900 series cards do not support async compute on the hardware side.

 

Be sure of this though: nVidia dominates the market. If async compute is not working on its cards, not many games are going to implement it.



1 hour ago, DKT27 said:

The big problem is the async compute thing that makes the new DirectX faster. AMD worked on it for years. nVidia says it has implemented it, but the reality seems different here. nVidia says it's a driver-based problem which they are trying to fix. But there are high chances that it is not so. Maybe the nVidia 900 series cards do not support async compute on the hardware side.

 

Be sure of this though: nVidia dominates the market. If async compute is not working on its cards, not many games are going to implement it.

 

If history has taught us anything, it's that nothing can dominate the market for a long time. Everything's subject to change because it's a multipolar world. If you make your product not good enough to compete, you'll lose your market share, you already know that.



  • Administrator
5 hours ago, saeed_dc said:

If history has taught us anything, it's that nothing can dominate the market for a long time. Everything's subject to change because it's a multipolar world. If you make your product not good enough to compete, you'll lose your market share, you already know that.

 

That might be true. But currently AMD falls into that. What nVidia did with the 900 series was something out of the ordinary - delivering the performance and power usage from a 28nm GPU that you would expect from, let's say, 22nm or maybe even better. On the other side, the AMD 300 series is complete nonsense. Not only was it using GPUs made 3 years ago, these AMD 300 series cards also faced a lot of inventory issues. nVidia literally bashed AMD on performance and performance per watt. AMD, however, was futuristic: since it kind of developed the core part of the new DirectX and Vulkan, rather than doing optimizations on the level that nVidia did, it relied on the new-gen APIs and its knowledge of these new technologies to take the lead from nVidia. With these APIs obviously needing optimizations of their own, they were not sitting empty-handed though.

 

Though, the reason I say not many games will implement it is that nVidia is well known to have a lot of influence, both money and power, in the gaming industry. That is why I'm saying that.



5 hours ago, DKT27 said:

 

That might be true. But currently AMD falls into that. What nVidia did with the 900 series was something out of the ordinary - delivering the performance and power usage from a 28nm GPU that you would expect from, let's say, 22nm or maybe even better. On the other side, the AMD 300 series is complete nonsense. Not only was it using GPUs made 3 years ago, these AMD 300 series cards also faced a lot of inventory issues. nVidia literally bashed AMD on performance and performance per watt. AMD, however, was futuristic: since it kind of developed the core part of the new DirectX and Vulkan, rather than doing optimizations on the level that nVidia did, it relied on the new-gen APIs and its knowledge of these new technologies to take the lead from nVidia. With these APIs obviously needing optimizations of their own, they were not sitting empty-handed though.

 

Though, the reason I say not many games will implement it is that nVidia is well known to have a lot of influence, both money and power, in the gaming industry. That is why I'm saying that.

 

Most of Nvidia's 900 series cards have 2x 8-pin power connectors, whereas all of AMD's 300 series have a 6-pin plus an 8-pin power connector, so how can you say AMD is consuming more power than Nvidia? Have you seen benchmarks? AMD cards have more memory and more overclocking headroom than Nvidia's, and with all those features they still draw as little as 30 watts more from the system.

 

The GTX 950, 960, 970 and 980 are cards from at least 2 years ago; only the 980 Ti might still have a chance for 1 or 2 more years, and that's only because of its 6GB of VRAM. AMD's price-to-performance ratio is a lot better than Nvidia's.

 

I know you're considering market share, but there are more important factors when purchasing a graphics card; nobody likes to pay money for something that isn't worth it.

 



  • Administrator
On 10/4/2016 at 8:29 AM, saeed_dc said:

 

Most of Nvidia's 900 series cards have 2x 8-pin power connectors, whereas all of AMD's 300 series have a 6-pin plus an 8-pin power connector, so how can you say AMD is consuming more power than Nvidia? Have you seen benchmarks? AMD cards have more memory and more overclocking headroom than Nvidia's, and with all those features they still draw as little as 30 watts more from the system.

 

The GTX 950, 960, 970 and 980 are cards from at least 2 years ago; only the 980 Ti might still have a chance for 1 or 2 more years, and that's only because of its 6GB of VRAM. AMD's price-to-performance ratio is a lot better than Nvidia's.

 

I know you're considering market share, but there are more important factors when purchasing a graphics card; nobody likes to pay money for something that isn't worth it.

 

 

Let me give a list.

 

AMD 370: 1x6 TDP: 110W

AMD 380: 1x6 TDP: 190W

AMD 380X: 1x6, 1x8 TDP: 190W

AMD 390: 1x6, 1x8 TDP: 275W

AMD 390X: 1x6, 1x8 TDP: 275W

 

GTX 950: 1x6 TDP: 90W

GTX 960: 1x6 TDP: 120W

GTX 970: 2x6 TDP: 145W

GTX 980: 2x6 TDP: 165W

GTX 980 Ti: 1x6, 1x8 TDP: 250W.

 

All these are the reference designs. Source for pins, source for TDP: Wiki.

 

Now you may ask why some companies are asking for 2x 8-pin connectors. The reason is simple: there is a huge OC possibility for the GTX 900 cards; for example, you can OC a 980 to match a 980 Ti, I have heard. That is why these manufacturers have added more pins.

 

However, these pins do not really matter here. Real power usage does. For that you should refer to this. It's peak power usage. Notice how all the AMD graphics cards are using more power. I wish I could provide you a more detailed list, but this is the best I can find.
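As a back-of-the-envelope illustration of why connector count only sets a ceiling rather than describing real draw: the PCIe spec rates the x16 slot at 75W, each 6-pin connector at 75W and each 8-pin at 150W. The tiny helper below is hypothetical (not from any source in this thread) and just adds those ratings up against the reference TDPs listed above.

```cpp
// Hypothetical helper: maximum power a card is *rated* to draw from its slot
// and auxiliary connectors per the PCIe spec, compared against reference TDP.
#include <cstdio>

int max_rated_watts(int six_pin, int eight_pin)
{
    const int slot_w  = 75;   // PCIe x16 slot
    const int six_w   = 75;   // each 6-pin connector
    const int eight_w = 150;  // each 8-pin connector
    return slot_w + six_pin * six_w + eight_pin * eight_w;
}

int main()
{
    // R9 390/390X reference: 1x6 + 1x8 -> 300W ceiling vs. 275W TDP
    std::printf("R9 390X ceiling: %dW\n", max_rated_watts(1, 1));
    // GTX 980 reference: 2x6 -> 225W ceiling vs. 165W TDP
    std::printf("GTX 980 ceiling: %dW\n", max_rated_watts(2, 0));
    return 0;
}
```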

 

 

Now about the time these processors were made.

 

AMD 370: Based on / rebrand of Pitcairn Pro, which is found in Radeon HD 7850. GPU made 4 years ago. Has no native / full support for newer DirectX. Re-released a year ago.

AMD 380: Based on / rebrand of Tonga PRO, which is found in Radeon R9 285. GPU relatively newer, but made 2 years ago. Re-released a year ago.

AMD 380X: Based on / rebrand of Tonga XT, which is found in the Radeon R9 285X, which was never released to the public. GPU made at the same time as the 285. Re-released a few months ago.

AMD 390: Based on / rebrand of Hawaii Pro, which is found in Radeon R9 290. GPU made 3 years ago. Re-released a year ago.

AMD 390X: Based on / rebrand of Hawaii XT, which is found in the Radeon R9 290X. GPU made at the same time as the 290. Re-released a year ago.

 

All the above-mentioned AMD cards are rebrands / re-released series 200 or 7xxx cards. Only the Fiji-based cards are newer.

 

GTX 950: Based on GM206. Not a rebrand, GPU made and sold a year ago.

GTX 960: Based on the same GM206. Not a rebrand, GPU made and sold a year ago.

GTX 970: Based on GM204. Not a rebrand, GPU made and sold two years ago.

GTX 980: Based on same GM204. Not a rebrand, GPU made and sold two years ago.

GTX 980 Ti: Based on GM200. Not a rebrand, GPU made and sold a year ago.

 

As you can see, none of the cards are rebrands. All have their own same-gen processors. The reason I mention all this is that they are all based on the Maxwell architecture, which has a high number of optimizations in it. On the AMD side, some comparable optimizations are only to be found in the 380 and 380X, not the others.

 

RAM, as you say, might be a problem. How much, I cannot say. However, Gamers Nexus did a benchmark on the GTX 960 a year ago comparing 2GB vs 4GB. Worth reading, just to get an idea. One more benchmark. I must mention though, 4GB may be enough for the time being at 1080p.

 

 

As for price-performance: if you take previous-gen DirectX games, nVidia wins. If you take newer-gen DirectX games with async compute enabled, of which there are not many right now but more will come, AMD wins.

 

 

I do not consider market share much when advising someone to buy a graphics card. I did not when I bought mine. I looked at things from as many sides as possible. I'm not a fanboy of either company. In fact, I'd say AMD did a great thing by helping in the invention of HBM. The reason I say many games will not support async compute is that nVidia does not support it; it's as simple as that. If you say nVidia is holding back graphics technology by not supporting it, then I agree with you on that. But nVidia has the money, power and market share to make game developers do whatever nVidia wants them to do.



10 hours ago, DKT27 said:

 

Let me give a list.

 

AMD 370: 1x6 TDP: 110W

AMD 380: 1x6 TDP: 190W

AMD 380X: 1x6, 1x8 TDP: 190W

AMD 390: 1x6, 1x8 TDP: 275W

AMD 390X: 1x6, 1x8 TDP: 275W

 

GTX 950: 1x6 TDP: 90W

GTX 960: 1x6 TDP: 120W

GTX 970: 2x6 TDP: 145W

GTX 980: 2x6 TDP: 165W

GTX 980 Ti: 1x6, 1x8 TDP: 250W.

 

All these are the reference designs. Source for pins, source for TDP: Wiki.

 

Now you may ask why some companies are asking for 2x 8-pin connectors. The reason is simple: there is a huge OC possibility for the GTX 900 cards; for example, you can OC a 980 to match a 980 Ti, I have heard. That is why these manufacturers have added more pins.

 

However, these pins do not really matter here. Real power usage does. For that you should refer to this. It's peak power usage. Notice how all the AMD graphics cards are using more power. I wish I could provide you a more detailed list, but this is the best I can find.

 

 

Now about the time these processors were made.

 

AMD 370: Based on / rebrand of Pitcairn Pro, which is found in Radeon HD 7850. GPU made 4 years ago. Has no native / full support for newer DirectX. Re-released a year ago.

AMD 380: Based on / rebrand of Tonga PRO, which is found in Radeon R9 285. GPU relatively newer, but made 2 years ago. Re-released a year ago.

AMD 380X: Based on / rebrand of Tonga XT, which is found in the Radeon R9 285X, which was never released to the public. GPU made at the same time as the 285. Re-released a few months ago.

AMD 390: Based on / rebrand of Hawaii Pro, which is found in Radeon R9 290. GPU made 3 years ago. Re-released a year ago.

AMD 390X: Based on / rebrand of Hawaii XT, which is found in the Radeon R9 290X. GPU made at the same time as the 290. Re-released a year ago.

 

All the above-mentioned AMD cards are rebrands / re-released series 200 or 7xxx cards. Only the Fiji-based cards are newer.

 

GTX 950: Based on GM206. Not a rebrand, GPU made and sold a year ago.

GTX 960: Based on the same GM206. Not a rebrand, GPU made and sold a year ago.

GTX 970: Based on GM204. Not a rebrand, GPU made and sold two years ago.

GTX 980: Based on same GM204. Not a rebrand, GPU made and sold two years ago.

GTX 980 Ti: Based on GM200. Not a rebrand, GPU made and sold a year ago.

 

As you can see, none of the cards are rebrands. All have their own same-gen processors. The reason I mention all this is that they are all based on the Maxwell architecture, which has a high number of optimizations in it. On the AMD side, some comparable optimizations are only to be found in the 380 and 380X, not the others.

 

RAM, as you say, might be a problem. How much, I cannot say. However, Gamers Nexus did a benchmark on the GTX 960 a year ago comparing 2GB vs 4GB. Worth reading, just to get an idea. One more benchmark. I must mention though, 4GB may be enough for the time being at 1080p.

 

 

As for price-performance: if you take previous-gen DirectX games, nVidia wins. If you take newer-gen DirectX games with async compute enabled, of which there are not many right now but more will come, AMD wins.

 

 

I do not consider market share much when advising someone to buy a graphics card. I did not when I bought mine. I looked at things from as many sides as possible. I'm not a fanboy of either company. In fact, I'd say AMD did a great thing by helping in the invention of HBM. The reason I say many games will not support async compute is that nVidia does not support it; it's as simple as that. If you say nVidia is holding back graphics technology by not supporting it, then I agree with you on that. But nVidia has the money, power and market share to make game developers do whatever nVidia wants them to do.

 

+1 for the detailed information :) 

 

Okay, so here are a few things missing in your explanation:

 

1. These cards are being used in high-end desktop computers, so a few more watts doesn't harm anyone; desktop users don't run on battery.

 

2. It doesn't matter at all whether the graphics chip is a rebrand or old. Old compared to what, exactly? AMD made something that lasts a long time, and the engineers decided it is still good enough to use on newer cards. Are they missing any features? No. It's too amateur to compare them based on the date they were built, as long as they have the same features Nvidia's latest GPUs have, if not more.

 

3. VRAM is an issue. If you play most new games and enable anti-aliasing and multi-sampling, or simply use high texture settings, 4GB of VRAM will start throttling performance at 1080p, never mind 4K. I can even show you the VRAM usage on my own system.

 

4. A few weeks ago I posted a topic about AMD cooperating with game developers to employ its technologies in upcoming games instead of Nvidia's.

 

hopefully Radeon 490 and 490X will be more efficient

 

 



  • Administrator
15 hours ago, saeed_dc said:

 

+1 for the detailed information :) 

 

Okay, so here are a few things missing in your explanation:

 

1. These cards are being used in high-end desktop computers, so a few more watts doesn't harm anyone; desktop users don't run on battery.

 

2. It doesn't matter at all whether the graphics chip is a rebrand or old. Old compared to what, exactly? AMD made something that lasts a long time, and the engineers decided it is still good enough to use on newer cards. Are they missing any features? No. It's too amateur to compare them based on the date they were built, as long as they have the same features Nvidia's latest GPUs have, if not more.

 

3. VRAM is an issue. If you play most new games and enable anti-aliasing and multi-sampling, or simply use high texture settings, 4GB of VRAM will start throttling performance at 1080p, never mind 4K. I can even show you the VRAM usage on my own system.

 

4. A few weeks ago I posted a topic about AMD cooperating with game developers to employ its technologies in upcoming games instead of Nvidia's.

 

hopefully Radeon 490 and 490X will be more efficient

 

 

 

1. Not exactly. Let me explain. When I was buying an AMD card, after lots and lots of reading, I came to the conclusion that my 500W PSU, yes, my 500W PSU, would not be enough to run it. Well, it will run fine, but I came to know that these GPUs made 3 years ago do not handle things efficiently unless there is proper power available. Now, you may say that proper power is a requirement, that's for sure, but guess what, these AMD 300 series cards are going beyond their stated TDP. While TDP does not exactly mean actual power usage, you can imagine the problem when you put a graphics card which actually requires more power than stated on a PSU that is not sufficiently providing it. To add to that, going over the stated power usage also means more heat, which affects the cards. On the other hand, the GTX 900 series cards do not seem to have this problem. They are within their limits when you do not OC, are well designed, and do not get the heat problems the AMD 300 series does.

 

2. Thing is, I have been tracking the AMD 200 and 300 series quite closely. You know, as a buyer, everyone was excited when AMD released the 200 series, stating that while these cards are based on the 7000 series, the 7000 series is kinda new, and hey, they are far cheaper now. So everyone was happy. Then came the 300 series. Everywhere on the internet there were strong talks of how the AMD 300 series would beat nVidia with its new processors, with innovative techniques, new tech and such. Then came the problem: the 22nm or 18nm process, I'm not exactly sure which but something around there, had big problems in manufacturing, so much so that both nVidia and AMD were affected by it. Now both companies had two options: either go for the newer process in spite of the manufacturing problems, or stick with the now completely mature 28nm process. Both of them decided to stick with 28nm. Then nVidia said, tell you what, let us innovate and make full use of the 28nm technology. nVidia introduced new compression techniques, optimizations and a lot of such stuff; all this meant the newer cards, though based on the same 28nm tech, did not require much power, gave more performance while not requiring as many CUDA cores - shaders in AMD's language - and did not generate that much heat. nVidia did a tech masterpiece with their 900 series, with what they had. Then came AMD's chance. Everyone was looking at what AMD could do with the 300 series. Everyone felt AMD would have the upper hand here. News started coming out that AMD was going full speed with the 300 series and then the unconfirmed HBM-based cards. Then news started coming in that AMD was not interested in making new GPUs and wanted to concentrate on HBM instead. The news was confirmed when AMD released the 300 series. All of them were rebrands, and the common buyer felt cheated. I'm talking about $200 graphics card buyers. It was AMD's way of saying - screw you guys, we want to concentrate on HBM cards as we do not want to invest money in making our mainstream cards better; instead, we want to earn more by selling expensive ones. The AMD 300 series only offered minor optimizations and got a lot of backlash. AMD tried to convince people but no one was buying their stance. No one expected that AMD would rebrand their cards not once, but for the second consecutive time. It is a complete nonsense approach by AMD. They could have added delta compression to all of the 300 series instead of just the 380, but no, they did not want to do that as that would require one to actually work on the GPU.

 

It's not that nVidia users are not feeling cheated; nVidia had promised async compute on the 900 series, and the cards were marketed as such. It's just that AMD only wins in the newer DirectX because it had beforehand knowledge of it - AMD themselves kinda made it - and instead of optimizations in their GPU, AMD relied on supporting the features of the new DirectX, which meant more performance.

 

I just checked; the latest nVidia drivers do seem to support async compute. But as it is driver-based, not hardware-based, it does not make much sense I think.

 

3. Maybe. Again, I cannot confirm. The games I'm playing with AA and multisampling turned on do not use much memory. They are not necessarily AAA titles though. :P

 

4. I do not know much about graphics card history to be honest. I only know of recent history of theirs. But from what I can understand, AMD has always been more supportive of open technologies and it would be no surprise if it continues to do so.

 

Newer cards with newer tech with newer GPU optimizations will always be better than previous ones. Just that people expected more from AMD when it came to the 300 series I think.



On 4/12/2016 at 2:18 PM, DKT27 said:

 

1. Not exactly. Let me explain. When I was buying an AMD card, after lots and lots of reading, I came to the conclusion that my 500W PSU, yes, my 500W PSU, would not be enough to run it. Well, it will run fine, but I came to know that these GPUs made 3 years ago do not handle things efficiently unless there is proper power available. Now, you may say that proper power is a requirement, that's for sure, but guess what, these AMD 300 series cards are going beyond their stated TDP. While TDP does not exactly mean actual power usage, you can imagine the problem when you put a graphics card which actually requires more power than stated on a PSU that is not sufficiently providing it. To add to that, going over the stated power usage also means more heat, which affects the cards. On the other hand, the GTX 900 series cards do not seem to have this problem. They are within their limits when you do not OC, are well designed, and do not get the heat problems the AMD 300 series does.

 

2. Thing is, I have been tracking the AMD 200 and 300 series quite closely. You know, as a buyer, everyone was excited when AMD released the 200 series, stating that while these cards are based on the 7000 series, the 7000 series is kinda new, and hey, they are far cheaper now. So everyone was happy. Then came the 300 series. Everywhere on the internet there were strong talks of how the AMD 300 series would beat nVidia with its new processors, with innovative techniques, new tech and such. Then came the problem: the 22nm or 18nm process, I'm not exactly sure which but something around there, had big problems in manufacturing, so much so that both nVidia and AMD were affected by it. Now both companies had two options: either go for the newer process in spite of the manufacturing problems, or stick with the now completely mature 28nm process. Both of them decided to stick with 28nm. Then nVidia said, tell you what, let us innovate and make full use of the 28nm technology. nVidia introduced new compression techniques, optimizations and a lot of such stuff; all this meant the newer cards, though based on the same 28nm tech, did not require much power, gave more performance while not requiring as many CUDA cores - shaders in AMD's language - and did not generate that much heat. nVidia did a tech masterpiece with their 900 series, with what they had. Then came AMD's chance. Everyone was looking at what AMD could do with the 300 series. Everyone felt AMD would have the upper hand here. News started coming out that AMD was going full speed with the 300 series and then the unconfirmed HBM-based cards. Then news started coming in that AMD was not interested in making new GPUs and wanted to concentrate on HBM instead. The news was confirmed when AMD released the 300 series. All of them were rebrands, and the common buyer felt cheated. I'm talking about $200 graphics card buyers. It was AMD's way of saying - screw you guys, we want to concentrate on HBM cards as we do not want to invest money in making our mainstream cards better; instead, we want to earn more by selling expensive ones. The AMD 300 series only offered minor optimizations and got a lot of backlash. AMD tried to convince people but no one was buying their stance. No one expected that AMD would rebrand their cards not once, but for the second consecutive time. It is a complete nonsense approach by AMD. They could have added delta compression to all of the 300 series instead of just the 380, but no, they did not want to do that as that would require one to actually work on the GPU.

 

It's not that nVidia users are not feeling cheated; nVidia had promised async compute on the 900 series, and the cards were marketed as such. It's just that AMD only wins in the newer DirectX because it had beforehand knowledge of it - AMD themselves kinda made it - and instead of optimizations in their GPU, AMD relied on supporting the features of the new DirectX, which meant more performance.

 

I just checked; the latest nVidia drivers do seem to support async compute. But as it is driver-based, not hardware-based, it does not make much sense I think.

 

3. Maybe. Again, I cannot confirm. The games I'm playing with AA and multisampling turned on do not use much memory. They are not necessarily AAA titles though. :P

 

4. I do not know much about graphics card history to be honest. I only know of recent history of theirs. But from what I can understand, AMD has always been more supportive of open technologies and it would be no surprise if it continues to do so.

 

 

 

1. You're completely wrong, I don't know where you got this information. The power usage is exactly what is stated by the card's manufacturer (i.e. Asus, Gigabyte, Sapphire etc.). If you want to stick to the original clock you can buy the card directly from AMD themselves with their stock cooling design.

For example, the Asus R9 390 Strix runs at 275W when not overclocked, but when you do overclock it, it will let you feed more power to it so the card stays stable at higher clocks, using up to 430W. Not everybody overclocks their GPUs, so it's perfectly fine to buy AMD cards from any brand if you're not going to overclock. Overclockers usually go for cards that let you use more power, resulting in a wider overclocking range. There are cards like the Gigabyte R9 390 whose power usage never goes beyond 300W, but there are also cards like the MSI R9 390X using up to 530W. You see, it's different from brand to brand. Some of the forums you read are misleading; not everybody tells the right thing...

 

2. Everything you said is just talk, you know. You, me or anyone else can say these things, but do you or others have any reliable proof to show that AMD's 300 series is behind Nvidia's 900 series? The right way to present the facts is to show some proof from real-life experience too.

In almost all benchmarks the R9 390 beats the GTX 970 and the R9 390X beats the GTX 980; then comes the Fury X, which beats the 980 Ti and in some benchmarks is equal to the GTX Titan because of its 4096-bit HBM memory interface.

 

3. Assassin's Creed Syndicate, Just Cause 3, GTA V and Call of Duty: Black Ops 3 are some memory-hungry titles.

 

On 4/12/2016 at 2:18 PM, DKT27 said:

Newer cards with newer tech with newer GPU optimizations will always be better than previous ones. Just that people expected more from AMD when it came to the 300 series I think.

 

Do you have any benchmark screenshots or other proof that the Radeon 300 series doesn't perform as well as the Nvidia 900 series in games/programs, if not better?

Just saying AMD is using older technologies than Nvidia isn't quite enough. I could say that those at AMD considered everything and decided to manufacture these products because they think they are enough for today's needs and already beat Nvidia's.

 

 



  • 3 weeks later...
Cereberus

I'm no fanboy. I've had both ATI and Nvidia cards at different points in time, and I get whatever was good when it was available.

 

That said, AMD has done lots of good things to really drive things forward... but I'd still buy the product with the better performance and value.

 

And now that's not the only thing to consider. There are the FreeSync vs G-Sync wars... depending on which monitor you get, you may have to commit to one brand or the other. Basically you're locked into that camp due to the investment, which doesn't make it easy to switch between the two without spending much more (replacing monitors, cough...) >_>; ... sigh

 

Anyhow... got to wait for reviews to see which is the better card, Pascal or Polaris :/ However, Polaris will be coming later... not sure I can wait that long. But at least with the reviews, we can finally see whether Nvidia bothered to do something about async with Pascal. The 980 Ti hardly qualifies as a DX12 GPU when it doesn't even do async as well as ATI... People keep saying DX12 games are not here, so what's the big deal? Well, Ashes of the Singularity and Hitman are all here... >_>: So yes, it's important, especially for the newer titles.

 

Seems HBM2 might be skipped over for Pascal and pushed back to Volta, though. Not really sure it's worth waiting just for HBM2. When AMD was first to come out with HBM1, it still came down to FPS, and the 980 Ti still won that game; there's also the fact that HBM1 was stuck at 4GB, compared to Nvidia's cards which had more VRAM...

 

AMD did, however, do itself a favor by redoing its drivers with a more modern look. Also, Nvidia hasn't done itself any favors with its recent WHQL drivers; many questioned why they were even tagged as WHQL if they were so buggy...

 

Anyway, Pascal will get a showing in a few days, so we will learn more about it :)

 

 

 



Archived

This topic is now archived and is closed to further replies.
