Monday, January 8th 2024

AMD Announces the Radeon RX 7600 XT 16GB Graphics Card

AMD announced the new Radeon RX 7600 XT graphics card, bolstering its mid-range lineup of 1080p-class GPUs. The RX 7600 XT is designed for maxed-out AAA gaming at 1080p, although it is very much possible to play many titles at 1440p with fairly high settings. You can also take advantage of technologies such as FSR 3 frame generation in games that support it, AMD Fluid Motion Frames in nearly all DirectX 12 and DirectX 11 games, as well as the newly expanded AMD HYPR-RX performance enhancement, which engages a host of AMD innovations such as Radeon Super Resolution, Anti-Lag, and Radeon Boost to achieve a target frame rate.

The Radeon RX 7600 XT is based on the same 6 nm "Navi 33" silicon and the latest RDNA 3 graphics architecture as the Radeon RX 7600. If you recall, the RX 7600 already maxed out all 32 CU on the silicon. To create the RX 7600 XT, AMD retained "Navi 33" but doubled the memory size to 16 GB and increased the clock speeds. The 16 GB of memory is deployed across the same 128-bit wide memory bus as the 8 GB on the RX 7600. The memory speed is unchanged, too, at 18 Gbps (GDDR6-effective), as is the resulting memory bandwidth of 288 GB/s. There are two key changes: the GPU clock speeds and the power limit.
The Game Clock of the RX 7600 XT is set at 2.47 GHz, compared to 2.25 GHz on the RX 7600, and the maximum Boost Clock is set at 2.76 GHz, compared to 2.66 GHz on the RX 7600. To support these, and to improve boost clock residency, AMD increased the total board power (TBP) to 190 W, up from 165 W on the RX 7600. As a result, custom-design RX 7600 XT graphics cards will feature two 8-pin PCIe power connectors, or at least a combination of 6-pin and 8-pin, while the RX 7600 made do with just one 8-pin.
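
As a quick sanity check, the bandwidth and clock deltas quoted above work out as follows. This is a minimal sketch using only the spec-sheet numbers in this article, not measured data:

```python
# Back-of-the-envelope check of the RX 7600 XT figures quoted above.

def gddr6_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a GDDR6 interface."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr6_bandwidth_gb_s(128, 18))          # 288.0 GB/s, same as the RX 7600

# Relative changes versus the RX 7600
game_clock_uplift  = 2.47 / 2.25 - 1          # ~9.8%
boost_clock_uplift = 2.76 / 2.66 - 1          # ~3.8%
tbp_uplift         = 190 / 165 - 1            # ~15.2%
print(f"{game_clock_uplift:.1%} game clock, {boost_clock_uplift:.1%} boost clock, {tbp_uplift:.1%} TBP")
```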

Another small change with the RX 7600 XT is that board partners will be mandated to wire out DisplayPort 2.1 on their custom boards (to use the required clock drivers and other ancillaries); they cannot opt to have DisplayPort 1.4 to save costs.

The 6 nm "Navi 33" silicon physically features 32 RDNA 3 compute units (CU), adding up to 2,048 stream processors, 64 AI accelerators, 32 Ray accelerators, 128 TMUs, and 64 ROPs. A 32 MB Infinity Cache memory cushions the 128-bit GDDR6 memory interface, which on the RX 7600 XT drives 16 GB, running at 18 Gbps.
Thanks to the increased engine clocks, the RX 7600 XT is shown posting a proportionate increase in performance across popular titles at 1080p with maxed-out settings, including ray tracing. The RX 7600 XT is shown posting a near doubling in performance over the GeForce RTX 2060 6 GB. The RX 7600 XT is also shown offering playable frame rates at 1440p with max settings (albeit without ray tracing). AMD is making the case for 16 GB with creator and generative AI applications, where the large video memory should come in very handy.

AMD Radeon RX 7600 XT will be available on January 24, 2024. It is exclusively a partner-driven launch; there will be no reference design in the retail market. AMD set $329 as the baseline price for the RX 7600 XT, a $60 premium over the RX 7600.

51 Comments on AMD Announces the Radeon RX 7600 XT 16GB Graphics Card

#26
kapone32
Based on the specs, this card should be a nice upgrade for people who want more VRAM. The fact that it has DisplayPort 2.1 is nice too, but they have to be priced properly. I fear that these will still be over $400 Canadian though. It could be a nice upgrade for my daughter's PC to give her double the VRAM. It would be nice to see the 6700 XT get a price drop though, as they are $439 currently.
#27
sLowEnd
kapone32: Based on the specs this card should be a nice upgrade for people who want more VRAM. The fact that it is 2.1 is nice too but they have to be priced properly. I fear that these will still be over $400 Canadian though. It could be a nice upgrade for my Daughter's PC to give her double the VRAM. It would be nice to see 6700XT get a price drop though as they are $439 currently.
They definitely will be over $400 CAD. The MSRP is $30 higher than the RTX 4060's, and you can see right now 4060s are hovering around $390-$400ish.
Also, exchange rates are $329 USD = $440 CAD at this time.
#28
kapone32
sLowEnd: They definitely will be over $400 CAD. The MSRP is $30 higher than the RTX 4060's, and you can see right now 4060s are hovering around $390-$400ish.
Also, exchange rates are $329 USD = $440 CAD at this time.
Yes, you got it. As I said, the cheapest 6700 XT is $439. It will be up to the distributors, unfortunately. I can see them using the 16 GB VRAM as an excuse to charge a premium. So maybe the 6700 XT falls to 409 or 419 to allow for that straight conversion for the 7600 XT. The thing is, the 4060 prices you are referencing are for the 8 GB variants. I see the 16 GB versions up to $200 more on Newegg.
#29
sLowEnd
kapone32: Yes you got it. As I said the cheapest 6700XT is $439. It will be up to the distributors unfortunately. I can see them using the 16GB VRAM as an excuse to charge a premium. So maybe the 6700XT falls to 409 or 419 to allow for that straight conversion for the 7600XT. The thing is the 4060 price you are referencing are for the 8 GB variants. I see the 16GB versions up to $200 more on Newegg.
There's no 16GB 4060, only 4060 Ti's
#30
Vya Domus
I don't know why people are annoyed by this release; according to their charts this will be faster than a 4060, some of the time considerably so. Seems reasonably priced, I don't see the issue.

Also, I love that they're using figures with frame interpolation on. Nvidia paved the way to this horrid practice; at least now that they both do it, consumers are equally misled.
#31
Beginner Macro Device
Vya Domus: according to their charts this will be faster than a 4060, some of the time considerably so.
In reality, the 7600 is 5% behind the 4060 in pure raster. The 7600 XT has the same N33 GPU with the same 32 CUs. Overclock it all you want, it won't be "considerably" faster than the 4060.
The additional 8 GB of VRAM is cool, but we have yet to see more than two games where this matters at this level of raw performance.

On top of that, 4060 is:

• Cheaper than $330;
• Capable of DLSS;
• Capable of CUDA workloads;
• Much more energy efficient;
• Usually much more compact.

I don't see a reason for this 7600 XT to be interesting for anyone who is not in the market for the cheapest 16 GB GPU for their professional workloads. At $330, you can get a 6700 XT, or even a 6750 XT which are much faster in gaming.
#32
Vya Domus
Beginner Micro Device: • Cheaper than $330;
Yeah, by $30, great. It's also faster, like I said.
Beginner Micro Device: • Capable of DLSS;
They both have upscaling and frame interpolation; Nvidia doesn't get to plaster everything with "get a zillion more fps with DLSS on" anymore, they can both do it.
Beginner Micro Device: • Capable of CUDA workloads;
• Much more energy efficient;
• Usually much more compact.
Pretty irrelevant for most consumers.
#33
Beginner Macro Device
Vya Domus: It's also faster, like I said.
7600 XT cannot be faster than 4060. They are on par at best for the former.
Vya Domus: They both have upscaling and frame interpolation
FSR is objectively worse; XeSS works much slower on AMD GPUs than it does on nVidia ones; and having one less option deserves a second thought.
Vya Domus: Pretty irrelevant for most consumers.
Yet still rendering the 4060 a better value SKU.

The 7600 XT is a nonsensical, gluttonously overpriced and unbalanced release. It's even worse than the 16 GB version of the 4060 Ti, because nVidia had zero competition at the $500 price point when that was released. Now, the 4060 is competition to consider, yet AMD are behaving like the 4060 doesn't exist.
#34
Vya Domus
Beginner Micro Device: Yet still rendering the 4060 a better value SKU.
To an insignificant portion of consumers.
Beginner Micro Device: 7600 XT cannot be faster than 4060.
What do you mean "cannot be" lol, like it's forbidden or what ? It almost certainly will be faster, it has a higher TDP, 7000 series are all limited by power, more W more performance with the same core count.
#35
80-watt Hamster
Ignoring the naming, since that's basically arbitrary, it's been interesting seeing the conversation around this part.

Every 8GB card that gets released, including the 7600, gets dumped on for not having enough VRAM. Then a 16GB version comes out and gets dumped on for having too much VRAM. I get that 16GB is more framebuffer than the chip can make full use of, but there's no in-between available. AMD can't just make a 12GB card without reconfiguring the memory bus. So they're stuck with "extra" memory chips on the BOM, and they're not about to just not calc those in because they're largely superfluous.
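
For what it's worth, the "no in-between" point follows from GDDR6 packaging: with the common 2 GB (16 Gbit) chips, each 32-bit channel takes one chip, or two in clamshell mode. A rough illustrative sketch of that constraint (our arithmetic, not anything AMD has published):

```python
# Why a 128-bit card jumps straight from 8 GB to 16 GB: each 32-bit GDDR6
# channel carries one 2 GB (16 Gbit) chip, or two in clamshell mode.
# Illustrative arithmetic only.

CHIP_GB = 2  # common 16 Gbit GDDR6 density

def capacity_options_gb(bus_width_bits: int) -> tuple[int, int]:
    channels = bus_width_bits // 32
    return channels * CHIP_GB, channels * CHIP_GB * 2  # (normal, clamshell)

print(capacity_options_gb(128))  # (8, 16)  -> no 12 GB option on this bus
print(capacity_options_gb(192))  # (12, 24) -> 12 GB would need a 192-bit bus
```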

Pricing will sort itself out. It generally does. Eventually.
#36
Beginner Macro Device
Vya Domus: What do you mean "cannot be" lol, like it's forbidden or what
Vya Domus: 7000 series are all limited by power, more W more performance with the same core count.
The OG 7600 has a 165 W power limit. That's 100%.
The 7600 XT has a 190 W power limit. That's 115%.

Performance uplift from cranking the power limit up has never been linear; you can safely divide it by two to get the best-case scenario, thus about 7.5 percent more speed, especially considering RDNA3 GPUs are already way beyond their sweet spot.

97% + 7.5% of 97% = 104.3% of 4060 performance. Best case. The realistic case is 99 to 101% depending on the exact make.
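
Written out as code, that estimate looks like this; the "half the power delta becomes performance" factor and the 97% baseline are the assumptions stated above, not measured results:

```python
# The napkin math above, spelled out. Both inputs are the poster's
# assumptions: ~97% of RTX 4060 raster for the RX 7600, and half of any
# power-limit increase turning into performance in the best case.

power_delta = 190 / 165 - 1            # ~15.2% more board power
best_case_gain = power_delta / 2       # ~7.6%

rx7600_vs_4060 = 0.97
estimate = rx7600_vs_4060 * (1 + best_case_gain)
print(f"{estimate:.1%} of RTX 4060 performance, best case")  # ~104.3%
```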

We also don't know for sure whether this is an RTX 3090-like case. If it is, that means double the VRAM chips, thus double the VRAM power consumption, and thus even less chance for the 7600 XT to be fast enough.
Vya Domus: To an insignificant portion of consumers.
Don't underestimate the sheer amount of guys with Aerocool VX / Perdoon / KCAS / Power Man PSUs and those with suboptimal PC cases.
80-watt Hamster: Then a 16GB version comes out and gets dumped on for having too much VRAM.
I personally dump on this GPU for being last-gen levels of slow. 16 GB or 8 GB, this GPU will be a bad purchase at 330 USD. AMD could easily cut the 7700 XT down to 48 CUs, cut the clocks a little bit and call it a day. They were one and a half generations behind in 2018. Now they are THREE gens behind (lackluster RT + worse upscaler + worse power efficiency; the difference is bigger than 6 years ago) and they are acting like nVidia's products released after 2018 don't exist. If our retail prices made any sense I'd go for a 40-series GPU (prices are currently well above MSRP + VAT for both AMD and nVidia GPUs, to a similar degree).

nVidia, by vastly overpricing, offered a hangar-sized room for AMD to slot their GPUs into. AMD made GPUs slightly faster in raster and much worse in everything else, and are pretending that selling them for the same price is completely fine.
#37
DaemonForce
What's that? 16GB on a 128-bit bus and the exact same core clock with a higher boost? Sounds like I can expect marginally higher performance that may as well be called margin of error. The 4060 Ti had an eerily similar moment with a $100 price difference between the two SKUs. This is probably the new GPU face of "weird flex but okay" and seeing how AMD narrowed the price gap into awkward price territory without sunsetting the previous card, maybe that is okay. It's an equally stupid design decision, sure. It's also possible that AMD saw the vitriol around the 4060 Ti and decided to be a bro and take a bit of the heat off. It could also be the quietest screaming match we've ever seen. Who knows?

7600 Launch: $269
7600 XT Launch: $329
4060 Ti Launch: $399
4060 Ti 16GB Launch: $499

The one thing I don't understand is who these cards are for. They're the modern day entry to 1080p60 gaming but the last time I had anything on a 64 or 128 bit bus fit for gaming, we were still in the DirectX9 era, where cards like this tend to suffer the most. Perhaps it's a compromise for those that enjoy gaming and AI training.
#38
theouto
80-watt Hamster: Ignoring the naming, since that's basically arbitrary, it's been interesting seeing the conversation around this part.

Every 8GB card that gets released, including the 7600, gets dumped on for not having enough VRAM. Then a 16GB version comes out and gets dumped on for having too much VRAM. I get that 16GB is more framebuffer than the chip can make full use of, but there's no in-between available. AMD can't just make a 12GB card without reconfiguring the memory bus. So they're stuck with "extra" memory chips on the BOM, and they're not about to just not calc those in because they're largely superfluous.

Pricing will sort itself out. It generally does. Eventually.
I guess the dumping comes because that memory jump comes at a big price gap without a noteworthy performance gap. The 4060 Ti 8 GB to 16 GB was 100 coins and did not improve its performance, except maybe in the odd VRAM-limited game. Now the 7600 XT is 60 coins more expensive than the 7600, yet only promises more board power, more VRAM and a factory overclock. Of course, that is disregarding the bus staying the same, but overall the price jumps come with a big question mark, at least for me.
#39
efikkan
btk2k2: Longevity. Same reason the RX 480 8GB was a better buy than the RX 480 4GB. It allows you to run games with higher texture settings for longer at your desired resolution and generally textures are one of the most important settings to keep high because of how much they impact IQ. If I have a choice between turning down compute heavy settings like fancy fog or turning down textures then the fog goes every single time.
The argument about having loads of extra VRAM to "future-proof" a GPU is a pointless endeavor, and yet people fall for it again and again.
By the time games gain significantly higher details, the computational load and bandwidth requirements will grow even more, slowing the frame rate to a crawl long before you get to see games make sensible use of this extra memory. RX 7600 (XT) isn't particularly powerful by today's standards, and it will certainly not be 3-5 years down the road. Slapping 16 GB of VRAM on this card isn't going to extend its useful life as a gaming card.
#40
Super XP
efikkan: The argument about having loads of extra VRAM to "future-proof" a GPU is a pointless endeavor, and yet people fall for it again and again.
By the time games gain significantly higher details, the computational load and bandwidth requirements will grow even more, slowing the frame rate to a crawl long before you get to see games make sensible use of this extra memory. RX 7600 (XT) isn't particularly powerful by today's standards, and it will certainly not be 3-5 years down the road. Slapping 16 GB of VRAM on this card isn't going to extend its useful life as a gaming card.
Agreed.
The Term "Future Proof" should never be considered when building a Gaming PC. Its a gimmick, pointless endeavor as you stated.
What I prefer is strategic Gaming PC custom building. Acquire a feature rich motherboard with (1) CPU upgrade pathway & (1) Ram upgrade pathway (16GB to 32GB? or 32GB to 64GB). GPUs can always be swapped in and out, M.2 + SSD drives can always be swapped in and out for newer versions.

RDNA3 looks great, BUT I am not sure what AMD is thinking here? This GPU makes no sense with 16GB. Would have been better off with 8GB or 12GB and drop the price tag. What also boggles the mind is how the RX 7700XT and 7800XT are SO CLOSE to each other. Is it because AMD is having issues squeezing higher performance with RDNA3? I probably don't need to upgrade my RX 6700XT as it pounds any AAA game at 1440p with max PQ & very playable FPS. But the 7700XT is overpriced because its so close in performance with the 7800XT and both are considered 1440p GPUs. :confused: lol AMD pulling an Nvidia again?
#41
Beginner Macro Device
Super XP: Is it because AMD is having issues squeezing higher performance with RDNA3?
No. It's because what was supposed to be the 7800 XT was mostly shipped to China as the 7900 GRE, with about a 150-dollar premium on top of what it's really worth, and what should have been the 7700 XT was called the 7800 XT. Then we have a nonsensical cut-down GPU with scuffed VRAM bandwidth for 430-plus USD. AMD are behaving like nVidia GPUs newer and more advanced than the 2080 Ti do not exist.
#42
btk2k2
efikkan: The argument about having loads of extra VRAM to "future-proof" a GPU is a pointless endeavor, and yet people fall for it again and again.
By the time games gain significantly higher details, the computational load and bandwidth requirements will grow even more, slowing the frame rate to a crawl long before you get to see games make sensible use of this extra memory. RX 7600 (XT) isn't particularly powerful by today's standards, and it will certainly not be 3-5 years down the road. Slapping 16 GB of VRAM on this card isn't going to extend its useful life as a gaming card.
Games are now exceeding 8 GB VRAM buffers, so it does not require significantly higher detail; just a few % more on average is going to make a difference.

Remember the question is not 'can a 7600 class card use 16GB of VRAM' the question is actually 'can a 7600 class card use more than 8GB of VRAM'.

Also the bandwidth and compute requirements only increase if you stick to the same pre-set of high/ultra. If you allow yourself to use lower pre-sets as games get more demanding then you won't exceed your compute. The realistic minimum for a game is going to be PS5 / Series X so if your GPU has more compute than those consoles you should always be able to tune it to hit playable settings until games no longer target PS5 / Series X which is a good 5/6 years away yet.

This is what made the RX 480 8GB / RX 580 so good: the GPU performance and VRAM were enough that it could offer a much better than console experience until pretty recently. It has only been since we started seeing fewer cross-gen games on PS5 / Series X that they have fallen behind. The 7600 XT is going to be similar, although for a true sub-$300 GPU to really hit the sweet spot we will probably be waiting for the next gen.
#43
efikkan
btk2k2: Games now are exceeding 8GB VRAM buffers so it does not require a significant higher detail, just a few % more on average is going to make a difference.
As we always have to explain: allocated memory isn't the same as used VRAM. It's a useless metric for determining how much VRAM you actually need.
VRAM is heavily compressed on the fly, and changes constantly with dynamic tiled rendering. The way to check whether you need more is to see if the performance plummets. If it keeps scaling (e.g. with an OC), then VRAM size isn't the issue.
btk2k2: Remember the question is not 'can a 7600 class card use 16GB of VRAM' the question is actually 'can a 7600 class card use more than 8GB of VRAM'.

Also the bandwidth and compute requirements only increase if you stick to the same pre-set of high/ultra.
Just look at the RTX 4060 Ti 16 GB, which has shown us how pointless slapping extra VRAM on a card really is. Its bigger brothers (4070 and up) manage to scale well with "just" 12 GB, and there isn't a time coming when the extra VRAM is going to make the 4060 Ti 16 GB able to outperform cards from a higher tier. So exactly when is the investment in VRAM going to pay off? :rolleyes:

If you're going to make actual use of VRAM during a frame, then you're always going to need bandwidth and computational power to go along with it. It doesn't matter if it's a current game or a game coming 5 years from now; this basic fact isn't going to change.
The 7600 XT 16 GB, with its 288 GB/s of bandwidth, could theoretically access at most 4.8 GB during a single frame at 60 FPS, or 2.4 GB at 120 FPS, assuming 100% utilization of the memory bus (which is extremely unrealistic). So in reality, the only way to make use of such a large VRAM is to have a lot allocated that isn't actually needed in the immediate future. By the time you need 16 GB to play games at desired settings on this card, it's going to perform far below 60 FPS. Keep in mind, this card isn't a great performer even by today's standards.
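
The per-frame figures quoted there follow directly from the bandwidth; a minimal sketch of that upper bound, using the same unrealistic 100% bus-utilization assumption:

```python
# Upper bound on data the GPU could touch in one frame, assuming the full
# 288 GB/s bus is saturated for the entire frame time (it never is).

BANDWIDTH_GB_S = 288

def max_gb_per_frame(fps: float) -> float:
    return BANDWIDTH_GB_S / fps

print(max_gb_per_frame(60))   # 4.8 GB
print(max_gb_per_frame(120))  # 2.4 GB
```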

The "future-proofing" with VRAM argument always comes down to arguing for a theoretical utopia. Is the extra VRAM giving you extra performance? No. Will the extra VRAM mean the card will perform well for 2 years longer than without it? No.
Super XP: RDNA3 looks great, BUT I am not sure what AMD is thinking here? This GPU makes no sense with 16GB. Would have been better off with 8GB or 12GB and drop the price tag. What also boggles the mind is how the RX 7700XT and 7800XT are SO CLOSE to each other. Is it because AMD is having issues squeezing higher performance with RDNA3? I probably don't need to upgrade my RX 6700XT as it pounds any AAA game at 1440p with max PQ & very playable FPS. But the 7700XT is overpriced because its so close in performance with the 7800XT and both are considered 1440p GPUs. :confused: lol AMD pulling an Nvidia again?
It's probably a combination of surplus GPUs of certain bins, and the need for some media attention. We know they have released "pointless" products in the past when they need something to show while the next generation gets ready.
7600 XT would certainly be more interesting if it was a more cut down Navi 32 (like 160-bit 10 GB and ~3072 cores), but the reality is they probably don't have a surplus of bins matching that.
#44
btk2k2
efikkan: As we always have to explain: allocated memory isn't the same as used VRAM. It's a useless metric to determine how much VRAM you actually need.
VRAM is heavily compressed on the fly, and changes constantly with dynamic tiled rendering. The way to check whether you need more is to see if the performance plummets. If it keeps scaling (e.g. with an OC), then VRAM size isn't the issue.


Just look at the RTX 4060 Ti 16 GB, which has shown us how pointless slapping extra VRAM on a card really is. Its bigger brothers (4070 and up), manage to scale well with "just" 12 GB, and there isn't a time coming when the extra VRAM is going to make the 4060 Ti 16 GB able to outperform cards from a higher tier. So exactly when is the investment in VRAM going to pay off? :rolleyes:

If you're going to make actual use of VRAM during a frame, then you'll always going to need bandwidth and computational power to go along with it. It doesn't matter if it's a current game, or a game coming 5 years from now, this basic fact isn't going to change.
7600 XT 16 GB wit its 288 GB/s bandwidth, means it could theoretically access at most 4.8 GB during a single frame at 60 FPS, or 2.4 GB at 120 FPS, assuming 100% utilization of the memory bus (which is extremely unrealistic). So in reality, the only way to make use of such a large VRAM is to have a lot allocated that isn't actually needed in the immediate future. By the time you need 16 GB to play games at desired settings on this card, it's going to perform far below 60 FPS. Keep in mind, this card isn't a great performer even by today's standards.

The "future-proofing" with VRAM argument always comes down to arguing for a theoretical utopia. Is the extra VRAM giving you extra performance? No. Will the extra VRAM mean the card will perform well for 2 years longer than without it? No.


It's probably a combination of surplus GPUs of certain bins, and the need for some media attention. We know they have released "pointless" products in the past when they need something to show while the next generation gets ready.
7600 XT would certainly be more interesting if it was a more cut down Navi 32 (like 160-bit 10 GB and ~3072 cores), but the reality is they probably don't have a surplus of bins matching that.
Let's give it time... Oh wait, we don't need to, because it is already starting to happen.



So at 1440p the 4060 Ti 16GB can use RT if you can accept 30 FPS. The 8GB can't, and neither can the 3070 Ti, which is usually a much faster GPU.



Here the 4060 Ti 16GB can offer path tracing at 1080p at just over 30 fps, which the objectively faster 3070 Ti can't manage, and neither can the 4060 Ti 8GB.

Now in these 2 cases I don't think 16 GB will help the 7600 XT much, because AMD is that much worse in RT. It would surely have helped the 3070 and 3070 Ti though. I mean, why is the 3060 12GB ahead of the 3070 in this test? Sure, both are 'unplayable', but the 3060 12GB should be more unplayable because it has less compute and less bandwidth.

Or here at 1080p in Ratchet and Clank.



4060Ti 16GB ahead of the 3070Ti and offering a much better experience than the 4060Ti 8GB. In this title I would not at all be surprised if the 7600 XT managed to comfortably exceed 60fps which the current 7600 just can't do.

Or here in RE4 at 1440p with RT.



The 3060 12GB actually offers a playable, console-like experience, which for $330 is quite okay; the 3070 crashes, whereas if it had 16 GB of VRAM it would be a 60+ FPS experience. I expect the 7600 would also crash if it was tested here, but the 7600 XT would probably be in and around 60 FPS in this example.

One thing these bar charts don't tell you is how bad texture swapping is on 8GB cards. Sure, the FPS bar might look fine, but does the IQ hold up while playing, or are you looking at an awful lot of low-resolution textures while the proper ones load in after a quick camera pan because the card can't keep up?

So sure, you can take your numbers and theorycrafting about why 16GB won't matter, and I will just look at the observable evidence we have. To point it out again, a title does not need to use 16GB of VRAM for the XT to show an advantage; a title just needs to use 9GB of VRAM, maybe even 8.5GB, and it will show, either in smoother frame rates or with less texture pop and better IQ.

EDIT:



Purely academic of course, because only the 4090 is playable, but despite the massive compute shortcomings of the 4060 Ti 16 GB it is faster than the 4070 and 4070 Ti. I wasn't going to show this chart for the very reason that nothing is remotely playable, but it does show the stark difference between being compute bound and VRAM bound. With a VRAM bind you just hit a wall, and usually the only setting you can change to improve it is textures, or in this case turning Path Tracing / RT off. With compute binds there are more settings you can turn down to get you where you need to be.
#45
Beginner Macro Device
btk2k2: it does show the stark difference between being compute bound and VRAM bound. With a VRAM bind you just hit a wall and usually the only setting you can change
Let me get this straight. Double the VRAM helps in a single-digit number of games in limited scenarios, and we're talking very high or maximum quality settings, usually way below PC gamers' zone of comfort, namely 60 FPS upwards.

Additional calculating power, however, helps in 100% of games in 100% of scenarios. In VRAM-bound scenarios, of course, it helps way less, but once again, it's fewer than 10 games out here exhibiting such behaviour. I agree with the XT needing >8 GB, but making it 6700-alike (36 CUs instead of 32 and 10 GB on a 160-bit bus instead of 8 GB on 128-bit), of course with at least 10% higher frequencies than in the case of the 6700 non-XT, would both solve the 8 GB issue and the low general speed issue. At 32 CUs, 16 GB over a very slender 128-bit bus... it's an exceptionally niche product to say the very least.
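
For reference, here is what that hypothetical 6700-style configuration would look like on paper next to the actual card; the 36 CU / 160-bit / 10 GB numbers are the suggestion above, not a real SKU:

```python
# Paper comparison of the actual RX 7600 XT with the hypothetical
# 36 CU / 160-bit / 10 GB configuration suggested above. Illustrative only.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float = 18) -> float:
    return bus_width_bits / 8 * data_rate_gbps

actual_rx7600xt = {"cus": 32, "bus_bits": 128, "vram_gb": 16, "bw_gb_s": bandwidth_gb_s(128)}  # 288 GB/s
hypothetical    = {"cus": 36, "bus_bits": 160, "vram_gb": 10, "bw_gb_s": bandwidth_gb_s(160)}  # 360 GB/s
print(actual_rx7600xt)
print(hypothetical)
```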
#46
btk2k2
Beginner Micro Device: Let me get it straight. Double the VRAM helps with a single figure number of games in limited scenarios and we're talking very high or maximum quality settings, usually way below PC gamers' zone of comfort, namely 60 FPS upwards.

Additional calculating power, however, helps in 100% games in 100% scenarios. In VRAM bound scenarios, of course, it helps way less but once again, it's <10 games out here exhibiting such behaviour. I agree with XT needing >8 GB but making it 6700-alike (36 CUs instead of 32 and 10 GB on 160-bit bus instead of 8/128), of course with at least 10% higher frequencies than in the case of 6700 non-XT, would both solve the 8 GB issue and the low general speed issue. At 32 CUs, 16 GB over a very slender 128-bit bus... it's an exceptionally niche product to say the very least.
Those 4 games are nowhere near an exhaustive list...

With fewer and fewer games catering to the PS4 and more and more using RT, if your intention is to buy a GPU at a good price and ride it out until the next generation of consoles, then the 7600 XT is the first GPU at a reasonable MSRP that has enough VRAM and enough compute power to make that possible. The 6700 XT is a good alternative option right now due to it being on sale, but that depends on region. The 8GB variant won't be able to manage that.

Yes, it would be better if the 7600 XT was a further cut-down N32 rather than an OC'd N33 with double the RAM, but a cut-down N32 variant would very likely cost closer to $400, so it would be less appealing on that front.
#47
Beginner Macro Device
btk2k2: using RT
btk2k2: The 6700XT is a good alternative
Just no. Double, no, quadruple no. RT and AMD GPUs below the 7900 XT don't mix. At the 10+ GB VRAM consumption mark, at least. I am an RX 6700 XT owner and I know what I'm talking about: this GPU, being VRAM maxed out on RT, gets me way below 20, let alone 30 FPS. This is not playable and is not a valid point.

Speaking of RT titles, Avatar: Frontiers of Pandora is well below 60 FPS on the RX 6700 XT, and yes, we're talking 2000late 1080p resolution. At 4K, its VRAM buffer is still nowhere near being maxed out, but this GPU only gets a tad above a dozen FPS. Things are even worse for an RX 7600, with it being under 40 FPS at 1080p and 13 FPS at 4K. Cyberpunk 2077 is also unplayable on both GPUs if we enable RT to the point it eats at least half their VRAM. Control? Unplayable even at 1080p. Software ray traced titles like Resident Evil 4 Remake? Yes, there is an obvious deficit if we're talking 8 GB VRAM GPUs, yet RT is a total gimmick in this particular game.

Even the 4060 isn't bought for RT, despite handling it way better than BOTH aforementioned GPUs. At below 500-600 USD, gamers don't expect RT to be handled well. And even without RT, pure raster performance is in the gutter way before VRAM limitations kick in. Remember the GTX 1060 6 GB launch? Remember the GTX 980? These cards are tied both in scenarios where 4 GB is enough and in scenarios where 4 GB is not enough. The 7600 is way slower than the 3070 Ti and it can't reasonably saturate its own 8 GB, let alone 16 GB. A couple of titles with a VRAM-hogging habit don't count. Horsepower requirements always grow first.
btk2k2: would very likely cost closer to $400
That's where RX 7900 GRE belongs. Unfortunately, AMD aren't aware of this yet.
#48
btk2k2
Beginner Micro Device: Just no. Double, no, quadruple no. RT and AMD GPUs below 7900 XT don't match together. At 10+ GB VRAM consumption mark at least. I am an RX 6700 XT owner and I know what I'm talking about: this GPU, being VRAM maxed out on RT, gets me way below 20, let alone 30 FPS. This is not playable and is not a valid point.

Speaking RT titles, Avatar: Frontiers of Pandora is well below 60 FPS on RX 6700 XT and yes, we're talking 2000late 1080p resolution. At 4K, its VRAM buffer is still nowhere near being maxed out but this GPU only gets a tad above a dozen FPS. Things are even worse for an RX 7600 with it being under 40 FPS at 1080p and 13 FPS at 4K. Cyberpunk 2077 is also unplayable on both GPUs if we enable RT to the point it eats at least half their VRAM. Control? Unplayable even at 1080p. Software ray traced titles like Resident Evil 4 Remake? Yes, there is an obvious deficite if we're talking 8 GB VRAM GPUs, yet RT is a total gimmick in this particular game.

Even 4060 isn't bought for RT despite handling it way better than BOTH aforementioned GPUs. At below 500-600 USD, gamers don't expect RT to be handled well. And even without RT, pure raster performance is in a gutter way before VRAM limitations kick in. Remember GTX 1060 6 GB launch? Remember GTX 980? These cards are tied both in scenarios where 4 GB is enough and in scenarios where 4 GB is not enough. 7600 is way slower than 3070 Ti and it can't reasonably saturate its own 8 GB, let alone 16 GB. A couple titles with VRAM hogging habit don't count. Horsepower requirements always grow first.


That's where RX 7900 GRE belongs. Unfortunately, AMD aren't aware of this yet.
Ahh okay, you like to snip quotes to take them out of context then argue the strawman you have built with those out of context quotes. Wonderful.
#49
Beginner Macro Device
btk2k2: Ahh okay, you like to snip quotes to take them out of context then argue the strawman you have built with those out of context quotes. Wonderful.
I just pointed out the most screaming nonsense in the first paragraph. Later on, in paragraphs number two and three, I argued with the rest of your passage. It just screams "I don't know how VRAM is used and I am proud of it."
#50
btk2k2
Beginner Micro Device: I just pointed out the most screaming nonsense in the first paragraph. Later on, in paragraphs number two and three, I argued with the rest of your passage. It just screams "I don't know how VRAM is used and I am proud of it."
btk2k2: if your intention is to buy a GPU at a good price and ride it out until the next generation consoles then the 7600XT is the 1st GPU at a reasonable MSRP that has enough VRAM and enough compute power to make it possible. The 6700XT is a good alternative option right now due to it being on sale but that depends on region.
This is pure truth. If someone wants to build a more budget-friendly PC that can keep up with the consoles for the rest of the generation, they now have some options. Before the 6700 XT got as cheap as it is, or before the 7600 XT launch, you had to pay quite a lot more for a GPU with that balance, and it was not worth it over just buying a console unless there were specific PC-only games you wanted.

Sure, you will need to make compromises, but you should be able to maintain console-like IQ and have more FPS, or have the same FPS and higher IQ; sometimes you will be able to turn on RT if the game has a console-like RT setting available (how worthwhile it is to turn on is up to the user).

Neither of these parts will hit VRAM walls at certain settings combinations like the 3070/3070 Ti and other more powerful 8GB cards do. Just look at the Ratchet and Clank example at 1080p. The 6700 XT is 2x faster than the 7600, and the 6700 XT is nowhere near 2x the compute performance. I fully expect the 7600 XT to hit more than 60 fps in that game at that setting, maybe even more like 70+, depending on how VRAM vs compute vs bandwidth limited the bottlenecks are.