
Power without compromise — GeForce RTX 3080 graphics card review

It’s a question that comes up with every upgrade cycle, and the answer is always “It depends”. That question, of course, is “Is this a good time to upgrade my video card?” Well, NVIDIA has announced their GeForce RTX 3080 and 3090 cards, and this time around the answer to that question is a little more straightforward.

By the time you read this, you’ll have a chance at pre-ordering an RTX 3080, but why in the world would you want it? Well, if you are using any card in the 10 series, even a 1080 Ti, it’s well past time. It’d be easy to simply throw benchmark after benchmark at you, point to the numbers, and say “See? Higher numbers!” but that would do a great disservice to you as a consumer, and also to the capabilities of this fantastic piece of hardware. It’s capable of so much more than frames and resolutions, and it’s inside those details that you’ll find why this 3080 would have been worth it at twice the price. To get there, however, we need to take an unexpected detour.

The Xbox Series X and PlayStation 5 are right around the corner, with Microsoft being forced to announce thanks to a leak, and Sony right behind them. These new platforms are starting to encroach on ground that has been completely dominated by high end PCs. 4K/60, 1440p at 120fps, Ray Tracing — these things were barely (and rarely) achievable, even on hardware as powerful as the Xbox One X. The new consoles are starting to breathe some rarefied air, with GDDR6 memory running at 14Gbps and delivering 560GB/s of bandwidth to drive 4K resolutions. Impressive, but why am I telling you, the PC gamer, these things? That’s because the games being built for this upcoming generation will have PS5 and XSX ports, so those consoles also represent the level of hardware being used as the reference point for the next four to five years of development. Sure, there will be PC exclusives, but it’s unlikely that they’ll stray too far from this base level. As such, I’m going to use the published Xbox Series X GPU numbers (they are higher than the PlayStation 5’s) as part of my comparison for the GeForce RTX 3080, alongside the previous generation king (and still a stellar performer in its own right), the 2080 Ti, and I’ll even throw in a 1080 Ti to cover all the bases.

To fully digest what is under the hood of the GeForce RTX 3080, you’ll need to understand a few background concepts. I’ll try to explain them succinctly and in layman’s terms so you don’t need an electronics engineering degree to follow along. Suffice it to say that some nuance may be lost in the summation, but the general idea is correct. If you disagree, hit me up on Discord — I love to talk shop.

Cyberpunk 2077 | Official GeForce RTX 30 Series Gameplay Trailer

CUDA Cores:
Back in 2007, NVIDIA introduced CUDA, or Compute Unified Device Architecture, and with it the CUDA core — a small parallel processing unit. In the simplest of terms, these CUDA cores compute and stream graphics. Data is pushed through SMs, or Streaming Multiprocessors, which feed the CUDA cores in parallel through cache memory. As the CUDA cores receive this mountain of data, they individually grab each instruction and divvy the work up amongst themselves. The RTX 2080 Ti, as powerful as it is, has 4,352 cores, which makes it that much more staggering to see that the GeForce RTX 3080 has a whopping 8,704 of them. Now, understand that more CUDA cores don’t always mean faster, but we’ll get into the other elements that explain why in this case they very much do.
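To make the scale concrete, here is a deliberately crude Python sketch; it ignores warps, occupancy, clock speed, and everything else that actually matters, but it shows how doubling the core count halves the number of passes needed to touch every pixel of a 4K frame:

```python
import math

# Toy model: pretend each CUDA core handles one pixel per pass, so more cores
# means fewer passes to touch every pixel in the frame. (Real GPUs schedule work
# in warps across SMs, so treat this purely as an illustration of scale.)
def passes_needed(pixels: int, cuda_cores: int) -> int:
    return math.ceil(pixels / cuda_cores)

pixels_4k = 3840 * 2160
for name, cores in [("RTX 2080 Ti", 4_352), ("RTX 3080", 8_704)]:
    print(f"{name}: {passes_needed(pixels_4k, cores):,} passes per 4K frame")
```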

The upcoming ninth-generation consoles from Microsoft and Sony are both built on AMD silicon (Zen 2 CPU cores paired with RDNA 2 graphics), and that means there are no CUDA cores — those are NVIDIA exclusive. Instead they use AMD’s rough equivalent, which isn’t as easily measured or compared one-to-one. It’ll be clearer when we get into the head to head comparisons of memory and throughput.

Boost Clock:
The boost clock is hardly a new concept, having been introduced with the GeForce GTX 600 series, but it’s a very necessary part of wringing every frame out of your card. Essentially, the base clock is the “stock” running speed of your card, the speed you can expect at any given time no matter the circumstances. The boost clock, on the other hand, allows the speed to be adjusted dynamically by the GPU, pushing beyond the base clock when additional power is available. This dynamic adjustment happens constantly, allowing the card to push to higher frequencies as long as temperatures remain within tolerance thresholds. With better cooling solutions (more on that later), the GeForce RTX 3080 is capable of driving far higher boost clock speeds, reaching 1710 MHz — an improvement over the 2080 Ti’s 1545 MHz.
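As a rough mental model (and only that; NVIDIA’s actual GPU Boost logic weighs far more inputs), the behavior looks something like this Python sketch. It assumes the 3080’s published 1440 MHz base clock and a purely hypothetical 83 C throttle point:

```python
BASE_CLOCK_MHZ = 1440    # RTX 3080 published base clock (assumed here for illustration)
BOOST_CLOCK_MHZ = 1710   # rated boost clock from the spec sheet
TEMP_LIMIT_C = 83        # hypothetical throttle point, not an official figure

def next_clock(current_mhz: int, temp_c: float, power_headroom_w: float) -> int:
    """Nudge the clock up while thermal and power headroom exist; otherwise back off.
    A toy stand-in for GPU Boost, which re-evaluates this constantly."""
    if temp_c < TEMP_LIMIT_C and power_headroom_w > 0:
        return min(current_mhz + 15, BOOST_CLOCK_MHZ)
    return max(current_mhz - 15, BASE_CLOCK_MHZ)

print(next_clock(1650, temp_c=68.0, power_headroom_w=20.0))   # plenty of headroom -> 1665
print(next_clock(1710, temp_c=85.0, power_headroom_w=5.0))    # too hot -> steps back to 1695
```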

GDDR6 vs. GDDR6X:
You may or may not have noticed, but there is a very subtle difference between the 2080 Ti and 3070 on one hand, and the 3080 and 3090 on the other. The first two use GDDR6 memory — the standard for video memory, and the one used in both upcoming console platforms. The RTX 3080 and 3090, meanwhile, use GDDR6X. What’s the difference? Well, the 2080 Ti is capable of pushing 616 GB/s through its memory pipeline — a fairly staggering amount, but when textures can be massive at 4K or even 8K resolutions, a very necessary rate. The GeForce RTX 3080, on the other hand, is capable of delivering 760 GB/s — a massive improvement over its predecessor. While the 2080 Ti may have had an extra gig of space at 11GB, the RTX 3080’s 10GB of GDDR6X is more densely packed, faster, and more efficient. The team at Micron (the memory’s manufacturer and inventor) has taken a fairly big step forward with this new memory architecture, and it does so by breaking some fundamental rules that have been steadfast up until now.

Binary is what computers speak — 1s and 0s. By sending electrical signals that indicate on or off states, we can create all sorts of things, from programs to graphics to digital sound. Well, Micron asked what would happen if we could measure the state of that electrical charge more precisely and push not just on and off, but four distinct states through the pipeline. GDDR6X is the answer to that question, because it does exactly that. This has never been done before with graphics memory, and since memory forms the pathways for everything a GPU does, it has impacts across the entire platform. Whether you are doing video editing, gaming, or letting your video card do AI work like DLSS, being able to move twice as much data with every bite at the apple is phenomenal. For those of you interested in the math behind the scenes, GDDR6X’s four states (PAM4, or four-level Pulse Amplitude Modulation) are 00, 01, 11, and 10, allowing bandwidth to increase without a significant rise in power or heat, and without sacrificing reliable detection of each state. This is one of the major secret sauce items driving the power of this new card from NVIDIA.
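Here’s a tiny Python sketch of that difference in practice. The level assignments are illustrative; the point is simply that two bits ride on every transfer instead of one:

```python
# GDDR6-style NRZ signaling carries one bit per symbol; GDDR6X's PAM4 carries two,
# using the four Gray-coded states mentioned above. Level numbers here are illustrative.
PAM4_LEVELS = {"00": 0, "01": 1, "11": 2, "10": 3}

def pam4_symbols(bits: str):
    assert len(bits) % 2 == 0, "pad to an even number of bits"
    return [PAM4_LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

payload = "1101001010110001"                      # 16 bits of example data
print(len(payload), "symbols needed with NRZ")     # 16 transfers, one bit each
print(len(pam4_symbols(payload)), "with PAM4")     # 8 transfers, two bits each
```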

What is an RT Core?
Arguably one of the most misunderstood aspects of the RTX series is the RT core. It’s a dedicated unit inside each streaming multiprocessor (SM) where ray and triangle intersections are calculated. Put simply, it’s the math unit that makes realtime lighting work and look its best. Multiple SMs and RT cores work together to interleave the instructions, processing them concurrently, which allows a multitude of light sources intersecting with the objects in the environment to be handled in multiple ways, all at the same time. In practical terms, it means a team of graphical artists and designers don’t have to “hand-place” lighting and shadows, and then adjust the scene based on light intersection and intensity — with RTX, they can simply place the light source in the scene and let the card do the work. I’m oversimplifying it, for sure, but that’s the general idea.
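The fundamental operation being accelerated here is the ray/triangle intersection test. Below is roughly what a single test looks like in software (a standard Möller–Trumbore routine, sketched in plain Python); the RT core’s job is to grind through this kind of math in fixed-function hardware, millions of times per frame:

```python
def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore ray/triangle intersection: returns the distance along the
    ray to the hit point, or None if the ray misses the triangle."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    det = dot(edge1, h)
    if abs(det) < eps:                      # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = sub(origin, v0)
    u = inv_det * dot(s, h)
    if u < 0.0 or u > 1.0:                  # hit point falls outside the triangle
        return None
    q = cross(s, edge1)
    v = inv_det * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = inv_det * dot(edge2, q)
    return t if t > eps else None           # only count hits in front of the ray

# A ray fired straight down the Z axis at a triangle sitting at z = 5:
print(ray_hits_triangle([0, 0, 0], [0, 0, 1], [-1, -1, 5], [1, -1, 5], [0, 1, 5]))  # 5.0
```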

The Turing architecture cards (the 20X0 series) were the first implementations of this dedicated RT core technology. The 2080 Ti had 68 RT cores, delivering 29.9 Teraflops of throughput, whereas the RTX 3080 has 68 2nd-gen RT cores with 2x the throughput of the Turing-based versions, capable of delivering 58 Teraflops of light and shadow processing power, all while running ray tracing, shading, and compute concurrently! This isn’t back of the envelope marketing nonsense — the 3080 is capable of delivering twice the number of ray and triangle intersection calculations, making realtime lighting and shadows that much easier for developers to implement as they build realistic worlds for us to inhabit.

What is a Tensor Core?
Here’s another example of “but why do I need it?” within GPU architecture — the Tensor core. This relatively new technology from NVIDIA saw wider use in high performance supercomputing and data centers before finally arriving on consumer-focused cards with the more powerful 20X0 series. Now, with the RTX 3080, we have the third generation of these processors, and while there are actually fewer of them this time around (the 2080 Ti packed 544 second-gen cores against the 3080’s 272 third-gen Tensor cores), each one is reportedly more than twice as fast. So, what do they do?

NVIDIA DLSS 2.0 | A Big Leap In AI Rendering

Tensor cores are used for AI-driven learning, and we see this more directly applied to gaming via DLSS, or Deep Learning Super Sampling. More than marketing buzzwords, DLSS can take a few frames, analyze them, and then generate a “perfect frame” by interpreting the results using AI, with help from supercomputers back at NVIDIA HQ. The second pass through the process uses what it learned about aliasing in the first pass and then “fills in” what it believes to be more accurate pixels, resulting in a cleaner image that can be rendered even faster. Amazingly, the results can actually be cleaner than the original image, especially at lower resolutions, and having less to process means more frames can be rendered using the power saved. It’s literally free frames. DLSS 3.0 is still swirling in the wind, but very soon we may see this applied more broadly than it is today. We’ll have to keep our eyes peeled for that one, but when it does release, these Tensor cores are the components to do the work. That’s all fancy, but wouldn’t you rather see it in action? Here’s a quick snippet from Control that does exactly that.

Control with DLSS 2.0

DLSS 2.0 was introduced in March of 2020, and it took the principles of DLSS and set out to resolve the complaints users and developers had, while improving the speed. To that end, NVIDIA reengineered the Tensor core pipeline, effectively doubling the speed while still maintaining the image quality of the original, or even sharpening it to the point where it looks better than the source! For the user community, NVIDIA exposed the controls to DLSS, providing three modes to choose from — Performance for maximum framerate, Balanced, and Quality, which aims to deliver the best-looking final image. Developers saw the biggest boon with DLSS 2.0, as they were given a universal AI training network. Instead of having to train against each game and each frame, DLSS 2.0 uses a library of non-game-specific parameters to improve graphics and performance, meaning the technology can be applied to any game should the developer choose to do so. Game development cycles being what they are, and with the tech only hitting the street earlier this year, it’s likely we’ll see more use of DLSS 2.0 during the holiday blitz, and even more after the turn of the year.
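If you’re curious what a Tensor core actually crunches, it boils down to small fused matrix multiply-accumulates, the building block of the neural network inference DLSS leans on. Here’s a toy, pure-Python version of that operation; the real hardware does this on small tiles of low-precision numbers in a single clock, which this sketch obviously doesn’t capture:

```python
# Toy version of the fused multiply-accumulate a Tensor core performs: D = A x B + C.
# Real Tensor cores do this on small matrix tiles in hardware; this is just the math.
def matrix_mma(A, B, C):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) + C[i][j]
             for j in range(cols)]
            for i in range(rows)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]
print(matrix_mma(A, B, C))   # [[19, 22], [43, 50]]
```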

Frametime vs. Framerate:
It’s important to understand that these two terms are not in any way interchangeable. Framerate tells you how many frames are rendered each second, while frametime tells you how long it took to render each individual frame. While there is a great deal of focus on framerate and the resolution at which it’s rendered, frametime should receive equal if not greater attention. When frames take too long to render they can be dropped or fall out of sync, wreaking all sorts of havoc, including stuttering. If frame 1 takes 17ms but frame 2 takes 240ms, that’s going to make for a jittery result. Both are important; don’t become myopically focused on framerate alone, because it only tells half the story: even if a device is capable of delivering 144fps, if it does so unevenly, you’ll see choppy output.
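A quick Python illustration of why the average hides the problem: both of these hypothetical captures report roughly 60fps, but only one of them would feel like it.

```python
# Two hypothetical one-second captures that both report "60 fps".
even   = [1000 / 60] * 60              # every frame takes ~16.7 ms
uneven = [760 / 59] * 59 + [240]       # 59 quick frames plus one 240 ms hitch

for name, times in [("even pacing", even), ("one big hitch", uneven)]:
    fps = len(times) / (sum(times) / 1000)
    print(f"{name}: {fps:.0f} fps average, worst frametime {max(times):.1f} ms")
```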

What is RTX IO?
Right now, whether it’s on PC or consoles, there is a flow of data that is largely inefficient, and our games suffer for it. Storage delivers the goods across the PCIe bus to the CPU and into system memory, where the assets are decompressed. Those decompressed textures are then passed back across the PCIe bus to the GPU, which hands them off to GPU memory. Once that’s done, they make it to your eyeballs via your monitor. Microsoft has a new storage API called DirectStorage that allows NVMe SSDs to bypass this process. Combined with NVIDIA RTX IO, the compressed textures instead go from the high-speed storage across the PCIe bus and directly to the GPU. Assets are then decompressed by the far faster GPU and delivered immediately to your waiting monitor. Cutting out this back-and-forth business frees up power that can be used elsewhere — NVIDIA estimates up to a 100X improvement. When developers talk about being able to reduce install sizes, that comes directly from this technology. So, what’s the catch?

NVIDIA RTX IO is an absolutely phenomenal bit of technology, but it’s so new that nobody has it ready for primetime. As a result, I can only tell you that it’s coming and talk about how awesome the concept and technology is; I can’t test it for you yet. That said, as storage shifts towards incredibly high speed drives that unfortunately have very low capacity, you can bet we’ll see this come to life, and quickly. Stay tuned on this — it could be a huge win for gamers everywhere.
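Until it’s testable, a simple way to picture the win is to count the traffic crossing the PCIe bus in each path. The sizes below are made up purely for illustration, but the shape of the saving is the point:

```python
# Hypothetical asset load: 2 GB compressed on disk, 4 GB once decompressed.
compressed_gb, decompressed_gb = 2.0, 4.0

# Today's path: compressed data crosses PCIe to the CPU/system RAM, then the
# decompressed copy crosses PCIe again on its way to GPU memory.
legacy_traffic = compressed_gb + decompressed_gb

# DirectStorage + RTX IO path: only the compressed data crosses the bus,
# and the GPU decompresses it on arrival.
rtx_io_traffic = compressed_gb

print(f"Legacy path: {legacy_traffic:.0f} GB over PCIe | RTX IO path: {rtx_io_traffic:.0f} GB")
```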

A little side-by-side:

Before we get to the meat and potatoes of benchmarking, I think it’s a good time to do some side-by-side comparison between the upcoming consoles, the GeForce RTX 2080 Ti, and the GeForce RTX 3080. While it’s not necessarily an apples-to-apples comparison, this will give you an idea of the capabilities and outputs versus NVIDIA’s newest hardware.

 

                       | GeForce RTX 3080 | GeForce RTX 2080 Ti | Xbox Series X | PlayStation 5
NVIDIA CUDA Cores      | 8,704            | 4,352               | N/A           | N/A
Boost Clock (MHz)      | 1,710            | 1,545               | 1,825         | 2,233
Standard Memory Config | 10 GB GDDR6X     | 11 GB GDDR6         | 16 GB GDDR6   | 16 GB GDDR6
Memory Interface Width | 320-bit          | 352-bit             | 320-bit       | 256-bit
Memory Speed           | 19 Gbps          | 14 Gbps             | 14 Gbps       | 14 Gbps
Memory Bandwidth       | 760 GB/s         | 616 GB/s            | 560 GB/s      | 448 GB/s
Maximum Resolution     | 7680×4320        | 7680×4320           | 3840×2160     | 3840×2160
RT TFLOPS*             | 58               | 26.9                | 12.15         | 10.28

*The console figures are their published total GPU compute TFLOPS, as neither platform breaks out a separate ray tracing throughput number.
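Those bandwidth figures aren’t magic, by the way; they fall straight out of bus width times memory speed, as a couple of lines of Python will confirm against the table above:

```python
# Memory bandwidth = interface width (in bytes) x per-pin data rate.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gbps

for name, width, rate in [("RTX 3080", 320, 19), ("RTX 2080 Ti", 352, 14),
                          ("Xbox Series X", 320, 14), ("PlayStation 5", 256, 14)]:
    print(f"{name}: {bandwidth_gb_s(width, rate):.0f} GB/s")   # 760, 616, 560, 448
```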

It isn’t a huge surprise that a card that costs more than the entire console would outperform it, but to see it deliver this much more power is hard to believe. At nearly double the power, it’s capable of delivering framerates and resolutions that Sony and Microsoft’s new consoles could only dream of hitting. Without further ado, let’s break out some games that will push any system to its limits and see how each performs at 1080p, 1440p, and 4K.

Benchmarking:

Analysis:

There are a lot of interesting bits of information to glean from these benchmarks. Sure, the standouts are the big jumps in the numbers, but they also point to something subtler. Games that take advantage of new lighting tech like RTX, rather than performing complex traditional calculations for lighting and shadows, see a sizable boost over their contemporaries without that tech. By way of example, Assassin’s Creed Odyssey is notoriously CPU-bound, so throwing more hardware at it will only go so far. If the game were to take advantage of tech like RTX, we could see that constraint removed, likely granting huge gains.

Also interesting in this list is the performance of Red Dead Redemption 2. As you saw in our in-depth look at the PC version, Rockstar’s wild west adventure will bring just about any video card to its knees. Well, patches have abounded since launch, and more than a few driver updates have brought better optimization; we now see it hit 4K/60 on PC with only a few settings toggled down on the 2080 Ti. With the RTX 3080 you won’t even have to bother with that — slide everything to the right, turn on DX12 mode, and enjoy 4K gaming at 60fps without compromise. Absolutely breathtaking.

Shadow of the Tomb Raider was one of the first games to take advantage of RTX lighting, and as such has had the most time to refine it. Not only is 4K/60 at maximum quality possible, you can hit those numbers with frames to spare on cards with RTX. You can see this when you look at the 4K numbers on the 1080 Ti. While it remains an incredibly powerful card to this day, without RTX tech and a whole bank of Tensor and RT cores to offload the work, it manages around 40fps at 4K, and without all the bells and whistles.

Let’s take a second and talk about bells and whistles. KeokeN Interactive delivered a stress-inducing lunar adventure with their game Deliver Us the Moon. It’s a good looking game, to be sure, but it is easily the best showcase of why RTX is, without being in any way hyperbolic, a game changer. Watch this video:

Deliver Us The Moon RTX Launch Trailer

In terms of immersion, there’s nothing quite like this sort of jump. Immersion is about suspension of disbelief, and making your moon-bound astronaut’s reflection appear in the glass of the habitation unit helps reinforce the cold reality of being so completely alone on your mission. Seeing the reflection of the status panel flicker in the glass is an off-camera reminder that you are one small failure from certain death. Light and shadow bending through the refraction of glass to remind you that your station is spinning as it hurtles around our closest heavenly body through the vacuum of space. Music can make or break the tension of a scene, but seeing games like Deliver Us the Moon with realtime lighting and effects makes it impossible to go back.

As a point of fact, many of the games on this list suffer from being CPU-bound. The interchange between the CPU, memory, GPU, and storage medium creates several bottlenecks that can interfere with your framerate. If your graphics card can ingest gigs of information every second, but your crummy mechanical hard drive struggles to hit 133 MB/s, you’ve got a problem. If you are using a high-speed Gen3 or Gen4 m.2 SSD, that drive can hit 3 or even 4 GB/s, but if your older processor isn’t capable of keeping up, it’s not going to be able to fill your GPU either. And even supposing you’ve got a shiny new 10th Gen Intel CPU, you may be surprised that it may still not be fast enough for what’s under the hood of this RTX 3080. RTX IO, discussed above, is how NVIDIA plans to attack this dilemma, but put simply, NVIDIA is once again ahead of the power curve, and processors, memory, and even the PCIe bus need to catch up before all this power gets fully utilized, much less taxed.
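To put those drive speeds in perspective, here’s the raw time just to read a hypothetical 10GB chunk of level data at each rate, before a single byte gets decompressed or rendered:

```python
# Hypothetical 10 GB of level data, read at the drive speeds mentioned above.
drive_speeds_mb_s = {"mechanical HDD": 133, "PCIe Gen3 NVMe": 3000, "PCIe Gen4 NVMe": 4000}
level_size_mb = 10 * 1024

for name, speed in drive_speeds_mb_s.items():
    print(f"{name}: {level_size_mb / speed:.1f} s just to read 10 GB")
```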

NVIDIA Broadcast:
There is an added bonus that comes with the extra power of the Tensor cores. While the AI cores handle DLSS, they also power some additional cleanup in an app NVIDIA is calling “NVIDIA Broadcast”. This app can improve the output of your microphone, speakers, and camera by applying AI learning in much the same way we see in the examples above. It can strip static or background noise from your audio, fix artifacting when you stream with a green screen (or without one, as Broadcast can simply apply a virtual background), filter distracting noise from another person’s audio, and even perform a bit of head-tracking to keep you in frame if you’re the type that moves around a lot. It’s not just a gaming-focused application, either; this works for any video conferencing, so feel free to Zoom away while we are all stuck inside.

Inside the application are three tabs – microphone, speakers, and camera. Microphone is supposed to let you remove background noise from your audio. I use a mechanical keyboard and I’ve been told more than once that it’s loud. Well, even with this enabled, it’s still loud — my Seiren Elite doesn’t miss a single sound.

The second tab is speakers which is supposed to reduce the amount of noise coming from other sources. I found this to be fairly effective, removing obnoxious typing noises from others — hopefully I can do the same for them one day.

The final tab is where the magic happens. Under the Camera tab you can blur your background, replace it with a video or image, simply delete the background, or “auto-frame”. Calling software like Zoom and Skype can do this as well, but even at this early stage I can say it doesn’t do it this well. Better still, getting it pushed into OBS was as simple as selecting a device, choosing “Camera NVIDIA Broadcast”, and it was done. It doesn’t get any easier than this.

The software is still in testing and development, so I’m not sure when you’ll get your hands on it, but I was incredibly impressed with the results. I saw a marginal (~2%) reduction in framerate when recording with OBS — a pittance when games like Overwatch and Death Stranding are punching above 150fps, and it works with fairly terrible lighting like I have in my office. A properly lit room will look miles better.

Cooling and noise:

It’s one thing to deliver blisteringly fast frame rates and eye-popping resolutions, but if you do it while also blowing out my eardrums with a high pitched whine as your Harrier Jet-esque fans spin up to cool, we’ve got a problem. Thankfully NVIDIA realized this, as well as the need to cool a staggering 28 billion transistors (the 2080 Ti had 18.9 billion), and they redesigned the 3080 to match. The card has a solid body construction with a new airflow system that displaces heat more efficiently, and somehow does it while being 10dB quieter than its predecessor. I’d normally point to that being a marketing claim, but I measured it myself.

Normal airflow through a case with a standard setup starts with intake at the front, pushing air over the hard drives and, hopefully, out the back. I can tell you that the case I picked up from Cooler Master arrived incorrectly assembled, with the 200mm fan on top blowing air back into the case, so it’s always best to check the arrows on the side to make sure your case follows this path. The 2080 Ti’s design was a solid board running the length of the card with fans on the bottom pushing heat away. Unfortunately, that means the hot air circulates back into the case’s airflow path, having to billow back up and travel out through the top and rear case fans.

The RTX 3080’s airflow system is a thing of beauty. Since the circuit board has effectively been shrunk in half thanks to the 8nm manufacturing process, there’s a lot of leftover real estate for a larger cooling block and fins, as well as a fan system to match. The fan on the top of the card (once it is mounted in the case) draws air up from the bottom, through the hybrid vapor chamber heat pipe (since there’s no longer a long circuit board to obstruct the airflow), and pushes it directly into the normal case airflow path. The second fan, located on the bottom of the card and closest to the mounting bracket at the back of the case, draws air in as well, but instead of passing it through the card, it pushes the excess heat out of the rear of the card through a dedicated vent. Observe:

You can see the results while running benchmarks. The card never climbed above 79 C, no matter how hard I pushed it or at what resolution I ran it, staying in the low to mid 70s nearly all the time, and sitting at an amazing 35 C when idle. More than once I’ve peeked into my case and seen that the card’s fans aren’t even spinning — this is one amazing piece of engineering.

Gaming without compromise

As the PlayStation 5 and Xbox Series X launch, you are going to hear a non-stop barrage of advertisements around 4K gaming and how both platforms can deliver 120fps. What we’ve seen in the few hands-on moments we’ve had with the new consoles is that capability doesn’t always match reality. Dirt 5, ExoMecha, Gears 5, Halo Infinite (multiplayer), Ori and the Will of the Wisps, Orphan of the Machine, Second Extinction, and Metal: Hellsinger are all purported to run at 120fps and at 4K resolution, but that’s a very short list. Upgrades to already-released titles may pad this out some, but there’s an underlying issue — HDMI 2.1, your TV, and your receiver. Much like the issue with processors, RAM, and motherboards, all of your components need to match to maximize what your GPU can produce. On a console all of that is self-contained and set in stone, but the problem remains. While both consoles support HDMI 2.1, the newest standard capable of delivering up to 8K output, your TV and receiver also have to support it — and very few do. In reality, you likely have a 4K TV which supports 60Hz, if even that. This brings me to the GeForce RTX 3080.

NVIDIA’s newest series of cards need hardware to match. They support HDMI 2.1, 8K outputs, high refresh rate and high resolution monitors, and they do it while having RTX and DLSS enabled. Seeing DOOM Eternal in 8K is something to behold, but most of us are a long way away from realizing that at home. What we can do, however, is enjoy 4K gaming at 60+ (often WAY past 60) without compromise. Sure, the newest consoles will advertise 4K/60 or even 4K/120, but can they do it with all the sliders to the right? I sincerely doubt it.

For what it’s worth, seeing Doom Eternal run at 150+fps at 4K, maxing out the refresh on my 144Hz monitors is the way gaming should be. It’s not just a carefully-crafted tech demo — it’s real, and it’s here right now. Frames make games, and it’s never been more true than in id’s hell shooter.

DOOM Eternal | Official GeForce RTX 3080 4K Gameplay - World Premiere

There is something to note about the GeForce RTX 3080 — the power requirements. All of this insane capability doesn’t come without cost — you’ll need a 750W PSU to bring this beast to life. If you’ve been plugging along on that 550W PSU for years, now’s the time to put some oomph behind your components.

I do have one very tiny complaint about the GeForce RTX 3080, and I suspect the other cards in the family currently have this issue as well — they’ve removed the dedicated VR port. Being able to plug in a dongle to extend my Oculus Rift connections away from the back of my machine was a real boon, and unfortunately that’s gone. It will be sorely missed, but playing VR games at framerates that won’t make me nauseous seems to numb the pain just fine.

Price to Performance:

Make no bones about it, cards like the Titan, the 2080 Ti, and the 3090 are expensive. They represent the approach of not asking whether they should, only whether they could. I love that kind of crazy, as it really pushes the envelope of technology and innovation. NVIDIA held an 81% share of the GPU market last year, and they easily could have sat back, iterated on the 2080’s power, and delivered a lower cost version with a few new bells and whistles attached. That’s not what they did. They owned the field and still came in with a brand new card that blew their own previous models out of the water. The GeForce RTX 3070 has more power than the 2080 Ti and it costs $500, versus the $1200+ you’ll fork out to get your hands on the previous generation’s king. Similarly, the RTX 3080 eclipses everything on the market, even their Titan RTX, and at $699 it beats it and takes its lunch money. We haven’t seen a generational leap like this, maybe ever. The fact that NVIDIA priced it the way they did makes me think they had a reason, and I don’t think that reason is AMD.

Sure, I’m certain the green team is worried about how the new generation of consoles could impact their market, but as someone who has worked in tech for a very long time, I can see another reason. When you go to a theme park there are signs that say “You must be this tall to ride this ride,” and they are there for your safety. But safety is rarely fun or exciting. We’ve been supporting old technology like mechanical hard drives and outdated APIs for a very long time. Windows 10 came out five years ago, but there are still plenty of folks who want to use Windows 7. Not pushing the envelope stifles innovation, and it stops us from realizing the things we could achieve. By occasionally raising that “this tall” bar to introduce a new day and a new way, we send a message to consumers that it’s time to upgrade, and we send a strong signal to developers that they can push their own envelopes. That’s how we get games like Cyberpunk 2077, and it’s how we see lighting like we do in Watch Dogs: Legion. It’s what takes us from this, to this. If it’s time to raise the bar, the GeForce RTX 3080 might have kicked it so high it’ll take us years to catch up.

Executive Director and Editor-in-Chief | [email protected]

Ron Burke is the Editor in Chief for Gaming Trend. Currently living in Fort Worth, Texas, Ron is an old-school gamer who enjoys CRPGs, action/adventure, platformers, music games, and has recently gotten into tabletop gaming.

Ron is also a fourth degree black belt, with a Master's rank in Matsumura Seito Shōrin-ryū, Moo Duk Kwan Tang Soo Do, Universal Tang Soo Do Alliance, and International Tang Soo Do Federation. He also holds ranks in several other styles in his search to be a well-rounded fighter.

Ron has been married to Gaming Trend Editor, Laura Burke, for 27 years. They have three dogs - Pazuzu (Irish Terrier), Atë, and Calliope (both Australian Kelpie/Pit Bull mixes).

95

Excellent

GeForce RTX 3080

Review Guidelines

With staggering performance across all platforms, and unbelievable gains for anything supporting RTX, the GeForce RTX 3080 is a technical marvel. Packed with a developer’s wish list of features, and cooled by a quieter and more efficient cooling system, the 3080 delivers 4K/60 and far more, and it does it without compromise. The fact that it also does it at a price the average person might afford makes it the best way to enjoy any game you could possibly want now, and for the foreseeable future.

Ron Burke

Unless otherwise stated, the product in this article was provided for review purposes.
