Is an RTX 2060 laptop future-proof?


Talk to anyone about building a new PC, and the question of longevity is going to pop up sooner rather than later. Any time someone is dropping serious cash on a hardware upgrade, they're going to have questions about how long it will last them, especially if they've been burned before. But how much additional value is it actually possible to squeeze out of the market by trying to future-proof, and does it actually benefit the end user?

Before I dive in on this, let me establish a few ground rules. I'm drawing a line between buying a little more hardware than you need today because you know you'll have a use for it in the future and attempting to choose components for specific capabilities that you hope will become useful in the future. Let me give an example:

If you buy a GPU suitable for 4K gaming because you intend to upgrade your 1080p monitor to 4K within the next three months, that's not future-proofing. If you bought a Pascal GPU over a Maxwell card in 2016 (or an AMD card over an NV GPU) specifically because you expected DirectX 12 to be the Next Big Thing and were attempting to position yourself as ideally as possible, that's future-proofing. In the first case, you made a decision based on the already-known performance of the GPU at various resolutions and your own self-determined buying plans. In the second, you bet that an API with largely unknown performance characteristics would deliver a decisive advantage without having much evidence as to whether or not this would be true.

Note: While this article makes frequent reference to Nvidia GPUs, this is not to imply Nvidia is responsible for the failure of future-proofing as a strategy. GPUs have advanced more rapidly than CPUs over the past decade, with a much higher number of introduced features for improving graphics fidelity or game performance. Nvidia has been responsible for more of these introductions, in absolute terms, than AMD has.

Let's whack some sacred cows:

DirectX 12

In the beginning, there were hopes that Maxwell would eventually perform well with DX12, or that Pascal would prove to use it effectively, or that games would adopt it overwhelmingly and quickly. None of these has come to pass. Pascal runs fine with DX12, but gains in the API are few and far between. AMD still sometimes picks up more than NV does, but DX12 hasn't won wide enough adoption to change the overall landscape. If you bought into AMD hardware in 2013 because you thought the one-two punch of Mantle and console wins was going to open up an unbeatable Team Red advantage (and this line of argument was commonly expressed), it didn't happen. If you bought Pascal because you thought it would be the architecture to show off DX12 (as opposed to Maxwell), that didn't happen either.

Now, to be fair, Nvidia's marketing didn't push DX12 as a reason to buy the card. In fact, Nvidia ignored inquiries about its support for async compute to the maximum extent allowable by law. But that doesn't change the fact that DX12's lackluster adoption to date and limited performance uplift scenarios (low-latency APIs improve weak CPU performance more than GPUs, in many cases) aren't a great reason to have upgraded back in 2016.

DirectX 11

Remember when tessellation was the Next Big Thing that would transform gaming? Instead, it alternated between having a subtle impact on game visuals (with a mild performance hit) or serving as a way to make AMD GPUs look really bad by stuffing unnecessary tessellated detail into flat surfaces. If you bought an Nvidia GPU because you thought its enormous synthetic tessellation performance was going to yield actual performance improvements in shipping titles that hadn't been skewed by insane triangle counts, you didn't get what you paid for.

DirectX 10

Anybody remember how awesome DX10 performance was?

Anybody?

If you snapped up a GTX 8xxx GPU because you thought it was going to deliver great DX10 performance, you ended up disappointed. The only reason we can't say the same of AMD is because everyone who bought an HD 2000 series GPU ended up disappointed. When the first generation of DX10-capable GPUs often proved incapable of using the API in practice, consumers who'd tried to future-proof by buying into a generation of very fast DX9 cards that promised future compatibility instead found themselves with hardware that would never deliver acceptable frame rates in what had been a headline feature.

This is where the "Can it play Crysis?" meme came from. Image by CrysisWiki.

This list doesn't just apply to APIs, though APIs are an easy example. If you bought into first-generation VR because you expected your hardware would carry you into a new era of amazing gaming, well, that hasn't happened yet. By the time it does, if it does, you'll have upgraded your VR headsets and the graphics cards that power them at least once. If you grabbed a new Nvidia GPU because you thought PhysX was going to be the wave of the future for gaming experiences, sure, you got some use out of the feature, just not nearly the experience the hype train promised way back when. I liked PhysX (I still do), but it wound up being a mild improvement, not a major must-have.

This issue is not confined to GPUs. If you purchased an AMD APU because you thought HSA (Heterogeneous System Architecture) was going to introduce a new paradigm of combined CPU/GPU problem solving, five years later, you're still waiting. Capabilities like Intel's TSX (Transactional Synchronization Extensions) were billed as eventually offering performance improvements in commercial software, though this was expected to take time to evolve. Five years later, however, it's as if the feature vanished into thin air. I can find just one recent mention of TSX being used in a consumer product: it turns out TSX is incredibly useful for boosting the performance of the PS3 emulator RPCS3. Great! But not a reason for most people to buy in. Intel also added support for rasterizer ordered views years ago, but if a game ever took advantage of them, I'm not aware of it (game optimizations for Intel GPUs aren't exactly a huge topic of discussion, generally speaking).

You might think this is an artifact of the general slowdown in new architectural improvements, but if anything the opposite is true. Back in the days when Nvidia was launching a new GPU architecture every 12 months, the chances of squeezing support for a just-demonstrated feature into a brand-new GPU were even worse. GPU performance often nearly doubled every year, which made buying a GPU in 2003 for a game that wouldn't ship until 2004 a really stupid move. In fact, Nvidia ran into exactly this problem with Half-Life 2. When Gabe Newell stood on stage and demonstrated HL2 back in 2003, the GeForce FX crumpled like a beer can.

I'd wager this graph sold more ATI GPUs than most ad campaigns. The FX 5900 Ultra was NV's top GPU. The Radeon 9600 was a midrange card.

Newell lied, told everyone the game would ship in the next few months, and people rushed out to buy ATI cards. Turns out the game didn't actually ship for a year, and by the time it did, Nvidia's GeForce 6xxx family offered far more competitive performance. An entire new generation of ATI cards had also shipped, with support for PCI Express. In this case, everyone who tried to future-proof got screwed.

There's one arguable exception to this trend that I'll address directly: DirectX 12 and asynchronous compute. If you bought an AMD Hawaii GPU in 2012-2013, the advent of async compute and DX12 did deliver some performance uplift to these solutions. In this case, you could argue that the relative value of the older GPUs increased as a result.

But as refutations go, this is a weak one. First, the gains were limited to only those titles that implemented both DX12 and async compute. Second, they weren't uniformly distributed across AMD's entire GPU stack, and higher-end cards tended to pick up more performance than lower-end models. Third, part of the reason this happened is that AMD's DX11 driver wasn't multi-threaded. And fourth, the modest uptick in performance that some 28nm AMD GPUs enjoyed was neither enough to move the needle on those GPUs' collective performance across the game industry nor sufficient to argue for their continued deployment overall relative to newer cards built on 14/16nm. (The question of how quickly a component ages, relative to the market, is related-but-distinct from whether you can future-proof a system in general.)

Now, is it a great thing that AMD's 28nm GPU customers got some love from DirectX 12 and Vulkan? Absolutely. But we can acknowledge some welcome improvements in specific titles while simultaneously recognizing the fact that only a relative handful of games have shipped with DirectX 12 or Vulkan support in the past three years. These APIs could still become the dominant method of playing games, but it won't happen within the high-end lifespan of a 2016 GPU.

Optimizing Purchases

If you want to maximize your extracted value per dollar, don't focus on trying to predict how performance will evolve over the next 24-48 months. Instead, focus on available performance today, in shipping software. When it comes to features and capabilities, prioritize what you're using today over what you hope to use tomorrow. Software roadmaps get delayed. Features are pushed out. Because we never know how much impact a feature will have or how much it'll actually improve performance, base your buying decision solely on what you can test and evaluate at the moment. If you aren't happy with the amount of performance you'll get from an upgrade today, don't buy the product until you are.

Second, understand how companies price their products and which features are the expensive ones. This obviously varies from company to company and market to market, but there's no substitute for it. In the low-end and midrange GPU space, both AMD and Nvidia tend to increase pricing linearly alongside performance. A GPU that offers 10 percent more performance is typically 10 percent more expensive. At the high end, this changes, and a 10 percent performance improvement might cost 20 percent more money. As new generations appear and one generation's premium performance becomes the next generation's midrange, the cost of that performance drops. The GTX 1060 and GTX 980 are an excellent example of how a midrange GPU can hit the performance target of the previous high-end card for significantly less money, less than two years later.
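The pricing arithmetic above is easy to sanity-check yourself before buying. Here's a minimal sketch in Python; all prices and frame rates below are made-up illustrative numbers, not real benchmark data:

```python
def perf_per_dollar(avg_fps: float, price: float) -> float:
    """Average frames per second bought per dollar spent."""
    return avg_fps / price

# Midrange tier: price tends to scale roughly linearly with performance,
# so value per dollar stays about flat between adjacent cards.
mid_a = perf_per_dollar(avg_fps=60, price=250)   # hypothetical baseline card
mid_b = perf_per_dollar(avg_fps=66, price=275)   # ~10% faster, ~10% pricier

# High-end tier: 10 percent more performance can cost 20 percent more money,
# so value per dollar drops as you climb the stack.
high_a = perf_per_dollar(avg_fps=100, price=700)
high_b = perf_per_dollar(avg_fps=110, price=840)  # +10% perf, +20% price

print(f"midrange: {mid_a:.3f} vs {mid_b:.3f} fps/$")
print(f"high end: {high_a:.3f} vs {high_b:.3f} fps/$")
```

Run this with real street prices and benchmark numbers for the cards you're comparing, and the point in the paragraph above falls out of the math: the midrange cards come out roughly even on fps per dollar, while the top-tier step-up costs you value.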

Third, watch product cycles and time your purchasing accordingly. Sometimes, the newly inexpensive last-generation product is the best deal in town. Sometimes, it's worth stepping up to the newer hardware at the same or slightly higher price. Even the two-step upgrade process I explicitly declared wasn't future-proofing can run into trouble if you don't pay close attention to market trends. Anybody who paid $1,700 for a Core i7-6950X in February 2017 probably wasn't thrilled when the Core i9-7900X dropped a few months later with higher performance and the same 10 cores for just $999, to say nothing of the hole Threadripper blew in Intel's HEDT product family by offering 16 cores instead of 10 at the same price.

Finally, remember this fact: It is the literal job of a company's marketing department to convince you that new features are both overwhelmingly awesome and incredibly important for you to own right now. In real life, these things are messier, and they tend to take longer. Given the relatively slow pace of hardware replacement these days, it's not unusual for it to take 3-5 years before new capabilities are widespread enough for developers to treat them as the default option. You can avoid that disappointment by buying the performance and features you need and can get today, not what you want and hope for tomorrow.

Update (7/21/2021):

It's now been over two years since we first wrote this piece, although we've updated it in the interim, and since it's an article about future-proofing, it makes thematic sense to return to our own conclusions and survey whether things have changed.

They have not. That's before we take into consideration the ruinous impact of high prices on the retail PC industry. Right now, chances are good that you'll pay far more for any GPU than you would have a year ago, and future-proofing under these conditions is impossible. The best way to future-proof yourself at the moment is to buy as little hardware as you can get away with, at least as far as graphics are concerned, and wait for prices to come down.

Nvidia's RTX 3000 series, which launched in the fall of 2020, dramatically improved ray tracing performance and performance per dollar compared with Nvidia's previous generation of Turing cards. When Nvidia launched its Turing architecture, it argued that buying into the GPU family would unlock a gorgeous future of ray-traced games. In reality, only a modest handful of games shipped with ray tracing support during Turing's life, and Ampere's strongest gains over Turing are often in ray-traced games.

There is nothing wrong with buying an expensive GPU because you want the best card. There's nothing wrong with choosing to buy in at the top of a market because you want the best performance possible and are willing to sacrifice performance per dollar to reach a given performance target. But anyone who bought an RTX 2080 Ti to make certain they'd be able to play ray-traced titles as long as possible would have done better to hold on to a Pascal-era GPU in 2018 and then buy Ampere in 2021-2022, assuming availability has improved by then.

Now Read:

  • GPU Prices Are Stuck Well Above MSRP
  • How to Buy the Right Video Card for Your Gaming PC
  • What is DirectX 12?
