Our Verdict
With no single-GPU AMD competition, Nvidia's GTX Titan X is simply the most efficient and most elegant 4K gaming card around.
Got some spare cash burning a sizable hole in your pocket? That's lucky, because Nvidia's latest supercomputer graphics card has just arrived. This time, it's got an X tacked onto the end. You know. For extreme.
Nvidia’s Titan range of graphics cards has always been more about psychology than practical gaming. They’re top-of-the-line GPUs designed to be objects of desire, not something every PC gamer is necessarily going to be in a rush to order. The Titan X is another ‘ultra-enthusiast’ GPU, with a crazy-high $999 price tag and an Nvidia assertion that it’s “an extremely high-end graphics card that probably won't appeal to those who are price conscious.” The psychology is in creating something that practically all gamers will want, but few can afford.
And those that can’t? Well, they’ll just look a little further down the product stack and choose a card with essentially the same DNA.
With the GTX Titan X, Nvidia is not messing around with that DNA, and it's not waiting for AMD to get around to finally releasing their competing range of new GPUs either. Nvidia has unleashed the full power of the Maxwell GPU architecture in this latest Titan. Rocking a brand new GPU, the GM 200, the GTX Titan X is the pinnacle of their latest graphics tech. But Nvidia is also heralding the GTX Titan X as the ultimate PC gamer’s graphics card—one that will blaze through today’s games at top 4K settings and still be capable of doing so a couple of years down the line.
Does it succeed? Well, the Titan X is indisputably the new single-GPU champion, but even this $999 extreme Titan can't quite handle 4K, 60 fps gaming on ultra settings by its lonesome.
The core of it all
There’s no messing around with the latest Maxwell core. Right off the bat Nvidia is delivering the complete GM 200 chip. There are no SMMs left in the factory, no CUDA cores shut down to boost yields and no shenanigans on the memory side. We’re promised it’s a full 12GB graphics card, not a 10+2GB one.
And there’s no reason why it shouldn’t be. The Maxwell architecture is a mature one, having started life in the GTX 750 Ti back when the Titan Black was being unveiled, and so is the 28nm production process the chips are manufactured on. That means yields out of the factories should be excellent, and that Nvidia ought to be able to produce as many of these full GM 200 cores as they like.
Inside that matte black aluminium shell is a GPU with a full eight billion transistors. That’s almost three billion more than the GTX 980, nearly two billion more than the Hawaii XT core at the top of AMD’s current tech tree and around a billion more than the original Kepler-based Titans.
Because it’s still running on the same production process as the previous Kepler generation, it’s also a bigger slice of silicon. The chunky GK110 GPU ran to around 561mm², while the GM 200 is some 601mm².
It’s hands-down the biggest gaming GPU around, in many different ways.
Inside that chunky chip are 24 streaming multiprocessors (SMMs) in six graphics processing clusters (GPCs). With the Maxwell design running to 128 CUDA cores in each SMM, that makes for a grand total of 3,072 cores in the GTX Titan X. Completing the core configuration are 192 texture units and 96 ROPs.
That’s a whole heap of graphics processing power right there.
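The arithmetic behind that core count is simple to check. The sketch below just restates the figures quoted above; the four-SMMs-per-GPC split is inferred from the 24-in-six layout rather than stated in this review:

```python
# Back-of-the-envelope core count for the full GM 200 chip,
# using the configuration quoted in the review (illustrative only).
gpcs = 6                 # graphics processing clusters
smm_per_gpc = 4          # 24 SMMs spread across 6 GPCs
cuda_cores_per_smm = 128 # Maxwell packs 128 CUDA cores into each SMM

total_smms = gpcs * smm_per_gpc                 # 24
total_cores = total_smms * cuda_cores_per_smm   # 3,072

print(total_smms, total_cores)  # 24 3072
```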
Nvidia have also doubled the size of the frame buffer compared to the previous Titan cards, maxing out at 12GB of GDDR5 memory running across six 64-bit memory controllers to deliver an aggregated 384-bit memory bus.
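That bus arithmetic, and the peak bandwidth it implies, can be sketched as below. The 7Gbps effective GDDR5 data rate is an assumption drawn from the card's published spec sheet, not a figure quoted in this review:

```python
# Aggregate bus width and a rough peak-bandwidth estimate for the Titan X.
controllers = 6
width_per_controller = 64  # bits per memory controller
bus_width = controllers * width_per_controller  # 384-bit total

# Assumed effective GDDR5 data rate per pin (Gbps) — from the spec
# sheet, not from this review.
effective_rate_gbps = 7.0
bandwidth_gb_s = bus_width / 8 * effective_rate_gbps  # bytes/cycle * rate

print(bus_width, bandwidth_gb_s)  # 384 336.0
```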
That memory capacity might look like big numbers for their own sake, but consider: we thought it would be a long time before the original Titan’s 6GB frame buffer was anywhere near fully utilised, yet right now Shadow of Mordor is filling around 5.7GB with the HD texture pack at Ultra 4K settings. We may be only a couple of years away from 12GB actually being used. For now, 12GB is more future-proofing than anything else.
What’s designed for the here and now is the speed at which the GM 200 is running inside the GTX Titan X. While the first Titan’s GK 110 GPU was clocked significantly lower than the GTX 680’s GK 104 chip, the GM 200 is coming out of the blocks with a 1GHz clockspeed. And, because we’re talking efficient Maxwell silicon here, this thing boosts way past that.
The somewhat conservative Nvidia estimate has it boosting to around 1,075MHz on average, but in my testing Bioshock Infinite’s inefficient engine was the only thing to bring it down to that level. Pretty much every other game in my testing suite was hovering around the 1,164MHz mark.
What does that all mean in terms of performance? Is the GTX Titan X the fastest graphics card on the planet?
Well, no.
It’s the fastest single GPU on the planet, but history always has a way of repeating itself and there is a faster card out there, one packing a pair of GPUs onto a single slice of PCB. Just like when the first GTX Titan was unleashed.
A Titan history lesson
Back in 2013, the first Titan followed Nvidia’s dual-GPU GTX 690, a card packing in two GK 104 GPUs and offering serious gaming frame rates. The inaugural Titan wasn’t quite able to deliver the same level of performance, but it wasn’t far off, packed in more video memory and was capable of running with less power, all on a single graphics processor. That meant it wasn’t the fastest overall card, but it was a much smarter choice for us gamers because you weren’t suffering the vagaries of multi-GPU gaming.
And it’s almost an identical situation today with the GTX Titan X. Except the competing card doesn’t come from within Nvidia’s own stables, it’s AMD’s Radeon R9 295X2. At 4K settings the Radeon card really does have the edge when it comes to average framerates. Its pair of Hawaii GPUs and twin 4GB frame buffers means it can push past the GTX Titan X in most of our gaming benchmarks. It was the first card to really make a case for running games at max 4K settings on a single graphics card, and by those metrics it looks like the GTX Titan X can’t compete.
But average framerates are not the only metrics by which GPUs should be measured. The difference between the minimum and average framerates can be just as important in showing just how smooth an experience you’re going to get, and often the Nvidia GPU can boast significantly higher scores.
Battlefield 4 is a prime example. At 4K Ultra the Radeon R9 295X2 is running with an average FPS of 60 while the GTX Titan X is sitting at around 48FPS. The difference in the minimum frame rate, however, is considerable, with the AMD card down at 13FPS and the Nvidia at a much smoother 31FPS.
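The reason minimums matter becomes obvious when you look at per-frame times rather than a single average. This sketch uses made-up frame-time traces, not the actual benchmark captures from this review, to show how a healthy average can hide severe stutter:

```python
# Two synthetic frame-time traces (milliseconds per frame) with similar
# averages but very different worst cases. Numbers are invented for
# illustration, not taken from this review's benchmarks.
smooth = [21, 20, 22, 21, 20, 21]    # steady single-GPU-style trace
stuttery = [12, 12, 70, 12, 12, 70]  # multi-GPU-style frame-time spikes

def avg_fps(frame_times_ms):
    # Total frames divided by total time, expressed in frames per second.
    return 1000 * len(frame_times_ms) / sum(frame_times_ms)

def min_fps(frame_times_ms):
    # The slowest single frame sets the minimum frame rate.
    return 1000 / max(frame_times_ms)

print(avg_fps(smooth), min_fps(smooth))      # ~48 fps average, ~45 fps minimum
print(avg_fps(stuttery), min_fps(stuttery))  # ~32 fps average, ~14 fps minimum
```

The stuttery trace still posts a respectable average, but every 70ms spike is a visible hitch, which is exactly the gap the Battlefield 4 minimums expose.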
That’s also not taking into account the impact running a multi-GPU setup has on your gaming experience. In GPU-intensive games like Shadow of Mordor the dual-GPU Radeon can suffer from quite severe frame time stuttering, while the GTX Titan X and its single GPU have shown no such issues in my testing.
Micro-stutter has long been the bane of both SLI and CrossFire gaming rigs, as has the somewhat lackadaisical approach some game developers have towards implementing multi-GPU compatibility in their games. Rome 2 and Company of Heroes 2 have been recent examples of long-winded problems, but there are also often day one problems with multi-GPU systems not getting decent experiences with the latest games.
Benchmarks
Our GPU test rig is a stock-clocked Intel Core i7-4770K on an Asus Maximus VI Hero Z97 motherboard with 8GB of Corsair Dominator 2,133MHz DDR3.
I’d always, always recommend getting a good single GPU card over either a dual-GPU card or multiple graphics cards. Average frame rate is one thing, but being able to get a good overall experience every time you game is much more valuable, even if it is at a slightly slower rate. And that recommendation has not changed with the release of the GTX Titan X. I’d be much happier with that gaming monster in my rig than an R9 295X2.
But there are other reasons too, power draw being one of the most compelling. The Maxwell architecture has shown itself to be one of the most efficient GPU designs ever, and the Titan X draws only around the same peak platform power as AMD’s Radeon R9 290X, a card which is much slower. The dual-GPU R9 295X2, by comparison, devours around twice the power the GTX Titan X is asking for. The performance per watt of this latest Maxwell marvel is simply stunning.
The Titan X also doesn’t need to have a closed-loop water cooler strapped to its silicon to keep it from melting your motherboard into so much unusable slag. The cooling array on the GTX Titan X is largely unchanged from the one on the very first Titan.
It’s still rocking a large copper vapour chamber cooler atop the chip and an impressively quiet blower keeping the air running across it and the main components. It does still get rather toasty, topping out at 83 degrees as standard, but you never get the traditional ol’ Radeon roar. This all means you can now put together a startlingly powerful gaming machine in a seriously small form factor without needing a hefty case to cope with the cooling demands of a dual-GPU card.
There’s also that memory configuration of the latest Titan: at 12GB you get a frankly enormous single pool of video memory for your games to play with. The dual-GPU Radeon on the other hand has a pair of 4GB frame buffers, and under DirectX 11 that doesn’t equal a full 8GB. Times will change when DirectX 12 gets released and you can access any part of that combined GPU power. But that will still do nothing for the existing games and game engines that don’t have specific code paths put in place by diligent, PC-focused game developers.
Clock tweaks
There’s one other element I haven’t touched on yet: the overclocking performance of the GTX Titan X. When Nvidia announced to an intimate gathering of journos that the Titan X was ‘designed for overclocking’ there were a few half-smirks in the room. I’m pretty sure they said the same thing about the GTX 960 at launch.
While Nvidia did allow you to boost the clockspeed of your mid-range graphics card, no matter how much you overclocked the cut-down GM 206 GPU it still didn’t translate to any real, tangible benefit in terms of gaming performance. There was also the fact that the original Titan was running so close to the limits of its architecture that, even specced as slow as it was, overclocking wasn’t something the first version took too well.
I’m guessing those smirks have all vanished now, because the Titan X is both able to buff its clocks considerably and have that make a genuine difference to gaming performance. In fact with my reference sample having its base clock boosted up to around 1,400MHz and the memory to around 3,900MHz, I had all but closed the gap between it and the dual-GPU Radeon R9 295X2.
Battlefield 4 was running 4K Ultra at 55FPS and never dropped below 34FPS. In Shadow of Mordor the Titan X was coasting along at 56FPS, never dipping below 42FPS. This is 3840 x 2160, Ultra settings, full HD texture pack and 5.7GB VRAM load territory here, and it’s barely breaking a sweat.
Seriously, even at that high an overclock, the GPU was still only pushed another 3°C above its reference state. The fan did have to hit 55% to maintain 86°C during operation, and that does make it more noticeable acoustically during play, but the pitch of the cooler is well targeted enough that it’s certainly never ear-piercing.
You won't find manufacturers strapping their own third-party coolers to the Titan X, however. It’s another reference-only design, so matte black is the only style you’ll find on the shelves. That was the same with the original Titans, though you did find some enterprising folk, such as Gigabyte, selling Titans in reference form with additional Gigabyte cooling arrays in the packaging for the DIY faithful. And that homebrew crowd isn’t something Nvidia is necessarily discouraging: they’re already working with manufacturers to put together bespoke water blocks for the Titan X, so you can strap it into your custom cooling loop should you so desire.
Final reckoning
So, should you buy one? Honestly, that’s probably not a question I can reasonably answer. How much cash can you genuinely afford to drop on a single component for your gaming rig? You should definitely want one, but given the seriously high $999 price tag it’s going to be out of the reach of a good many of us PC gamers. But it is the top graphics card as of today, packing both the most advanced and the most efficient top-end GPU around.
Sure, you could get higher average frame rates with the dual-GPU Radeon, but your overall experience is likely to be far smoother and less fraught with the traditional multi-GPU woes associated with either CrossFire or SLI.
And make no mistake: the Titan X isn’t going to be the only card Nvidia releases using the GM 200 GPU. The chances of them not producing a more affordable GeForce GTX 980 Ti with the top Maxwell GPU are practically zero. We’re still waiting on AMD to release their Fiji GPU, the silicon set to power the upcoming Radeon R9 390X. That card likely won't appear until the summer, along with its high-bandwidth stacked memory and massive Graphics Core Next GPU.
I wouldn’t be at all surprised to see a July/August release of a GTX 980 Ti once Nvidia is sure it can counter the new AMD card. And that’s going to be priced far more competitively, which is something Nvidia doesn't have to do with the super-expensive Titan X—it simply has no single-GPU competition at all.
If you’re looking to put together the fastest, money-no-object gaming PC today then this is the card to drop into your rig. Especially if you want a small, super-powered, efficient machine.