
The AMD Vega GPU architecture is the next-generation graphics silicon the Radeon-red team are working on for release next month. The gaming-focused Radeon RX Vega promises to deliver AMD a graphics card that can finally compete with the top end of Nvidia's rival GPU stack.
Vital stats
- AMD Vega release date: The gaming-focused AMD Vega GPUs are launching August 14, with three different SKUs... one being 'the world's most powerful sub-$400 GPU'.
- AMD Vega architecture: "The biggest improvement in our graphics IP in the past five years." Vega promises a host of improvements, including a new memory caching system.
- AMD Vega specs: The Vega GPU is built on 14nm silicon, with the lower-end RX Vega 56 sporting 3,584 GCN cores and the RX Vega 64 and RX Vega 64 Liquid each rocking 4,096.
- AMD Vega price: The air-cooled Vega Frontier Edition is on sale for $999, with prices of the RX Vega cards ranging between $599 for the top GPU and $399 for the basic Vega.
- AMD Vega performance: The pro-level Frontier Edition delivers between GTX 1070 and GTX 1080 gaming performance, but AMD is targeting ~GTX 1080 speeds for the RX Vega.
- AMD Vega power draw: It's been suggested by an MSI representative that the RX Vega is going to need a lot of power, with a 1KW PSU recommendation for the top version of the GPU.
________________________________________________
Update July 31, 2017: The Capsaicin event
at SIGGRAPH has happened and we've finally got a little more official
insight into the new AMD RX Vega graphics cards. The three new Vega cards will launch on August 14 with prices starting at $399. AMD are promising that the base card will be the world's most powerful sub-$400 GPU.
That $399 card is the Radeon RX Vega 56, a 3,584 GCN 4.0 core card,
sporting 8GB of HBM2 memory. It comes with a standard AMD reference
blower cooler and base and boost clockspeeds of 1,156MHz and 1,471MHz
respectively.

There is also a $499 Radeon RX Vega 64 card appearing at launch. This houses the full Vega 10 GPU, again with 8GB HBM2, but with 64 next-gen compute units (nCUs) and the same 4,096 cores as the professional-level Frontier Edition. There are two standard coolers for this card: the black one, which looks identical to the RX Vega 56 card, and a metallic shroud which probably costs a chunk more cash. Think Founders Edition...
There is a liquid-chilled RX Vega 64 too. That retains the same basic GPU configuration as the air-cooled version, but has higher out-of-the-box clockspeeds. The standard RX Vega 64 clocks in at 1,247MHz base and 1,546MHz boost, while the water-cooled variant has base and boost frequencies of 1,406MHz and 1,677MHz.

There have also been shots of an AMD RX Vega Nano released by Bits and Chips, so it looks like there'll be a cute l'il Vega popping up later on, too. But that's not going to be the full-fat RX Vega card rocking the same 16GB HBM2 the Frontier Edition is holding onto. It's entirely possible that much new memory tech would make a 16GB RX Vega card too expensive, considering it will still probably struggle to top one of Nvidia's GTX 1080 Ti cards.
________________________________________________

AMD have announced the naming scheme for what will be the flagship
Vega gaming card - the imaginatively-titled Radeon RX Vega. It's likely
to be the direct replacement for the R9 Fury cards from the last
generation of high-end AMD graphics cards.
In 2016, AMD promised their Polaris graphics cards would bring the Radeon brand back into the game. But while the RX 580 and RX 570 have shown impressive DirectX 12 performance against the mid-range GeForce-shaped competition, AMD have yet to release a high-end card to give them a genuine 4K gaming option.
This is where the AMD Vega GPU architecture comes in, aiming to jump in at the high end and provide the Radeon faithful with a serious GTX 1080 Ti contender.
AMD Vega release date

AMD launched the professional (read: non-gaming) Radeon Vega Frontier
Edition on June 27 this year, with the gaming RX Vega line launching at
SIGGRAPH 2017 at the end of July.
They've recently confirmed the SIGGRAPH event will be where AMD finally announce the RX Vega cards, as well as further Vega-based GPUs, probably the FirePro workstation versions.

That's just the announcement, however, not necessarily a launch. But
the rumours are AMD's partners will be releasing versions of the card
during August, so the reference RX Vega cards will need to launch at
the beginning of August if they're going to get some time on the shelves
before the likes of Sapphire, MSI, Asus, and Gigabyte get in on the
Vega action.
AMD also announced the Vega GPU in their professional-class Radeon Instinct deep learning accelerators
at the start of December 2016. The top-end Instinct card is the MI25,
which is a Vega Frontier Edition by any other name, and likely
represents the fastest Vega can go right now.
AMD Vega architecture

AMD lifted the lid on their new Vega GPU architecture in a series of
short videos on their YouTube channel. If you don't want to listen to a
load of AMD marketing folk we've distilled their tech-essence down for
you.
AMD's Scott Wasson is calling Vega "the biggest improvement in our graphics IP in the past five years." And given the few details they've revealed so far, that doesn't look like too much marketing hyperbole.
The redesigned geometry engine in the Vega GPUs is promising much
higher performance. "It now has the capability to process more than
twice as many polygons per clock cycle than we could in our previous
generation," Wasson explains.
But it's the High Bandwidth Cache and High Bandwidth Cache Controller silicon which look the most exciting, and that's all related to moving beyond the limits of the graphics card's video memory. In normal GPUs, developers have to fit all the data they need to render into the frame buffer, meaning all the polygons, shaders, and textures have to squeeze into your card's VRAM.
That can be restrictive, and devs have to find clever workarounds for large, open-world games. The revolution with AMD's Vega design is to break free of those limits. The High Bandwidth Cache and High Bandwidth Cache Controller mean the GPU can now stream in rendering data from your PC's system memory or even an SSD, so it doesn't all have to be crammed into the card's frame buffer.

"You are no longer limited by the amount of graphics memory you have
on the chip," Wasson explains. "It's only limited by the amount of
memory or storage you attach to your system."
The Vega architecture is capable of scaling right up to a 512TB virtual address space available to the graphics silicon. AMD are calling Vega 'the world's most scalable GPU memory architecture' and they look on the money with that claim at first glance. But it will still depend on just how many developers jump on board with the new programming techniques when Vega launches.
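To picture what that caching setup is doing, think of the card's HBM2 as a small, fast pool sitting in front of a far larger pool of system memory and storage. The sketch below is a loose software analogy of that idea only; it does not describe how AMD's hardware actually implements it, and the class name, page size, and eviction policy are all made up for illustration:

```python
# A loose software analogy for the high-bandwidth cache idea: a small pool of
# fast local memory backed by a much larger store, with data paged in on demand.
# Purely illustrative - nothing here reflects AMD's real hardware design.
class HighBandwidthCache:
    def __init__(self, vram_pages, backing_store):
        self.capacity = vram_pages          # pages that fit in local "HBM2"
        self.backing_store = backing_store  # system RAM / SSD standing in behind it
        self.resident = {}                  # pages currently held locally
        self.order = []                     # simple FIFO eviction order

    def read(self, page_id):
        if page_id not in self.resident:
            # "Page fault": evict something if full, then stream the data in
            # from the bigger pool rather than failing for lack of VRAM.
            if len(self.resident) >= self.capacity:
                victim = self.order.pop(0)
                del self.resident[victim]
            self.resident[page_id] = self.backing_store[page_id]
            self.order.append(page_id)
        return self.resident[page_id]

# A "scene" far bigger than the four-page local memory still works; data is
# simply paged in as it's needed.
scene = {i: f"texture-{i}" for i in range(64)}
cache = HighBandwidthCache(vram_pages=4, backing_store=scene)
for i in range(64):
    cache.read(i)
print(len(cache.resident))  # never exceeds 4
```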

This could potentially mean the recently-announced 4GB versions of
AMD's upcoming RX Vega graphics cards won't suffer from having such a
relatively small amount of video memory.
It might well get complicated recommending minimum GPU memory
capacities in the system specs of new games once the RX Vega cards
launch...
The new chip design also allows for more concurrency in processing
non-uniform graphics workloads. Previous designs potentially left large
chunks of silicon idle when the GPU was processing smaller operations,
bottlenecking the graphics system. The new NCU design, though, is meant
to allow parts of the GPU to work on smaller operations when there is
spare capacity, meaning that there shouldn’t be as many wasted, idle
parts of the chip.
This should mean more work gets done in the same amount of time as
previous GPU designs. How much impact this will have on gaming workloads
is difficult to say, but it could end up being important for the lower
level APIs like DirectX 12 and Vulkan.
AMD Vega specs

The AMD RX Vega cards will appear as standalone named cards sitting outside the rest of the Polaris-based RX 500 series.
Those essentially rebadged cards are slightly tweaked Polaris GPUs with
a very modest boost to their clockspeeds. The RX Vega cards, though, are completely different beasts, from GPU to memory architecture.
Card | AMD RX Vega 64 Liquid | AMD RX Vega 64 | AMD RX Vega 56 | Radeon Vega Frontier Edition | Radeon RX 580 | GTX 1080 Ti |
GPU | AMD Vega 10 | AMD Vega 10 | AMD Vega 10 | AMD Vega 10 | AMD Polaris 20 | Nvidia GP102 |
Architecture | GCN 4.0 | GCN 4.0 | GCN 4.0 | GCN 4.0 | GCN 4.0 | Pascal |
Lithography | 14nm FinFET | 14nm FinFET | 14nm FinFET | 14nm FinFET | 14nm FinFET | 16nm FinFET |
Base clockspeed | 1,406MHz | 1,247MHz | 1,156MHz | 1,382MHz | 1,257MHz | 1,480MHz |
Boost clockspeed | 1,677MHz | 1,546MHz | 1,471MHz | 1,600MHz | 1,340MHz | 1,645MHz |
Stream Processors | 4,096 | 4,096 | 3,584 | 4,096 | 2,304 | 3,584 |
Texture units | 256 | 256 | 256 | 256 | 144 | 224 |
Memory Capacity | 8GB HBM2 | 8GB HBM2 | 8GB HBM2 | 16GB HBM2 | 8GB GDDR5 | 11GB GDDR5X |
Memory bus | 2,048-bit | 2,048-bit | 2,048-bit | 2,048-bit | 256-bit | 352-bit |
Performance | 13.7 TFLOPs | 12.7 TFLOPs | 10.5 TFLOPs | ~13 TFLOPs | 5.8 TFLOPs | 11.8 TFLOPs |
TDP | 345W | 295W | ~250W | < 300W | 185W | 250W |
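Those peak compute numbers, incidentally, aren't benchmark results but simple arithmetic: stream processors × two floating-point operations per clock (a fused multiply-add) × boost clockspeed. Here's a quick sketch of that sum using the figures from the table above:

```python
# Theoretical peak FP32 throughput = stream processors x 2 ops per clock (FMA) x clock.
# These are paper figures, not measured gaming performance.
def peak_tflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

cards = {
    "RX Vega 64 Liquid": (4096, 1677),
    "RX Vega 64":        (4096, 1546),
    "RX Vega 56":        (3584, 1471),
    "GTX 1080 Ti":       (3584, 1645),
}

for name, (cores, boost_mhz) in cards.items():
    print(f"{name}: {peak_tflops(cores, boost_mhz):.1f} TFLOPs")
# RX Vega 64 Liquid: 13.7, RX Vega 64: 12.7, RX Vega 56: 10.5, GTX 1080 Ti: 11.8
```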
The first Vega GPU, the Vega Frontier Edition, uses Vega 10 silicon and sees AMD coming out with a high-end halo graphics card first, with the RX Vega-branded versions coming later, in the same way the GTX 1080 Ti followed Nvidia's Titan X.
The Radeon Vega Frontier Edition, then, is going to be AMD's answer
to Nvidia's Titan cards, offering pro-level specs in a card that's not
really designed for the consumer, but for the creators. That said, I bet
there's going to be a fair few well-off AMD fans dropping the cash on a
Frontier Edition in the same way the Titan GPUs have always found their
way into gaming rigs.
That’s a 16GB card, sporting second-gen high bandwidth memory (HBM2),
which will give it an insane level of video memory performance. We’re
expecting a 2,048-bit memory bus with bandwidth around the 512GB/s
mark.
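That 512GB/s figure drops straight out of the bus width and the per-pin data rate. Assuming HBM2 running at 2Gbps per pin (an assumption on our part; shipping cards may clock the memory a touch lower), the sum works out like this:

```python
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
# The 2Gbps HBM2 data rate is an assumption; actual cards may run a little slower.
bus_width_bits = 2048
data_rate_gbps = 2.0

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gbs:.0f} GB/s")  # 512 GB/s
```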
The consumer-facing versions of the Vega architecture, though, seem to have been detailed by a rather unlikely source. Apple announced the Radeon Pro Vega at a recent event, introducing the GPU which is set to make their new iMac Pro the fastest all-in-one machine that's ever spat out 1s and 0s. And if these aren't the specs for July's first two RX Vega cards, I'll eat a Frontier Edition...
For their part, the fruity Mac gang are getting two flavours of Vega, the Radeon Pro Vega 56 and Radeon Pro Vega 64.

The numbers refer to the number of next-gen compute units the different Vega cores will contain, which means the top-end chip will house 4,096 cores and the lower-end chip will have 3,584 cores. They will also come with different amounts of HBM2 memory, in both 16GB and 8GB flavours.
It's probably not much of a stretch to think these will be the
essential specs of the RX Vega chips which are due to be launched at
SIGGRAPH next month. AMD habitually release their GPU cores in pairs so
having a 64 CU and 56 CU version for us PC gamers at launch wouldn't be
much of a surprise.
The latest rumours surrounding the consumer-focused RX Vega cards are that there will indeed be a pair of different AMD Vega GPUs at launch, across three different SKUs. The names hark back to the good ol' ATI days, with AMD resurrecting the XT, XTX, and XL names.
I loved me some X1900XTX, now that was a frickin' card. A loud and hot
one, granted, but it was a beauty. So was the X1950XT. Mmm, halcyon
days...
The RX Vega XTX is reportedly the top-spec, water-cooled version, rocking 4,096 GCN cores and a TDP of 375W. The RX Vega XT uses the same 4,096-core GPU, but has air-cooling and a lower TDP of 285W. That kind of suggests it might have a lower clockspeed than the liquid-chilled chip too.
At the bottom end of the AMD Vega tech tree is the AMD RX Vega XL,
with a 3,584 GCN core GPU and the same 285W TDP. All three cards seem
like they're going to be released with 8GB of HBM2 memory, and there are no rumours of a 16GB version as yet - I smell a Titan-esque Radeon Vega
on the horizon.
The early rumours had the three consumer-focused RX Vega cards carrying the names RX Vega Nova, RX Vega Eclipse, and RX Vega Core. It looks like those may have been internal codenames, unless they were total balls of course...

AMD have said their GCN architecture can be configured to work with
both HBM2 and GDDR5, so it's possible AMD will hold the top memory tech
for just the top two cards in their RX Vega 10 GPU stack, leaving the
cheaper GDDR5 to cover the potential RX Vega Core edition.
There have also been reports of a Vega 11 GPU - though the rumour mill has been surprisingly quiet on that front recently. It's possible AMD have decided the Polaris 12 GPUs will shore up the bottom end of the new 500-series range and we won't see a Vega 11 chip until later on.
There have, though, been earlier rumours of a Vega 20 GPU that AMD are working on with GlobalFoundries. The Vega 20 is reportedly going to be a 7nm GPU releasing in 2018 with up to 32GB of HBM2 and memory bandwidth of a ludicrous-speed 1TB/s. There are also rumours of it sporting 64 NCUs, support for PCIe 4.0, and a teeny tiny 150W TDP.
With GlobalFoundries looking to hit 7nm production in 2018, that part at least looks plausible. The rest? I'm not so sure. It looks like very wishful thinking to me.
Given AMD have shown roadmaps with Vega sticking to 14nm with a 14nm+
refresh to follow, it looks unlikely we'll see 7nm Vega. That's more
likely to come with the Navi GPU architecture which is to follow it.
AMD Vega price

The professional-level Frontier Edition cards are now available and,
apparently, the consumer-facing AMD RX Vega cards' pricing will be
"excellent."
That comes via a tweet from Bits and Chips where they claim the real
GPU game changer will be AMD's Navi architecture, but that the pricing
for Vega will still make the new GPUs a big deal. They also go on to say
that, from their sources, it will have "a terrific price/performance
ratio."

As well as outlining the different levels of AMD RX Vega GPU, older leaks have also defined pricing for them. The top-end RX Vega Nova is rumoured to be released at $599, around $100 less than the competing Nvidia GTX 1080 Ti. Whether that means it's $100 slower we don't yet know.
We have, however, done a little number-crunching around the blind test machines AMD have been showing off with their pre-SIGGRAPH Vega roadshow. The AMD Vega machine is meant to be $300 cheaper than the Nvidia GTX 1080 one but, because the Vega rig's FreeSync monitor costs a good deal less than the G-Sync panel on the Nvidia side, that actually means the RX Vega card in the test rig is valued at around $800. Ooof.
The rumours also put a $499 price tag on the middle-order RX Vega
Eclipse and the RX Vega Core at $399. That will leave the RX 500-series
cards looking after the mainstream, sub-$250 level with Vega shoring up
the high-end for the first time since the Fury X in 2015.
The pricing for the Radeon Vega Frontier Edition has finally been unveiled, with the standard air-cooled card now retailing for $999, while the liquid-chilled version will be $1,499 when it launches later this year. The $500 delta between the air- and water-cooled versions is incredibly steep, as they're both 16GB versions with identical GPUs. That's some expensive coolant at these prices...
Now, it's worth remembering these are high-end professional-level
cards, designed for content creators and developers rather than a GPU
for you to jam into your rig in preparation for Far Cry 5, so the
Frontier Edition's gaming performance isn't 100% representative of the
RX Vega cards.
And, with AMD's Raja Koduri saying there will be gamer-focused RX
Vega cards which will be quicker than these professional cards, it does
call into question the veracity of the rumoured pricing we heard about
previously. I can't see the top-end RX Vega card retailing at $600 if
the professional version is almost twice that price. Still, we won't have long to wait to sort the truthful wheat from the rumoured chaff, as the RX Vega cards will be officially unveiled at the SIGGRAPH event kicking off on July 30.
AMD Vega performance

AMD's Raja Koduri took part in a recent Reddit AMA where he confirmed there will be RX Vega cards which outperform the recently announced Radeon Vega Frontier Edition.
He was asked directly if the consumer RX version of the Vega GPU
would be as fast as the Frontier Edition and responded with: "Consumer
RX will be much better optimized for all the top gaming titles and
flavors of RX Vega will actually be faster than Frontier version!"
And that's something we need to remember as the initial performance
of the Frontier Edition, in gaming terms at least, has been rather
lacklustre. In PCPerspective's tests the FE delivers game frame rates somewhere in between GTX 1080 and GTX 1070 levels.
The drivers used for the Frontier Edition aren't optimised for gaming loads, which would go some way to explaining away the poor performance. But if AMD are hoping to launch a card that's competitive with Nvidia's best, their driver team have got to find a way to close the >30% performance delta between where the card is now and where it needs to be to take on the GTX 1080 Ti.
AMD have taken the RX Vega on its first public tour, and look to be targeting GTX 1080 performance with their blind tests at the recent Budapest event. It would seem that's the level they're aiming the Vega architecture at for the first 8GB consumer cards.
Koduri also answered an earlier question about whether there would be
a 16GB variant of the RX Vega and gave a tantalising "we will
definitely look at that..."
Considering they haven't officially announced any final specs for the
gaming-focused versions of the new cards that would seem to suggest
there will indeed be a 16GB RX Vega card, possibly released as a sort of
RX Vega Fury. And if it's going to be quicker than the Frontier Edition
we might be looking at a 1,600MHz GPU too.
He also confirmed Vega would be their first Infinity Fabric GPU, which is likely how the two components of the upcoming Raven Ridge APU will talk to each other. With Vega utilising the same Infinity Fabric interconnect which allows the two Zen modules in the Ryzen processors to communicate at high speed with minimal latency, it's not beyond the realms of possibility that we'll see multiple Vega GPUs connected via the Infinity Fabric on a single board.
"Infinity Fabric allows us to join different engines together on a
die much easier than before," Koduri explains. "As well it enables some
really low latency and high-bandwidth interconnects. This is important
to tie together our different IPs (and partner IPs) together efficiently
and quickly. It forms the basis of all of our future ASIC designs. We
haven't mentioned any multi GPU designs on a single ASIC like Epyc, but
the capability is possible with Infinity Fabric."
AMD also showed four Frontier Editions running with a 16-core
Threadripper making mincemeat of some seriously high-end graphics
workloads at this year's Computex show in Taiwan.

We’ve also seen quite a lot from AMD’s new Vega GPU in benchmark form
so far. The 8GB HBM2 version of the GPU has been shown in public at the
recent New Horizon event, where it was running a Ryzen-powered gaming
rig with the new Star Wars Battlefront Rogue One DLC. It was playing the
game at 4K and was able to keep running consistently at over the 60fps
mark.
At the recent AMD Tech Summit in China, AMD showed a demo of Deus Ex: Mankind Divided running with the high-bandwidth cache controller off and on side-by-side. The GPU-intensive demo showed a 50% improvement in average frame rate and 100% higher minimum frame rates.
AMD have also shown Doom running at 4K using the flagship Vega
graphics card. Running at the game's Ultra graphics settings the frame
rate is shown at around 70fps with a few dips below 40fps here and
there. That's not far off GTX Titan X performance - no wonder Nvidia is waiting for Vega to release before launching the GTX 1080 Ti.
The demo of the unreleased Vega card was running the Vulkan version
of Doom live at the Ryzen event and was shown outperforming the GTX 1080
by 10%. That demo also confirmed the 687F:C1 device ID for the Vega
GPU. If that sounds familiar it’s because that designation was seen in
the Ashes of the Singularity benchmarking database recently, as well as a C3 revision, offering performance around the GTX 1080 mark too.
That device ID has appeared again in a recent 3DMark Timespy benchmark result
found online. This appears to be the 8GB version of the RX Vega, but
instead of the GTX 1080 performance we've previously seen, the DX12
benchmark seems to show it performing at GTX 1070 speeds.
That's maybe a little disappointing at first glance, but these are all benchmarks running on unreleased, unoptimised drivers. The clockspeeds for the leaked benchmarks have the Vega GPU running with a boost clock of just 1,200MHz, which with 4,096 cores works out to around 9.8 TFLOPs - a far cry from the 12.5 TFLOPs the peak Vega GPU is capable of.
It’s been reported the Doom benchmarks were run on a slightly
modified driver for the old Fiji GPUs so there is potentially more
headroom left to come from the first Vega GPUs when they do finally
launch.
The big-boy Vega 10 chip, the one that’s meant to be based on the
Radeon Instinct professional MI25 card, could potentially hit 12.5
teraflops of single precision processing. The GTX Titan X runs to a
little under 11 teraflops for its part, so even if AMD releases the card
with a slightly cut-down GPU compared with the one in the expensive
MI25 it may still have a version able to compete with both the Titan X
and the GTX 1080 Ti.

The most recent performance indicators, though, have come from the
recent financial analyst day at AMD HQ. Here they showed Vega's 4K
capabilities with Sniper Elite 4 running on a single GPU bouncing around
the 60 frames per second mark. That's key because Raja Koduri made that
a target for Vega from day one.
"One of the goals for Vega is that it needs to cross that 4K/60Hz
barrier, a single GPU that can cross the 4K/60Hz barrier," he explains.
"That seemed quite daunting two years ago."
Now, we've already seen Vega running Star Wars Battlefront at 4K/60Hz
so the fact it can do the same with Sniper Elite 4 shouldn't really
come as much of a surprise. What was more revealing from last night's
demos, however, was the showcasing of what Vega's high-bandwidth cache
controller might be able to offer the games of tomorrow. We have seen
HBCC in action with Deus Ex: Mankind Divided at a previous event in
China, but the Rise of the Tomb Raider test shows an even greater
performance improvement.
The HBCC tech baked into Vega allows the card to stream in larger
pools of data from the PC's system memory or even from an attached
storage device. Right now that's incredibly useful for professional GPUs
using massive data sets, but in-game, not so much.
"Today's games are written for today's memory constraints," Koduri
explains, "so we don't see many games cross the requirements for 4GB...
We are building this architecture not just for one generation."

In order to show what HBCC can offer games, AMD showed a version of Vega limited to just 2GB of video memory, with one card running with HBCC enabled and another with the memory caching tech turned off. This was to simulate how the new technology can help out when the frame buffer is being maxed out.
The Rise of the Tomb Raider demo showed a massive difference in both the average and minimum frame rates output by the same spec of GPU. It's the minimums that are the most interesting part of this, though, with the 2GB card without HBCC bottoming out at 13.7fps while the equivalent GPU running the new HBCC tech scores 46.5fps. That's well over three times the minimum frame rate, which will have a huge impact on just how smooth a game feels to play.
"This is going to be a big deal," Koduri explains, "when we put it in the hands of both game developers and gamers."
And he could be right... if those game developers are given enough incentive to code specifically for this AMD-only technology. As Koduri says, current games are not designed to take account of this advance, so AMD are going to have to give them a solid reason to do so.
AMD Vega power draw

And what about the power draw of the new Vega architecture? Well, an MSI representative on the Tweakers forum in the Netherlands seems to suggest the AMD RX Vega consumer version is going to demand a lot of power. There are a couple of different translations going around, but whether this is just a reaction to the expected ~300W TDP of the top Vega GPU compared with the sub-200W TDP of the Polaris architecture, I'm not sure.
The 300W TDP for the reference version isn't a crazy-high power
demand for a high-end graphics card, though when the board partners
start to work their magic, with new coolers and factory-overclocked
editions, you can probably expect that figure to get mighty close to
400W.
The reaction in the forum to the suggestion that Vega needs a lot of power was initially negative, until the same MSI rep pointed out that whether the power draw was a bad thing depended on what kind of performance you get back.