What Is GDDR6 And What’s Its Use?
What Is GDDR6?: GDDR6 stands for Graphics Double Data Rate 6; the number tells you which generation of memory chips a graphics card carries.
As you might expect, GDDR6 improves on GDDR5 across the board: each new generation brings gains in efficiency, performance, latency, and operating voltage. GDDR6 chips consume less power than GDDR5, run faster, have lower latencies, and offer higher bandwidth. All of this makes for better graphics cards, although GDDR chips are not the only memory type built into them.
What Is GDDR and What’s Its Use?
GDDR stands for Graphics Double Data Rate.
The first GDDR chips appeared in the late 1990s, and ATI Technologies, which later became part of AMD, helped drive the standard’s early development. GDDR is used to store the data a graphics card works on and is built on DRAM technology rather than SRAM. GDDR5, the fifth generation, tops out at a per-pin data rate of about 8 Gbps, or roughly 32 GB/s per 32-bit chip. This article will focus on what GDDR6 is and what it’s used for.
GDDR memory differs from traditional single data rate (SDR) or double data rate (DDR) system memory not in its DRAM cells (both store each bit in a one-transistor, one-capacitor cell) but in how the chips are organized and connected: GDDR chips use wide 32-bit interfaces, are soldered directly to the board in a point-to-point layout with the GPU, and rely on features such as on-die termination (ODT) to keep signals clean at high speeds.
Each chip is powered through separate supply rails, typically a core supply (VDD) and an I/O supply (VDDQ), so the I/O voltage can be tuned independently of the DRAM core. DDR2 SDRAM, for example, operates at 1.8 V, while DDR3 SDRAM lowered the supply to 1.5 V.
Lower supply voltages were introduced mainly to cut power consumption, but they had a useful side effect: chips that run cooler and cleaner can be clocked higher. A DDR2-800 module, for instance, moves data at 800 MT/s, while DDR3 takes essentially the same cell design to 1600 MT/s and beyond.
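To make the arithmetic concrete, here is a minimal Python sketch of how a standard 64-bit module’s peak bandwidth follows from its data rate; the function name is made up for illustration, and the data rates are the JEDEC figures mentioned above:

```python
# Peak bandwidth of a standard 64-bit DDR memory module:
# transfers per second x bits per transfer / 8 bits per byte.

def module_bandwidth_gbs(data_rate_mts, bus_width_bits=64):
    """Peak bandwidth in GB/s for a module at the given data rate (MT/s)."""
    return data_rate_mts * 1e6 * bus_width_bits / 8 / 1e9

print(module_bandwidth_gbs(800))   # DDR2-800  -> 6.4 GB/s
print(module_bandwidth_gbs(1600))  # DDR3-1600 -> 12.8 GB/s
```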
What Is GDDR6 and What’s its Use?
As many of you already know, newer GPUs use the GDDR6 type of memory. It is a technology built on the foundations of GDDR5 and GDDR5X, and it is the memory current GPU generations rely on.
GDDR6 was first detailed in 2017 and standardized by JEDEC that same year; the Pascal GP104-based GTX 1080 cards of the previous generation used GDDR5X.
For those of you who are not familiar with GDDR5X, it’s a memory standard based on GDDR5 but with much higher per-pin data rates, which increases bandwidth without widening the bus.
A good example would be the NVIDIA Titan X (Pascal), which uses a 384-bit memory bus with GDDR5X running at 10 Gbps per pin, providing up to 480 GB/s of bandwidth. Compare that to the Radeon RX Vega 64, which pairs a much wider 2048-bit HBM2 interface with a far lower per-pin rate of about 1.89 Gbps, reaching roughly 484 GB/s: nearly the same bandwidth by a very different route.
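Both figures fall out of the same formula: bus width times per-pin data rate, divided by eight. A quick Python sketch (the function is hypothetical; the card numbers come from the paragraph above):

```python
# Peak GPU memory bandwidth in GB/s:
# bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.

def gpu_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(gpu_bandwidth_gbs(384, 10.0))   # Titan X (Pascal), GDDR5X -> 480.0 GB/s
print(gpu_bandwidth_gbs(2048, 1.89))  # RX Vega 64, HBM2         -> 483.84 GB/s
```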
Today’s graphics cards carry either GDDR6 or HBM2 memory chips. The latter is a technology co-developed by AMD and SK Hynix and implemented in many of AMD’s cards, and it promises strong development and scaling over the years.
HBM stands for High Bandwidth Memory, and it is a stacked DRAM solution where the dies are placed vertically on top of one another and connected to the GPU through a silicon interposer, instead of being spread horizontally around the GPU as individual GDDR chips are. Stacking saves board space and shortens signal paths, but the interposer makes HBM cards more expensive to manufacture than their GDDR counterparts.
The first generation of GDDR6 was announced by Micron and Samsung during 2017, and Samsung began mass production in January 2018. NVIDIA became the first graphics card maker to adopt it, with the Turing-based GeForce RTX 20-series cards launched in late 2018.
The speeds and feeds of the new GDDR6 memory break down as follows:
Samsung’s 8 Gb GDDR6 chips each have a 32-bit interface, so a 384-bit memory controller requires twelve of them. At 14 Gbps per pin, each chip delivers 56 GB/s; at 16 Gbps, 64 GB/s. That adds up to 640 GB/s on a 320-bit bus with ten 16 Gbps chips, or 768 GB/s on a full 384-bit bus with twelve.
The same chips can also be used on narrower buses: a 256-bit card takes eight chips for 448 GB/s at 14 Gbps, and a 128-bit card takes four chips for 224 GB/s. The GeForce RTX 2080 and Quadro RTX 8000 series were among the first cards to use GDDR6, launching at 14 Gbps per pin, comfortably ahead of GDDR5X (10 to 12 Gbps) and GDDR5 (up to 8 Gbps).
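Since every GDDR6 chip exposes a 32-bit interface, both chip count and total bandwidth follow mechanically from the bus width. A small Python sketch using the figures above (the helper name is made up for illustration):

```python
# A minimal sketch: each GDDR6 chip has a 32-bit interface, so chip
# count and total bandwidth follow directly from the bus width.

def gddr6_config(bus_width_bits, gbps_per_pin):
    chips = bus_width_bits // 32          # one 32-bit chip per 32 bits of bus
    per_chip_gbs = 32 * gbps_per_pin / 8  # GB/s each chip delivers
    return chips, chips * per_chip_gbs

print(gddr6_config(384, 16.0))  # (12, 768.0) -> twelve chips, 768 GB/s
print(gddr6_config(320, 16.0))  # (10, 640.0) -> ten chips, 640 GB/s
print(gddr6_config(256, 14.0))  # (8, 448.0)  -> eight chips, 448 GB/s
```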
Samsung, Micron, and SK Hynix all produce GDDR6, and both NVIDIA and AMD source chips from them. It would make little sense not to: Samsung in particular was already producing the industry’s fastest graphics memory modules, and pairing that production capacity with NVIDIA’s and AMD’s demand was a natural fit.
The question on everyone’s mind at the time was whether the GPUs that succeeded GP102 would use HBM2, GDDR6, or both. The answer turned out to be GDDR6.
NVIDIA could have stuck with GDDR5X for its high-end models, as it did on the GTX 1080 and GTX 1080 Ti, but the flagship RTX 2080 Ti instead shipped with GDDR6 at 14 Gbps, breaking well past the per-pin speed limits of GDDR5. How far the standard goes from here depends largely on how much manufacturers keep investing in GDDR6 development.
Because HBM2 remained expensive and supply-constrained, consumer-grade cards have overwhelmingly standardized on GDDR6.
AMD’s Vega GPUs stayed with HBM2, but the company adopted GDDR6 for its Navi-based Radeon RX 5000 series in 2019, with Samsung among its suppliers; AMD has used Samsung’s memory products for quite some time.
The next question on everyone’s mind, when GPU makers would adopt GDDR6 across the board, was answered by availability: once GDDR6 entered mass production in 2018, it quickly became the default for consumer GPUs, with HBM2 reserved for compute-oriented parts.
What Is GDDR6 And What Is It For: Going Further
GDDR6 memory chips are manufactured by several companies worldwide, with Micron, Samsung, and SK Hynix as the main suppliers for graphics cards. Once fitted to a card, they give it better performance at high resolutions: the more GDDR6 memory a card has, the better it handles resolutions such as 4K and above, since a large amount of texture and frame data must be kept in memory. Performance here depends heavily on the amount of memory on the graphics card.
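A rough back-of-the-envelope calculation shows why resolution drives memory demand. This Python sketch only counts a single 32-bit-per-pixel frame buffer; real games also store textures, geometry, and intermediate render targets, so treat these numbers as a floor:

```python
# Size of one frame buffer in MB at a given resolution,
# assuming 4 bytes (32 bits) per pixel.

def framebuffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / 1024**2

print(framebuffer_mb(1920, 1080))  # 1080p -> ~7.9 MB per frame
print(framebuffer_mb(3840, 2160))  # 4K    -> ~31.6 MB per frame, 4x as much
```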
GDDR6 memory chips offer baseline speeds of 14 to 16 Gbps per pin and run at 1.35 V. These figures vary, since each graphics card assembler can tune the chips it buys from Samsung, Micron, or SK Hynix to offer different characteristics. Generally, raising the voltage allows higher speeds and therefore greater bandwidth. It is likewise possible to overclock the GDDR6 chips on a graphics card, typically using a utility provided by the board’s manufacturer to raise the voltage and clock speed of the GDDR chips.
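The payoff from a memory overclock can be estimated with the same bandwidth formula as before. A sketch, assuming a hypothetical 256-bit GDDR6 card pushed from 14 to 15.5 Gbps per pin (a plausible tuning range, not a guaranteed result):

```python
# Estimated bandwidth gain from a memory overclock on a
# hypothetical 256-bit GDDR6 card.

def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

stock = bandwidth_gbs(256, 14.0)   # 448.0 GB/s at stock speed
oc = bandwidth_gbs(256, 15.5)      # 496.0 GB/s after overclock
print(f"{(oc / stock - 1) * 100:.1f}% more bandwidth")  # ~10.7%
```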
How much did you like this information about what GDDR6 is and what it’s for? Do tell us in the comment section; we are always happy to hear your views. Follow us on social media for all the latest information and news worldwide.