GDDR4 SDRAM

GDDR4 SGRAM (Graphics Double Data Rate type four Synchronous Graphics Random-Access Memory) is a type of graphics card memory specified by the JEDEC Semiconductor Memory Standard.[1][2] It competes with Rambus's XDR DRAM. GDDR4 is based on DDR3 SDRAM technology and was intended to replace the DDR2-based GDDR3, but it was itself superseded by GDDR5 within a year.

History

  • On October 26, 2005, Samsung announced the development of 256-Mbit GDDR4 memory running at 2.5 Gbit/s. Samsung also revealed plans to sample and mass-produce GDDR4 SDRAM rated at 2.8 Gbit/s per pin.[3]
  • On February 14, 2006, Samsung announced the development of 32-bit 512-Mbit GDDR4 SDRAM capable of transferring 3.2 Gbit/s per pin, or 12.8 GB/s for the module.[4]
  • On July 5, 2006, Samsung announced the mass-production of 32-bit 512-Mbit GDDR4 SDRAM rated at 2.4 Gbit/s per pin, or 9.6 GB/s for the module. Although designed to match the performance of XDR DRAM on high-pin-count memory, it would not be able to match XDR performance on low-pin-count designs.[5]
  • On February 9, 2007, Samsung announced mass-production of 32-bit 512-Mbit GDDR4 SDRAM rated at 2.8 Gbit/s per pin, or 11.2 GB/s per module. This memory was intended for AMD's latest graphics cards at the time.[6]
  • On February 23, 2007, Samsung announced 32-bit 512-Mbit GDDR4 SDRAM rated at 4.0 Gbit/s per pin, or 16 GB/s for the module, and expected the memory to appear on commercially available graphics cards by the end of 2007.[7]

Technologies

GDDR4 SDRAM introduced DBI (Data Bus Inversion) and Multi-Preamble to reduce data transmission delay. The prefetch was increased from 4 to 8 bits, and the maximum number of memory banks was increased to 8. Because of the deeper prefetch, a GDDR4 core delivering the same raw bandwidth as a GDDR3 core runs at half that core's clock frequency. The core voltage was lowered to 1.5 V.
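
The trade-off between prefetch depth and core clock can be illustrated with a back-of-the-envelope calculation (a minimal sketch; the clock figures below are hypothetical and not taken from any datasheet): doubling the prefetch from 4 to 8 bits lets the core run at half the clock while keeping the same per-pin data rate.

    #include <stdio.h>

    /* Per-pin data rate in Gbit/s for a DDR-style interface:
     * each core access supplies `prefetch` bits per pin, so the
     * per-pin rate is simply core clock (GHz) times prefetch. */
    static double per_pin_gbps(double core_clock_ghz, int prefetch)
    {
        return core_clock_ghz * prefetch;
    }

    int main(void)
    {
        /* Hypothetical example: a 4n-prefetch (GDDR3-style) core at
         * 0.6 GHz and an 8n-prefetch (GDDR4-style) core at 0.3 GHz
         * deliver the same per-pin data rate. */
        printf("4n prefetch @ 0.6 GHz core: %.1f Gbit/s per pin\n",
               per_pin_gbps(0.6, 4));
        printf("8n prefetch @ 0.3 GHz core: %.1f Gbit/s per pin\n",
               per_pin_gbps(0.3, 8));
        return 0;
    }

Both lines print 2.4 Gbit/s, matching the per-pin rate of the parts Samsung mass-produced in 2006.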

Data Bus Inversion adds an additional active-low DBI# pin to the address/command bus and to each byte of data. If a data byte contains more than four 0 bits, the byte is inverted and the DBI# signal is transmitted low. In this way, at most four of the nine lines carrying each byte (eight data bits plus DBI#) are driven low at a time, which reduces power consumption and ground bounce.
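
The inversion decision described above can be sketched in C (illustrative only; the function name and threshold follow the description in this article rather than a JEDEC code listing):

    #include <stdint.h>
    #include <stdio.h>

    /* Encode one data byte with Data Bus Inversion.
     * If the byte would drive more than four lines low (more than
     * four 0 bits), transmit its complement and assert DBI# (low);
     * otherwise transmit the byte unchanged with DBI# high.
     * Result: at most four of the nine lines (8 data + DBI#) are low. */
    static uint8_t dbi_encode(uint8_t byte, int *dbi_n)
    {
        int zeros = 0;
        for (int i = 0; i < 8; i++)
            if (!(byte & (1u << i)))
                zeros++;
        if (zeros > 4) {
            *dbi_n = 0;                 /* DBI# asserted (active low) */
            return (uint8_t)~byte;
        }
        *dbi_n = 1;                     /* DBI# deasserted */
        return byte;
    }

    int main(void)
    {
        int dbi;
        uint8_t wire = dbi_encode(0x01, &dbi);            /* 0x01 has seven 0 bits */
        printf("wire = 0x%02X, DBI# = %d\n", wire, dbi);  /* prints 0xFE, DBI# = 0 */
        return 0;
    }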

On the signaling front, GDDR4 expands the chip I/O buffer to 8 bits per two cycles, allowing greater sustained bandwidth during burst transmission, but at the expense of significantly increased CAS latency (CL) compared to GDDR3, caused mainly by the halved address/command pin count and the half-clocked DRAM core. The number of address pins is half that of GDDR3; the freed pins are reassigned to power and ground, and the narrower address bus further increases latency. Another advantage of GDDR4 is power efficiency: running at 2.4 Gbit/s, it uses 45% less power than GDDR3 chips running at 2.0 Gbit/s.
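
For reference, the per-module figures quoted in the History section follow directly from the per-pin rate and the 32-bit interface of each chip; a minimal sketch of that arithmetic:

    #include <stdio.h>

    /* Module bandwidth in GB/s: per-pin rate (Gbit/s) times the
     * interface width in bits, divided by 8 bits per byte. */
    static double module_gb_per_s(double gbps_per_pin, int bus_width_bits)
    {
        return gbps_per_pin * bus_width_bits / 8.0;
    }

    int main(void)
    {
        printf("2.4 Gbit/s x 32 bits = %4.1f GB/s\n", module_gb_per_s(2.4, 32));
        printf("4.0 Gbit/s x 32 bits = %4.1f GB/s\n", module_gb_per_s(4.0, 32));
        return 0;   /* prints 9.6 GB/s and 16.0 GB/s respectively */
    }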

In Samsung's GDDR4 SDRAM datasheet, the part is referred to as 'GDDR4 SGRAM', or 'Graphics Double Data Rate version 4 Synchronous Graphics RAM'. However, the essential block write feature is not available, so it is not classified as SGRAM.

Adoption

The video memory manufacturer Qimonda (formerly Infineon's Memory Products division) stated that it would "skip" the development of GDDR4 and move directly to GDDR5.[8]

References
