ATI Radeon HD 4770 512MB Video Card Review
The First 40nm Graphics Card Arrives
AMD is launching the world's first 40nm graphics card today with the introduction of the ATI Radeon HD 4770. The smaller 40nm die means AMD can fit more dies per wafer and reduce costs. AMD expects the Radeon HD 4770 to sell for $109 with a $10 mail-in rebate available, which brings the final price into the uber sexy sub-$100 category. Usually the sub-$100 category means crappy performance, but with a core clock of 750MHz and 512MB of GDDR5 memory you might want to think twice about that.
The ATI Radeon HD 4770 is marketed as the fastest graphics card available for under $100 after mail-in rebate that can still play current game titles at decent resolutions. This is made possible by the dual-slot card's 750MHz core clock and 512MB of GDDR5 memory running at 800MHz. That is enough horsepower to produce 960 GFLOPS of compute performance with 51.2GB/s of memory bandwidth.
The Radeon HD 4770 has a total of 640 stream processors, which is double the number found on a Radeon HD 4670, but still fewer than the 800 stream processors on the Radeon HD 4800 series of cards. With 20% fewer stream processors than the Radeon HD 4850, the card's shader power takes a hit. Some of that performance loss is recovered by the higher core clock speed and GDDR5 memory, but it will still be slower than a Radeon HD 4800 series graphics card. The specifications for the Radeon HD 4770 are very impressive, though, delivering twice the compute performance of the Radeon HD 4670 in a card that costs roughly $100.
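Those headline numbers check out with a bit of quick math. A minimal sketch below runs the sanity check, assuming the usual RV700-family shader design (two FLOPs per stream processor per clock, i.e. one multiply-add) and a 128-bit memory bus; the bus width is our assumption, as it is not stated in AMD's quoted specs here:

```python
# Sanity check of the quoted specs for the Radeon HD 4770.
STREAM_PROCESSORS = 640   # quoted spec
CORE_CLOCK_GHZ = 0.750    # 750MHz core clock
MEM_CLOCK_MHZ = 800       # base GDDR5 memory clock
BUS_WIDTH_BITS = 128      # assumed bus width for this class of card

# Single-precision throughput: each stream processor can issue a
# multiply-add (2 FLOPs) every clock cycle.
gflops = STREAM_PROCESSORS * CORE_CLOCK_GHZ * 2
print(f"{gflops:.0f} GFLOPS")      # 960 GFLOPS

# GDDR5 moves 4 bits per pin per clock (quad data rate), so bandwidth
# is memory clock x 4 x bus width in bytes.
bandwidth_gbs = MEM_CLOCK_MHZ * 4 * (BUS_WIDTH_BITS / 8) / 1000
print(f"{bandwidth_gbs:.1f} GB/s") # 51.2 GB/s
```

Both results land exactly on AMD's quoted 960 GFLOPS and 51.2GB/s figures, which also shows why GDDR5 matters at this price point: GDDR3 at the same clock and bus width would deliver only half the bandwidth.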
AMD is targeting the Radeon HD 4770 512MB against the NVIDIA GeForce 9800 GT 512MB, as you can see from the ATI slide shown above. In terms of general technology, the ATI Radeon HD 4770 is superior, but as we all know, what looks good on paper might not work too well in the real world, so let's take a closer look at this new graphics card and then hit the benchmarks. It's not every day you see the GPU manufacturing process (40nm) pass up desktop processors (45nm), so this should prove to be interesting.