THG Graphics Card Buyers Guide

The THG Graphics Card Buyer’s Guide has been written to be a guideline for the purchase of a new graphics card. It aids beginners in selecting the right model with the right feature set, and explains the newest technologies and features.

#1: Intended Use

#2: Technology

#3: Performance & Image Quality

#4: Budget

#5: Manufacturer & Feature Set

#6: The Purchase

Buying a new graphics card may seem like a simple matter at first. After all, both Internet shops and local retail stores carry a plethora of graphics cards in every performance and price category. This large variety of cards, however, makes it hard to select the one that is ideal for you. A multitude of factors needs to be considered in the selection process, to ensure that the choice you make will keep you happy for as long as possible. This article covers all of the criteria involved in selecting and buying the graphics card that is right for you. How important each factor is will depend on your personal preferences and the way you intend to use the card. For example, some people will require a video-in line, and for them this will be a make-or-break feature; others will not care about this particular capability. To help you define your requirements, we will also give a short overview of the technologies used in graphics cards of the past and present.

We've broken this buyer's guide up into six large sections that cover all of the important factors. Obviously, there is no perfect way to prioritize selection criteria, because preferences and needs differ for each individual. The order that we present here is only one possibility among many, and is meant more as a guideline to help you find your own personal ranking of criteria. Remember also that it's sometimes difficult to draw a line between these issues, so there will be some overlap in certain areas.



#1: Intended Use

A Short Overview

No matter what the intended use of your PC, be it games, office work, photo and video editing or anything else, you're going to need a graphics card. However, the importance of the card's performance depends greatly on the nature of the application! These days, the most important differentiating factors are video and 3D performance and quality.

The first step in determining your ideal graphics card is to take stock of the primary applications for which you use your PC. If most of your time on the computer is spent using office applications (word processing, spreadsheets), or other 2D software, then the 3D performance of a graphics card won’t play a great role in your buying decision.

However, in future operating systems such as Microsoft's "Longhorn", the user interface will make much heavier use of a graphics card's 3D functionality, so 3D performance may be important even for those who do not use 3D applications. For example, to use even the simplest 3D version of the Longhorn interface -- which goes by the name "Aero" -- full DirectX 9 support and 32MB of video memory are likely to be the bare minimum graphics card requirements. The grander "Aero Glass" interface version will require DirectX 9 support and 64MB of video memory!

Of course, there is still some time until Longhorn makes it to the marketplace and a computer near you. And even when it arrives, it will also come with a 2D-only user interface for systems that don’t meet the 3D requirements. You can get more info on Microsoft’s Longhorn here: ice/display/graphics-reqs.mspx.

There are measurable 2D performance differences between individual cards and the various chip generations. However, the 2D performance of current graphics processors has reached such a high level overall that these differences won't make a tangible difference in everyday use, for example in a Windows XP environment. Applications such as Word, PowerPoint, Photoshop or Acrobat won't run any faster on a bleeding-edge high-end card than on a mainstream offering. This means that these days, a graphics card's performance is determined nearly entirely by its 3D performance.

Modern games such as Doom3 are very demanding on graphics cards.

Since today's graphics cards differ the most in 3D performance, this is probably the main factor to look for if you intend to do any gaming on your PC. The variety of different card models from different generations and price brackets is enormous, as are the differences in 3D performance and feature sets. Even if you're more of a casual gamer who only plays a game every now and then, you shouldn't try to save money in the wrong place. After all, gaming time is your free time, and you don't want to ruin it with stuttering or low-detail graphics. Cut too many corners and you may end up with more exasperation than entertainment.

The 3D architecture of the card -- that is, which generations of which 3D standards it supports -- is very important. Usually, adherence to 3D standards is expressed in terms of support for a certain generation of Microsoft's DirectX 3D API, which is updated regularly. We'll talk about this some more later on in this guide. For now, we'd just like to mention that while most DirectX 8 compliant cards will be sufficient for current games, they won't do as well in the most recent and soon-to-come hit games, such as Doom 3, Stalker and Half-Life 2.

If you're looking to replace your motherboard as well as your graphics card, integrated graphics solutions may be an option for you. Beware, however, that the 3D performance of these solutions is, at best, comparable to that of the slowest add-in cards. As a result, these motherboards are only of limited use to PC gamers. If your focus lies more in the areas of office work and video editing, then they will usually be quite sufficient.

Recently, many companies have begun campaigns to secure a foothold for the PC in the living room. The primary selling point of such a solution is the PC's inherent suitability for video and audio playback. Again, special attention is given to the graphics card here as well. In principle, any graphics card is capable of displaying any video format, but there are major differences between cards in the resulting CPU load on the PC, and the output image quality. If the CPU load is too high when playing high-resolution HDTV videos (for example), there will be noticeable stuttering during playback. Graphics processors also differ in their offered color fidelity, and features such as de-interlacing and scaling. We'll look at this in more detail in section #2.

#2: Technology (Future Proofing)

Over the past few years, graphics processors have evolved from pure 3D accelerators that could only perform pre-determined, specialized tasks, into real processors that are programmable to a certain extent. This development has allowed game designers to create their own 3D effects, in the same way as the creators of professional 3D rendering applications. These applications use their own programs for 3D effects, called shaders.

Simply put, a shader is a specified mathematical definition or description of an effect. For example, if a stone in a game is supposed to look wet, then a shader can be written for this purpose, which would define the sheen effect, reflections, incidence of light, and so on. The graphics processor then uses the shader to calculate this effect in real time.
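To make the idea concrete, here is a rough sketch of the kind of math such a shader performs. This is purely illustrative: real shaders are written in languages like HLSL or GLSL and run on the graphics processor, and the function and parameter names here are our own invention, not any game's actual code.

```python
# Illustrative sketch of what a "wet stone" pixel shader computes.
# Real shaders run on the GPU in HLSL/GLSL; this Python version only
# mirrors the math of a simple Phong-style specular sheen.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def wet_stone_shader(base_color, normal, light_dir, view_dir, shininess=64):
    """Return the lit color of one pixel: diffuse texture + specular sheen."""
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Diffuse term: how directly the light hits the surface.
    diffuse = max(dot(n, l), 0.0)
    # Specular term: the mirror-like highlight that makes the stone look wet.
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    specular = max(dot(n, h), 0.0) ** shininess
    return tuple(min(c * diffuse + specular, 1.0) for c in base_color)
```

The graphics processor evaluates a program like this for every pixel on screen, every frame -- which is why shader complexity has such a direct impact on frame rate.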


In the past, the solution might have been to take the texture of the stone and overlay it with a second texture incorporating pseudo reflections, thereby creating the illusion of shininess. Of course, this wouldn't exactly have looked realistic. Today, these effects can be rendered with a high level of realism. In short, shaders add a great deal of realism to any game, though due to the topic's complexity, we will only be able to cover the most important aspects of how they work.

As we discussed earlier, a very important factor to consider when choosing a graphics card is which DirectX generation the graphics processor supports. The DirectX support of a card has important implications for its ability to make use of shaders, because each generation of DirectX increases the complexity of the calculations that shaders can perform. So, let's get back to the matter of DirectX generations.

DirectX Technology

DirectX 7 Class

Games such as Quake 3 (OpenGL), Unreal, and even comparatively recent games such as Battlefield 1942 belong to this generation. Almost all effects in these games are realized through simple textures. Aside from transformation and lighting (T&L), these cards are not programmable. In fact, not all graphics processors of this generation even offer T&L support; examples include Intel's integrated i865G or ST Micro's Kyro II.

The 3D engine of the game Battlefield 1942 sits solidly on a DirectX 7 foundation. Through the clever use of textures, the developers really squeeze a lot out of the engine, but the in-game world is very static; dynamic lighting is not possible, for example.

Introduction to DirectX 8: us/dndrive/html/directx112000.asp?frame=true

Programmable Shaders for DirectX 8: us/dndrive/html/directx01152001.asp?frame=true

Introduction to DirectX 9: ues/03/07/DirectX90/toc.asp?frame=true

Shader Model 3.0: partners/shadermodel30_NVIDIA.mspx

Microsoft DirectX Overview:

DirectX 8 Class

Graphics processors truly began to become programmable starting with DirectX 8. There are two capabilities that need to be taken into account here, namely pixel and vertex (= geometry) calculations through shaders. DirectX 8 incorporated several different pixel shader models (SMs), which support varying levels of programmability (PS 1.0, 1.1 and 1.2 are part of DirectX 8, while PS 1.4 was added in DirectX 8.1). At first, the complexity of the shader programs was quite limited, but it has increased with the newer shader models. There is only one vertex shader model that is shared by both DirectX 8 and DirectX 8.1: Vertex Shader 1.0.

Unreal Tournament 2003 uses a number of DirectX 8 shader effects. As a result, the game's graphics look much better than those of older games, and the in-game world seems more alive.

DirectX 9 Class

FarCry can be considered the first game that makes consistent use of shaders. Thanks to DirectX 9, the surfaces look very realistic and react to changes in lighting, throw believable shadows, and more. The game's environment seems very much "alive."

Microsoft's current 3D API is DirectX 9, which permits even more freedom in shader programming than DirectX 8, and also allows for longer and more complex shaders. It also introduces the floating-point data model, which allows for much more exact detail calculations.

ATI and NVIDIA are the two companies that dominate the consumer 3D market, and their cards offer varying levels of precision. While ATI's processors use 24-bit precision across the board, NVIDIA's cards also support 16-bit and 32-bit floating point modes (as well as some other FP formats). The rule of thumb here is simple: "the higher the precision, the more complex the calculation." Which data format is required depends greatly on the effect that is to be created -- not every effect requires the highest available precision.
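The practical difference between these precision levels is in how finely each format can represent a value. The sketch below assumes the commonly cited mantissa widths for these formats (10 bits for 16-bit, 16 bits for 24-bit, 23 bits for 32-bit floats) and shows how coarsely each one rounds the same number; the helper function is our own, not part of any API.

```python
# Rough illustration of how mantissa width limits shader precision.
# Assumed mantissa bits: FP16 -> 10, FP24 -> 16, FP32 -> 23.
def quantize(value, mantissa_bits):
    """Round a value in [1, 2) to the nearest representable number."""
    step = 2.0 ** -mantissa_bits          # spacing between neighbours
    return round(value / step) * step

x = 1.2345678
for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    print(name, quantize(x, bits))
```

With only 10 mantissa bits, neighbouring representable values are almost a thousandth apart -- enough rounding error to accumulate into the visible artifacts discussed later in this guide.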

DirectX 9 also incorporates several pixel shader models. First there is the original SM 2.0, to which the evolutionary SM 2.0a and 2.0b were later added. SM 3.0 is a completely new and very recent addition, which is supported starting with DirectX 9.0c. Currently, only NVIDIA's GeForce 6xxx line of graphics processors can make use of SM 3.0.

If you would like to find out more about the various DirectX versions and the associated shader models, you will find lots of relevant information at the sites listed earlier in this section.

It is important to note that you can't fully assess the graphics of a game solely by the DirectX version it uses. For example, DirectX 8 shaders can be used to implement many of the effects used these days, which can bring even cutting-edge graphics processors to their knees. Game developers strive to use as low a DirectX version as possible, so they can target as large an audience as possible. How much computing power a shader will end up needing depends primarily on its complexity. Finally, it should also be noted that all cards are downward compatible. Upward compatibility is only possible in the case of vertex shaders, which can be calculated by the CPU -- and while possible, this would be very slow.

Two screenshots of the same scene in the game FarCry; one on a GeForce 4 Ti (DX8.1) and one on a GeForce 6800 (DX9).

Bear in mind that although many entry-level cards are DirectX 9 compliant, they are unable to deliver playable frame rates due to their low processing power (more on this in section #3). In some cases, the DirectX 9 compliance also refers only to certain areas. A prime example of this is Intel's new i915G integrated graphics chipset. Although the graphics processor supports Pixel Shader 2.0 (making it DirectX 9 compliant), it offloads all vertex shader calculations to the CPU, increasing CPU load.



After DirectX, OpenGL is the next most popular 3D API. It has existed for far longer than DirectX, and is available for a large number of operating systems. DirectX, on the other hand, is confined to Microsoft platforms.

Like DirectX, OpenGL is constantly being refined, updated and extended in its capabilities. Also like DirectX, it is supported by virtually every current 3D graphics card. Furthermore, the newest 3D features can usually also be implemented in OpenGL, even if these features have not yet been defined in the OpenGL standard; these are called OpenGL extensions. Frequently, graphics chip makers will offer their own extensions in drivers for certain effects that can be employed by applications or games. The two industry heavyweights, ATI and NVIDIA, offer very good OpenGL support, so there's not much to worry about there. Things aren't quite as rosy in the case of XGI and S3, however, which still have some room for improvement in their drivers.

Despite the seeming dominance of DirectX titles, there are still many games that are programmed for OpenGL. The most well known among these are the titles published by the Texan game designer id Software; many other game developers have also licensed 3D game engines from id to use in their own software. The newest and definitely most demanding OpenGL game from id is the first-person shooter Doom 3. NVIDIA cards perform especially well running this game, closely followed by ATI's offerings. The game will also run on XGI cards, with some effort and at reduced quality settings. For its part, S3 has published a special Doom 3 driver.

Interested readers can find more information on OpenGL at

Other Operating Systems

Things get more complicated for operating systems other than Microsoft Windows. The various cards' 3D performance under Linux differs drastically from that in Windows. Both ATI and NVIDIA support Linux with special drivers. Linux drivers can be found on ATI's and NVIDIA's download pages.

More information on Linux and graphics cards:

ATI Linux Drivers FAQ (inux.html)

HOWTO: Installation Instructions for the ATI Proprietary Linux Driver (http://www. nuxhowto-ati.html)

NVIDIA Linux Advantage PDF (2003 0328_6790.html)

NVIDIA Linux Driver Forum @ NVNews (display.php?s=&forumid=14)

Video Playback

Video playback and Media Player visualizations can be accelerated by graphics cards, taking load off the CPU.

As we mentioned near the beginning of the article, video can be played back on practically any graphics card, as long as the correct codec is installed. Almost all graphics cards available today also offer special video acceleration features that handle effects such as resizing a video to fit a window, filtering and the like. The more tasks the graphics processor can handle, the less work is left to the CPU, improving overall performance. In the case of HDTV videos using very high resolutions, it is possible that the CPU alone isn't up to the task of decoding and playing back a video at all -- and this is where the video processor can step in to help.

Video acceleration is also an important issue for notebooks, as a CPU usually requires more power than a graphics processor. As a result, good video acceleration will do its part in lengthening the running time of a notebook. Video acceleration features also come into play when watching DVDs.

Recently, both ATI and NVIDIA have put special emphasis on video features, and practically every new generation of graphics processors comes with extended video functionality. ATI groups these capabilities, which can be found in the new X800 and X700 lines of cards, under the name "FullStream HD." More information is available here: res/5639fullstream WP.pdf.

NVIDIA has equipped its newest chip family, the NV4x line, with a special, programmable video processor. This ensures support even for future video formats. Additionally, the video processor is designed to take some of the burden off the CPU when recording videos or during video encoding. More detailed information is available here: chip-video.html.

#3: Performance & Image Quality


The performance of a graphics card is normally measured by its frame rate, which is expressed in frames per second (FPS). The higher the frame rate a card can sustain, the more fluid the gaming experience will seem to the user. Essentially, a game displays a sequence of individual images (frames) in rapid succession. If they are output at a rate exceeding 25 fps, the human eye is usually no longer capable of distinguishing the individual frames. However, in fast-paced games, such as first person shooters, even 25 fps will not be enough to make the game and all movements seem fluid. The bar for such games should be set at least at 60 fps.
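These frame rates translate directly into a per-frame time budget, which is a handy way to think about the demands placed on the card. A quick sketch (the helper name is our own):

```python
# At 60 fps each frame must be rendered in 1000/60 ~= 16.7 ms;
# at 25 fps the budget is a far more leisurely 40 ms.
def frame_time_ms(fps):
    """Time budget per frame, in milliseconds, at a given frame rate."""
    return 1000.0 / fps

for fps in (25, 60, 100):
    print(f"{fps} fps -> {frame_time_ms(fps):.1f} ms per frame")
```

Every shader, texture lookup and anti-aliasing pass must fit inside that budget, frame after frame.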

Aside from features such as FSAA and AF (which we will come to shortly), frame rate primarily depends on the selected screen resolution. The higher the resolution, the more pixels are available to display the scene, making the resulting output much more detailed. However, with increasing resolution, the amount of data that a graphics card has to handle also increases, meaning greater demands are placed on the hardware.

There are two important factors in assessing the ability of a graphics processor to provide high frame rates. The first is its pixel fill rate, which determines how many pixels can be processed per second (megapixels per second). The second is memory bandwidth, which measures how quickly the processor can read and write data from memory. In both cases, the "more is better" mantra applies.

At higher resolutions, more pixels are available to depict a more detailed image, as you can see in this image. While only very rough details can be made out at 800x600 (the small tree next to the Jeep), the detail level is much higher at 1600x1200.

Today, 1024x768 pixels is considered the standard gaming resolution. The most popular higher resolutions are 1280x1024 and 1600x1200. In the case of classical CRT (cathode ray tube) monitors, the resolution can be selected freely, as long as it doesn't exceed the maximum physical resolution supported by the screen. Things are more complicated when TFT (thin film transistor, aka flat screen or LCD) monitors are used, since these have fixed resolutions. Any setting that differs from the monitor's native resolution requires that the image be interpolated, meaning either shrunk or enlarged. Depending on the model used, this can have a noticeably adverse effect on image quality. Therefore, it is a good idea to choose a graphics card that offers good frame rates at your TFT's native resolution.
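To see how quickly the workload grows with resolution, it helps to simply count pixels:

```python
# Pixel counts for the common gaming resolutions of the day. Moving
# from 1024x768 to 1600x1200 means roughly 2.4x the pixels per frame.
resolutions = [(800, 600), (1024, 768), (1280, 1024), (1600, 1200)]
for w, h in resolutions:
    print(f"{w}x{h}: {w * h:,} pixels")

scale = (1600 * 1200) / (1024 * 768)
print(f"1600x1200 has {scale:.2f}x the pixels of 1024x768")
```

That factor applies to every frame, which is why a card that is comfortable at 1024x768 can fall well below playable frame rates at 1600x1200.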

In addition to the resolution chosen, a card's frame rate will also depend to a great extent on the game being run. The extensive use of complex shaders in new games slows down many older cards unacceptably, even if these same cards offer very reasonable performance when running older titles. Most PC games allow for a reduction in detail level, thereby also reducing the number and complexity of effects, but this of course has a negative impact on the image quality and, consequently, on the gaming experience. The most important factor here is the DirectX support of both graphics card and game, which should be on the same level (see the section on DirectX Technology).

Benchmark Results

Since the performance of a card depends to such a great extent on the game being played and the selected resolution, a large number of combinations must be tested to reach a conclusive verdict on a card's performance. Cards from different manufacturers may show different performance in the same game.

This picture shows a typical benchmark table from the THG VGA Charts. Here, the game Doom3 was tested at a resolution of 1024x768 at 32-bit color depth. 4xFSAA and 8x anisotropic filtering were enabled, and the quality setting "High" was selected.

To determine a card’s in-game performance, frame rate measurements are taken at distinctive points in the game. Many titles offer a recording feature for motion sequences, making it very easy to take comparable measurements for a number of cards. Some games measure the frame rate using a built-in function, while others require additional add-on utilities such as FRAPS. Another option for benchmarking tests is using in-game cut scenes, which are of course identical every time. Finally, for games that don’t offer any of the choices above, the only remaining option is to try to replicate the same series of movements manually on every card.

The results found in the benchmark tables are usually the average of several tests, showing the average frame rate a card is able to sustain in a game. Thus, a result of 60 fps means that the frame rate may dip below and rise above that number at different places in the game. Minimum scores would be more meaningful, but these are very difficult to determine; dips in frame rate can be caused by in-game loading or background activity of the operating system, and these factors cannot be easily replicated. Therefore, the average frame rate remains the most meaningful measuring standard.
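The gap between average and minimum frame rate is easy to see in a small calculation over recorded frame times, similar to what a tool like FRAPS logs. The helper function and the sample numbers here are our own:

```python
# Average vs. minimum frame rate from a list of captured frame
# times (in milliseconds).
def fps_stats(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    min_fps = 1000.0 / max(frame_times_ms)   # slowest single frame
    return avg_fps, min_fps

# One second of mostly smooth frames with a single loading hitch:
times = [16.7] * 59 + [100.0]
avg, worst = fps_stats(times)
print(f"average {avg:.0f} fps, minimum {worst:.0f} fps")
```

A single 100 ms hitch barely dents the average, yet it is exactly what the player perceives as a stutter -- which is why averages alone can flatter a card.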

Despite this, we can't emphasize often enough that these are indeed average values. If a card only runs a game at an average of 25 fps, the game will show pronounced stuttering during its "slower periods," which may seem to turn it into a slide show. In general, you should be on the safe side with a card that pushes 60-100 fps in games -- at the highest quality settings, of course.

You can find a good overview of the performance of different current and previous-generation graphics cards in the Tom's Hardware VGA Charts. Comparisons with older graphics cards can be found in previous iterations of our VGA Charts.


The system CPU has quite a bit of influence on the graphics card's performance. Even though modern graphics processors no longer need any CPU time for their calculations, the data they process has to be prepared by the CPU and then transferred to the card. Additionally, the CPU must take care of handling computer player AI, physics calculations and sound, all at the same time. To be able to push a fast graphics card to its limit, you'll also need a potent CPU.

Of course, the opposite case is just as true: a fast processor won't do any good if the graphics card is limiting the frame rate. The same also holds true for the system memory, which can hold the system back if it's too slow, or if there isn't enough of it. In summary, the individual components need to be well-balanced. A single weak component can cripple the entire system.

Fortunately, there aren't any bad choices where the graphics interface is concerned. The current standard is the AGP 8x bus, which will gradually be supplanted by its successor, PCI Express, over the coming months and years. For now, don't expect to see any performance increases from switching to the new bus, however! If you'd like to read up on PCI Express and its future role in the graphics market, take a look at our article here: http://graphics.



The abbreviations FSAA and AF stand for two methods of improving the image quality in 3D games. FSAA is short for Full Scene Anti-Aliasing, which is a technique for smoothing the edges of 3D objects within a scene. AF is shorthand for Anisotropic Filtering, which is a filtering method applied to textures on 3D objects to make them look crisper and less washed-out, greatly enhancing image quality. Both FSAA and AF are very demanding on graphics processors, especially when used in combination.

These features can usually be enabled or disabled through the graphics driver's control panel. Some games also let you enable them directly through the in-game options menu, without the need for special software. However, some games have trouble with FSAA, due to peculiarities of the graphics engine they use. In these cases, leaving FSAA disabled is usually the better choice, as image corruption can occur otherwise.

The advantages of FSAA become especially obvious on slightly slanted vertical object borders.

Anisotropic filtering results in much crisper textures.

Although the underlying principles are the same everywhere, the technical implementation of these techniques differs from company to company and even from one card generation to the next. On older graphics cards or newer low-end models, FSAA can only be used to a limited extent; this is usually either because the card’s performance is too low to deal with the extra calculations, or because it uses a slow or outdated FSAA method. There are also a number of different AF methods that differ both in calculation complexity and resulting image quality.
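The basic principle behind supersampling FSAA can be sketched in miniature: render the scene at a higher resolution, then average each block of subsamples down to one screen pixel. This toy example is our own illustration, not any vendor's actual implementation:

```python
# The idea behind (super sampling) FSAA, in miniature: each final
# pixel is the average of several subsamples, which softens hard
# black/white edges into intermediate grey steps.
def downsample_2x2(hi_res_row_pairs):
    """Average 2x2 blocks of a tiny grayscale image (0=black, 1=white)."""
    out = []
    for row0, row1 in hi_res_row_pairs:
        out.append([
            (row0[i] + row0[i + 1] + row1[i] + row1[i + 1]) / 4.0
            for i in range(0, len(row0), 2)
        ])
    return out

# A hard diagonal edge rendered at double resolution...
hi = [([0, 0, 0, 1], [0, 0, 1, 1]),
      ([0, 1, 1, 1], [1, 1, 1, 1])]
print(downsample_2x2(hi))   # edge pixels become intermediate grey values
```

Rendering four subsamples per pixel also means roughly four times the fill-rate and bandwidth cost -- which is exactly why FSAA is so demanding, and why multisampling variants that cut this cost were developed.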

Both FSAA and AF require a lot of computing power and memory bandwidth. For this reason, ATI and NVIDIA use heavily "optimized" versions of these methods to achieve better results (higher performance) while still offering greatly improved image quality compared to the standard rendering output. The heaviest optimization is done on the anisotropic filtering implementations. As a result, there are some cases in which a reduction in image quality compared to the "correct" or "real" method becomes visible. Unfortunately, both of the big players like to use this method of tweaking too much in order to try to win benchmark comparisons. Therefore, image quality and performance can differ immensely between driver versions, even on the same card!

You can read up on the texture filtering “opti- mizations” currently in use in the following article: graphic/20040603/index.html

Image Quality

Image quality is a topic that would easily merit its own article, if not a book in its own right. What I mean here is the quality of the rendered 3D scene as it appears on the player's screen. This whole discussion was originally caused by the tricks and tweaks that graphics card makers have begun to build into their drivers. Their goal is to get the most performance out of their cards, and to this end, certain calculations are sometimes either skipped or simplified. In principle, this is possible in a lot of places without the player being forced to accept reduced image quality. Unfortunately, the chipmakers tend to do a bit too much tweaking, especially to win performance comparisons. The result is often visibly reduced image quality, noticeable at least to experienced users. Casual gamers, on the other hand, may often not even notice anything. In our article (http://graphics.tomshard-) we took a look at a number of optimizations used by the graphics chip companies, and explained how they work and what effect they have on image quality and 3D performance.

FSAA comparison on GeForce FX and Radeon 9700 cards.

Here is an image quality comparison taken from the game FarCry using older drivers. In this driver, NVIDIA replaced some of the game’s own shaders with highly optimized ones. The result is visibly reduced image quality on the one hand, but improved performance on the other.

Meanwhile, the chipmakers have learned that many users don't necessarily want such optimizations, especially if they are forced upon them. Anyone who pays $500 (or more) for a graphics card understandably expects the highest possible image quality. This is especially so considering that such optimizations are not really that essential -- the enthusiast cards are now more than fast enough to handle the highest quality settings. In response, NVIDIA and ATI now allow most of these optimizations to be switched off in their most recent drivers.

Another reason for reduced image quality can be the use of reduced floating-point precision in DirectX 9 games. A good example of this is the game FarCry. NVIDIA's GeForce FX cards render most of the shaders using only 16-bit precision, which leads to pronounced visual artifacts (see also: raphic/20040414/geforce_6800-46.html). While NVIDIA has addressed these quality issues with newer drivers, frame rates have taken a nosedive as a result (http://graphics. graphic/20041004/vga_charts-08.html). NVIDIA was only able to overcome this performance handicap in DirectX 9 games with the new GeForce 6xxx line.

Since the image quality produced by a card can change with literally every driver release, we recommend staying informed by reading the reviews of new card generations, as we also regularly test image quality in these articles.

#4: Budget (Card Overview)

Each graphics chip maker develops products for every price category. Pictured here is NVIDIA's roadmap from the year 2003.

Cards can generally be categorized into three large groups, each of which can once again be subdivided into two subgroups. The two big graphics chip companies, ATI and NVIDIA, offer different chips for each of the various price brackets. Note that the boundaries between the categories tend to blur quite a bit, however, due to price fluctuations in the market.

The three main price groups are the entry-level or budget line, the mid-priced or mainstream products, and finally, the higher-end enthusiast cards. Again, within each of these there are two versions offering different performance levels -- one is the standard version, while the other runs at higher clock speeds. ATI denotes these faster cards by the addition of a "Pro" or "XT" to the card name, while NVIDIA's nomenclature uses the "GT" and "Ultra" suffixes.

For some further reading about image quality, check out these articles: graphic/20040603/index.html graphic/20040414/geforce_6800-43.html graphic/20040504/ati-x800-32.html

Low-cost products are often tagged as SE or LE parts. However, these budget cards sometimes don't carry any special tag at all, making them hard to tell apart from "the real deal". In these cases, only careful attention to the technical data will keep you from mistakenly purchasing the wrong card.

NVIDIA is a chipmaker only, focusing its attention solely on designing and producing graphics processors, while leaving the production and sale of retail cards to its board partners. ATI, on the other hand, is quite active in the retail market as well, albeit only in the United States and Canada. Its cards are usually designated "Built by ATI", while those produced and sold by other companies are "Powered by ATI."

Another factor further complicating any attempt to categorize the cards by price alone is the graphics cards from older generations, which keep getting cheaper due to the introduction of newer models. There are especially pronounced differences between NVIDIA and ATI here. ATI’s second-to-last generation of chips (Radeon 9500, 9700, 9800) is still very much up-to-date from a technological perspective, with DirectX 9 support and multisampling FSAA. The Radeon 9000 and 9200 cards are the exception here, as they are still based on the DirectX 8 design of the Radeon 8500, along with its slower supersampling FSAA implementation. Shader Model 3.0 is not supported by any ATI card at this point. The only cards that can actually take advantage of it are

those of NVIDIA’s GeForce 6xxx line.

In contrast, NVIDIA’s second-to-last generation of cards is, by today’s standards, technologically outdated (DirectX 8 and multisampling FSAA on the GeForce 4 Ti, DirectX 7 on the GeForce 4 MX). The last iteration of the GeForce FX 5xxx series performed very well in DirectX 8 titles, but drops to mediocre levels in current DirectX 9 games. As mentioned before, this weakness has been corrected in the new GeForce 6xxx line (note the absence of the “FX” designation).


[Chart: NVIDIA GPU positioning, price vs. performance]

Price Categories

Let’s now take a look at the three main price categories. We begin with the cheapest cards, which are the entry-level or low-budget products. These fall either into the sub-$100 category, or the price bracket between $100 and $150. The second category, usually called the “mainstream”, begins at $150 and reaches up to the $300 mark. In this category, the largest selection of cards can be found between $150 and $250. Last, we have the enthusiast category, which starts at around $300 and extends to $500 (and well beyond, in some cases). This is where the latest top models from ATI and NVIDIA are to be found.
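The bracket boundaries described above can be summarized in a short, illustrative snippet. The dollar thresholds are the mid-October 2004 figures quoted in the text; the function name and return labels are our own, purely for illustration:

```python
def price_category(price_usd):
    """Rough graphics card price brackets as described in the text (mid-October 2004)."""
    if price_usd < 150:
        return "entry-level / budget"   # sub-$100 and $100-$150 subgroups
    elif price_usd < 300:
        return "mainstream"             # densest selection between $150 and $250
    else:
        return "enthusiast"             # roughly $300-$500, and beyond in some cases

print(price_category(120))  # entry-level / budget
print(price_category(199))  # mainstream
print(price_category(499))  # enthusiast
```

Remember that in practice the boundaries blur, since street prices fluctuate and older-generation cards drift downward through the brackets.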

In the following overview, we have also listed cards from older generations that are still available in the market. The prices quoted here are current as of mid-October 2004; we make no guarantee as to the correctness of this information.

Note that in some cases it is rather difficult to determine which models actually exist in the market and what specifications they use. The low-cost sector, especially, is flooded with a multitude of different configurations for the same basic chip. A good starting place to get an overview is Gigabyte’s product page ( VGA/Products/Products_Comparison Sheet_List.htm).

Older Radeon Models

Radeon 9200

The RV 280 (Radeon 9200), like its predecessor the RV 250 (Radeon 9000), is based on the DirectX 8.1 design of the Radeon 8500 (R 200). Compared to the Radeon 8500 with its 4x2 pipe design, this chip only features half as many texture units per pixel pipeline (4x1) and only one vertex shader unit. The main differences between the Radeon 9000 and the 9200 are the newer part’s higher clock speeds and its support for the AGP 8x interface. It is produced on a 0.15-micron process and contains roughly 32 million transistors.

The greatest weaknesses of the Radeon 9200 are its outdated and slow supersampling FSAA implementation, as well as its limitation to bilinear filtering.


Radeon 9200 SE - 64/128 MB - 64-/128-bit DDR - 200/330 MHz

Radeon 9200 - 64/128 MB - 64-/128-bit DDR - 250/400 MHz

Radeon 9200 PRO - 128 MB - 128-bit DDR - 300/600 MHz

Radeon 9600

The Radeon 9600, which has the internal designation RV350, is the successor to the highly successful DirectX 9 chip RV300 (Radeon 9500). The RV300 only differed from the “big” R300 (Radeon 9700) in that it featured a memory bus that was pared down from 256 bits to 128 bits. In the standard version of the chip, ATI also disabled four of the eight pixel pipelines. Nonetheless, it was the exact same chip as the R300; its approximately 107 million transistors made it expensive to produce as a mainstream part. In the newer RV350, ATI didn’t just disable some of the pixel pipes through the card's BIOS, but physically reduced the number to


four in the chip design. Combined with a die-shrink to a 0.13-micron process, this made the 75-million transistor chip much cheaper to produce.

The Radeon 9600’s advantage over its predecessor lies in its much higher clock speeds, which usually outweigh the disadvantages incurred by the reduction in the number of pixel pipelines. Despite this, the Radeon 9600 Pro is sometimes outperformed by the Radeon 9500 Pro in fill-rate intensive applications. Other than that, the 9600 offers DirectX 9, modern multisampling and fast anisotropic filtering; in short, everything that the flagship products have.
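A rough back-of-the-envelope calculation shows why the older card can still win on fill rate: theoretical pixel fill rate is simply the number of pixel pipelines multiplied by the core clock, at one pixel per pipeline per clock. The 400 MHz figure for the 9600 Pro comes from the spec list below; the ~275 MHz core clock for the 9500 Pro is our assumption, not stated in the text:

```python
def pixel_fill_rate_mpix(pipelines, core_mhz):
    # Theoretical pixel fill rate in megapixels per second:
    # pipelines x core clock, assuming one pixel per pipeline per clock.
    return pipelines * core_mhz

# Radeon 9500 Pro: 8 pipelines at an assumed ~275 MHz core clock
r9500_pro = pixel_fill_rate_mpix(8, 275)   # 2200 Mpix/s
# Radeon 9600 Pro: 4 pipelines at a 400 MHz core clock
r9600_pro = pixel_fill_rate_mpix(4, 400)   # 1600 Mpix/s

print(r9500_pro, r9600_pro)
```

Eight slower pipelines thus yield more raw fill rate than four faster ones, which is exactly the situation described above for fill-rate intensive applications.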

The Radeon 9600XT (codename RV360) takes a special place in this line-up, though, as it is based on a more modern architecture than the earlier 9600 variants. For the first time, this makes driver optimizations for trilinear filtering possible, which results in much higher performance.


Radeon 9600 XT - 128/256 MB - 128-bit - 500/600 MHz

Radeon 9600 Pro - 128/256 MB - 128-bit - 400/600 MHz

Radeon 9600 - 64/128/256 MB - 128-bit - 325/400 MHz

Radeon 9600 SE - 64/128 MB - 64/128-bit - 325/365 MHz

Articles:
ic/20030416/index.html
ic/20031015/index.html

Radeon 9800

ATI’s flagship model of the past few years carries the internal designation R350. The main change from its predecessor, the Radeon 9700 (code name R300), is the increased clock speed, resulting in improved performance (especially when FSAA and AF are enabled). While other details were changed and improved as well, these aren’t really noticeable in practice. The chip is produced on a 0.15-micron process and consists of 107 million transistors. Its advantage over its smaller siblings lies in its 256-bit memory interface, giving it higher memory bandwidth, and a full complement of eight pixel pipelines. During the product run, ATI also introduced a 256 MB version featuring DDR II video memory.



[Table: entry-level card overview - columns: Price Range, Lowest Price, Shader Model - listing the XGI Volari V3, NVIDIA GeForce FX 5200, Radeon 9550 SE, and Radeon 9600 SE/LE]