SHIP DATE: ~September 1, 2001*

The Radeon 8500 (and the Radeon 8500LE, which is the exact same thing with lower clock speeds) is a neat card. It's also probably what pulled ATi, its manufacturer, out of the dark depths of anonymity and (shudder) integrated video. At the time of release, it seemed a vastly superior card on paper; it had hardware that made nVidia's GeForce3, the top dog then, cower in fear. However, ATi had not yet attained mastery of drivers, and the 8500 did have significant problems. Odd "ghosting" in random games, bizarre texture screwups, and worse performance than a card with hardware like that should have had all made for a general distrust of the card.

However, as of this writing ATi's shaped up, and their drivers are much better than they were at the Radeon 8500's release. This is sort of a moot point now that the Radeon 9500 (Pro) and Radeon 9700 (Pro) are out, but the 8500 still offers good performance, and it's priced quite attractively. ATi has finally succeeded in doing what it really couldn't with its old drivers: making the GeForce3 effectively obsolete.

Much like its younger, quicker brothers based on the R300 core, the 8500 has plenty of specs and marketing jargon. So without further ado:

The GPU is sort of the graphics card's processor. In fact, that's exactly what it is. GPU stands for Graphics Processing Unit (or Graphical Processing Unit, it doesn't matter either way, really), and it basically does the majority of the video card's calculations.

- R200 core
R200 is the name of the core (well, the name of its design), just as NV20 is the name of the GeForce3's core. The R200 was a bit of a brute force attack against NV20; what it lacked in finesse and software-side competence, it more than made up for with raw clock speed and rendering horsepower. With a 2.2 gigatexel/sec fill rate and 8.8GB/sec of memory bandwidth, it certainly killed the GeForce3 on paper. Some of the more bizarre "features" of R200, such as HyperZ II and Smoothvision, will be explored below.
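Those headline numbers fall straight out of the clocks and the bus width. The pipeline counts and 128-bit bus below are the commonly cited R200 specs, not something stated in this article, so treat this as back-of-the-envelope math rather than gospel:

```python
# Back-of-the-envelope math for the R200's headline numbers.
# Pipeline and bus figures are the commonly cited R200 specs (assumed here).
core_mhz = 275          # Radeon 8500 core clock
pixel_pipelines = 4     # R200: 4 pixel pipelines...
textures_per_pipe = 2   # ...each with 2 texture units

# Fill rate: one texel per texture unit per clock, across all pipelines.
fill_rate_gtexels = core_mhz * pixel_pipelines * textures_per_pipe / 1000
print(f"{fill_rate_gtexels:.1f} gigatexels/sec")  # 2.2

mem_mhz = 275           # physical memory clock
ddr_factor = 2          # DDR transfers data twice per clock
bus_bytes = 128 // 8    # 128-bit bus = 16 bytes per transfer

bandwidth_gb = mem_mhz * ddr_factor * bus_bytes / 1000
print(f"{bandwidth_gb:.1f} GB/sec")  # 8.8
```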

- 275MHz clock for regular, 250MHz clock for LE
The Radeon 8500 was the first Radeon to be built on a 150 nanometer process; before, all had been built on a 180 nanometer process. (150 nanometers is equal to .15 microns, and both are used commonly.) The GeForce3 Titanium 500, the top-end GeForce3, came with a core speed of 240MHz; even the LE beat it there. This is where some of that "brute force" I talked about starts to come in.

Every card needs memory. After all, the days of dipping into system memory are long gone: increasingly complex textures demand as much as 64MB of memory, and even if the hit to system memory weren't bad enough, video cards come equipped with much faster memory than the motherboard can offer-- and the GPU can access onboard memory far more quickly than it can reach system RAM.

- 64MB or 128MB DDR (550MHz for regular, 500MHz for LE)
64MB, back then (if you're wondering, yes, it does feel odd saying "back then" about a time less than eighteen months ago), was almost always more than enough, and 128MB was overkill. Nevertheless, it was seen as a wise investment in the future; back when the Voodoo 2 came in 8MB and 12MB flavors, the 12MB version cost more and did nothing extra at first, but games later emerged that put all 12MB to good use, and those who had bought the 12MB versions turned out to be the smart ones. This being a very similar situation, sales of the 128MB units were surprisingly high. The GeForce3 Titanium 500 (again, the top dog at the time) had 500MHz memory. The memory clocks are technically 250MHz and 275MHz, but the memory is DDR (not Dance Dance Revolution), so it's effectively 500MHz and 550MHz, and everyone quotes the doubled numbers anyway.
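The "effective" speeds are just the physical clocks doubled, since DDR moves data on both edges of the clock:

```python
# DDR (double data rate) memory transfers on both the rising and falling
# clock edge, so the marketed "effective" speed is the physical clock x2.
physical_mhz = {"8500": 275, "8500LE": 250}
effective_mhz = {name: clk * 2 for name, clk in physical_mhz.items()}
print(effective_mhz)  # {'8500': 550, '8500LE': 500}
```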

Special features of the design, or core concepts that Marketing can pass off as "features."

- HyperZ II
HyperZ II is an update of the original HyperZ. Fast memory is pricey and memory bandwidth is at a premium to begin with, so manufacturers have to come up with increasingly elaborate ways to save on it. The first piece of HyperZ II is "Z-compression," which losslessly compresses data going to the Z buffer by up to 4:1 in optimal situations. The second is "Fast Z Clear," which marks the buffer as cleared instead of laboriously writing out every value, so emptying it takes a fraction of the usual time. The third is "Hierarchical Z," and it makes a lot of sense as a concept: it simply doesn't bother working on pixels that won't be seen in the final image. These are the same concepts as in the first HyperZ, but the algorithms behind them have been improved (and are actually effective now), just as they were improved again in HyperZ III.
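The Hierarchical Z idea is easy to sketch: keep a coarse, per-tile record of the farthest depth already drawn, and if an incoming triangle can't possibly be nearer than that anywhere in the tile, skip the whole tile without touching the fine-grained Z buffer. This is purely a conceptual illustration, not ATi's actual hardware algorithm; the tile size and data layout are my own simplifying assumptions:

```python
# Conceptual sketch of Hierarchical Z (illustrative only, not ATi's hardware).
# Depth convention: smaller values are closer to the camera.
TILE = 4  # assume 4x4-pixel tiles

def build_hi_z(zbuffer, tile=TILE):
    """Per-tile record of the farthest (max) depth already in the Z buffer."""
    h, w = len(zbuffer), len(zbuffer[0])
    return {
        (ty, tx): max(
            zbuffer[y][x]
            for y in range(ty, min(ty + tile, h))
            for x in range(tx, min(tx + tile, w))
        )
        for ty in range(0, h, tile)
        for tx in range(0, w, tile)
    }

def tile_occluded(hi_z, ty, tx, nearest_incoming_z):
    """If the incoming geometry's *nearest* depth in this tile is still
    farther than everything already drawn there, every one of its pixels
    is hidden -- reject the tile without any per-pixel Z-buffer reads."""
    return nearest_incoming_z > hi_z[(ty, tx)]
```

The bandwidth saving comes from that early `return`: a rejected tile never generates per-pixel Z-buffer traffic at all.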

- Smoothvision
Smoothvision is ATi's fancypants name for its rather advanced (at the time) full-screen anti-aliasing (FSAA) and anisotropic filtering. The Radeon 8500 improves on the GeForce3's FSAA capabilities, but a lot relies on the developer, since the 8500's FSAA is highly programmable. With programmable FSAA, anti-aliasing looks less, well, ugly: a fixed-function card antialiases everything it can with the same pattern everywhere, while programmable antialiasing lets the developer tailor the sampling to areas that are heavily aliased. This is a Good Thing®, but it does require quite a bit of work on the developer's end.
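The fixed-versus-programmable distinction can be shown with a toy supersampling resolve: a pixel's final color is the average of the scene sampled at several sub-pixel offsets, and the *pattern* of those offsets is what a programmable card lets you change. Everything here (the `shade` function, the specific offset coordinates) is hypothetical, just to show the idea:

```python
# Toy supersampling resolve to illustrate FSAA sample patterns.
# shade(x, y) is a hypothetical scene function returning a brightness value.
def resolve_pixel(shade, px, py, pattern):
    """Average the scene at several sub-pixel offsets within pixel (px, py).
    `pattern` is the sample layout: a fixed-function card uses one grid
    everywhere, while a programmable card can vary it."""
    samples = [shade(px + dx, py + dy) for dx, dy in pattern]
    return sum(samples) / len(samples)

# A plain ordered 2x2 grid vs. a rotated pattern; rotated offsets tend to
# catch near-horizontal and near-vertical edges that a grid lines up with.
ORDERED_2X2 = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
ROTATED     = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]
```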

I expect you'll want to know how it performs compared to the cards of its day AND some more modern cards, so instead of swiping benchmarks, I'll point you to reviews and comparisons that have good sets of them. AnandTech's original review of the Radeon 8500 shows how the card was limited by its drivers at first. A later review of the card with newer drivers-- drivers that were not, sadly, made available to the end user for quite some time-- shows what it could really do. And a Tom's Hardware Guide benchmark series includes the Radeon 8500's performance on modern systems. I have to admit I'm not very fond of Tom's Hardware Guide, but I don't see how they could screw THIS up.

* Estimated. ATi wasn't good to its users about actual ship dates.