VideoLogic's PowerVR PCX1 was advertised as a "3dfx Upgrade". Quirky Brits. It didn't even support bilinear filtering, though perspective correction was plastered all over the box. Crap, really.

PCX2 was the second iteration; it upped the clock and added bilinear filtering. It still lacked many blending modes (it had only 4 bits of intensity for light mapping, which was fugly), but on single-texture games the filtering was arguably better than 3dfx's offerings. And lordy, was it fast for its time.
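To see why 4 bits of light-map intensity looks so bad, here's a minimal C sketch; the values and the round-trip are my own illustration, not the actual PCX2 blend path. Squashing an 8-bit intensity into 16 levels collapses neighboring inputs onto the same output, which is exactly the banding you'd see crawl across a lit wall:

```c
#include <stdio.h>

/* Hypothetical illustration, not actual PCX2 blender behavior:
 * squash an 8-bit light-map intensity down to 4 bits (16 levels)
 * and expand it back. Adjacent inputs land on the same output
 * level, so smooth gradients turn into visible bands. */
int main(void)
{
    for (int in = 0; in < 256; in += 8) {
        int level = in >> 4;              /* keep the top 4 bits: 0..15  */
        int out   = (level << 4) | level; /* expand 0..15 back to 0..255 */
        printf("in %3d -> level %2d -> out %3d\n", in, level, out);
    }
    return 0;
}
```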

The reason it needed a fast processor was that the driver had to convert the scene data games fed it into a language (crazy moon language, apparently) that the video card could understand.
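To give a feel for that translation (the "moon language" is essentially per-region triangle lists, as the tile-rendering break below explains), here's a rough C sketch of the binning the driver had to grind through on the host CPU. The tile size and structures are my own invention, not the real PowerVR display-list format:

```c
#include <stdio.h>

#define TILE     32                     /* assumed tile size, for illustration */
#define TILES_X  (640 / TILE)
#define TILES_Y  (480 / TILE)
#define MAX_TRIS 256

typedef struct { float x[3], y[3]; } Tri;

typedef struct {
    int count;
    int tri[MAX_TRIS];                  /* indices of triangles touching this tile */
} TileList;

static TileList bins[TILES_Y][TILES_X];

static float minf3(float a, float b, float c)
{ float m = a < b ? a : b; return m < c ? m : c; }
static float maxf3(float a, float b, float c)
{ float m = a > b ? a : b; return m > c ? m : c; }

/* File a screen-space triangle into every tile its bounding box
 * overlaps. A real binner tests against the actual edges; the
 * bounding box is enough to show the idea. */
static void bin_triangle(const Tri *t, int index)
{
    int x0 = (int)(minf3(t->x[0], t->x[1], t->x[2]) / TILE);
    int x1 = (int)(maxf3(t->x[0], t->x[1], t->x[2]) / TILE);
    int y0 = (int)(minf3(t->y[0], t->y[1], t->y[2]) / TILE);
    int y1 = (int)(maxf3(t->y[0], t->y[1], t->y[2]) / TILE);

    for (int ty = y0; ty <= y1; ty++)
        for (int tx = x0; tx <= x1; tx++)
            if (ty >= 0 && ty < TILES_Y && tx >= 0 && tx < TILES_X &&
                bins[ty][tx].count < MAX_TRIS)
                bins[ty][tx].tri[bins[ty][tx].count++] = index;
}

int main(void)
{
    Tri t = { { 10.0f, 100.0f, 60.0f }, { 10.0f, 20.0f, 90.0f } };
    bin_triangle(&t, 0);
    printf("tile (0,0) holds %d triangle(s)\n", bins[0][0].count);
    return 0;
}
```

Doing this per triangle, per frame, in the driver is why the PCX chips leaned so hard on the host CPU.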


PowerVR Series 2 was next. I love how they forgot about their first chip. Oh well, nVidia did it too with the truly bizarre NV1. Unlike the PCX2, it was not a big bag of suck when it came to blending modes. More importantly, especially for average framerates in complex scenes, the PVR2 introduced variable tile sizes. Most importantly, especially for the PC game space, the card converted standard 3D data to its bizarre moon language on-chip, not in the driver (yay). PC development, however, was sidetracked for Sega's Dreamcast, which chose the PowerVR Series 2 for its innards thanks to its low memory usage and low production costs.

Break for a moment: tile-based rendering. Okay, in an extremely general nutshell: tile-based (region-based) rendering is an entirely different way to think about 3D rendering. Rather than drawing triangles as they arrive and leaning on the Z-buffer to figure out (roughly) what to texture and fill and whatnot, a region of the display is picked out and processed all at once. Things that are not visible are never drawn at all. Very cool. It solves the easy side, though, not the hard side: the entire scene must still be transformed by the system processor. Only the drawing (rendering) of the scene is sped up.
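Here's a toy C sketch of that per-tile pass; the structures are invented for illustration, not PowerVR's actual hardware pipeline. The point is that visibility is settled with cheap depth compares first, and texturing happens exactly once per pixel, so opaque overdraw costs nothing:

```c
#include <stdio.h>

#define TILE 4   /* tiny tile so the demo output stays readable */

/* Stub "depth plane": each surface is a constant depth here.
 * Real hardware evaluates a per-triangle depth equation per pixel. */
static float surface_depth[3] = { 0.8f, 0.3f, 0.5f };

static void shade(int id, int x, int y)
{
    printf("pixel (%d,%d): texture surface %d, exactly once\n", x, y, id);
}

static void render_tile(const int *surfaces, int n)
{
    for (int y = 0; y < TILE; y++)
        for (int x = 0; x < TILE; x++) {
            float best_d  = 1.0f;   /* far plane */
            int   best_id = -1;

            /* Pass 1: hidden-surface removal. Depth compares only;
             * no texture fetches, no overdraw. */
            for (int i = 0; i < n; i++)
                if (surface_depth[surfaces[i]] < best_d) {
                    best_d  = surface_depth[surfaces[i]];
                    best_id = surfaces[i];
                }

            /* Pass 2: shade only the winning fragment. */
            if (best_id >= 0)
                shade(best_id, x, y);
        }
}

int main(void)
{
    int list[3] = { 0, 1, 2 };  /* triangles binned to this tile */
    render_tile(list, 3);
    return 0;
}
```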
Think of it this way: the integer part of 3D rendering is potentially much faster, but the floating-point part is untouched. While both are problems, the polygon-count limitations of the Dreamcast are due to the 200 MHz SH4 processor, not the PowerVR Series 2. Keep in mind, however, that translucent polygons in view pretty much negate the bandwidth advantage in that region of the screen. Also of note, the Hitachi SH4, programmed well, is capable of about 1.3 GFLOPS. I don't know if anyone could reach that peak, but it is technically possible. The Hitachi SH4 can process four simultaneous 32-bit floating-point calculations (hence the "128 bit" on the Dreamcast's box). It is a weird chip too, though: 64-bit memory bus, 32-bit address space, and a 16-bit instruction set (!) to conserve memory.
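To make the untouched half concrete, here's the kind of per-vertex floating-point work that stays on the CPU, written out in plain C. The SH4's ftrv instruction evaluates one of these 4x4 matrix-times-vector products in hardware (that's the four-simultaneous-floats business); in scalar terms it's 16 multiplies and 12 adds for every single vertex in the scene, every frame:

```c
#include <stdio.h>

/* The floating-point half that tile-based rendering does NOT solve:
 * transforming every vertex by a 4x4 matrix on the host CPU.
 * 16 multiplies + 12 adds per vertex, per frame. */

typedef struct { float v[4]; } Vec4;

static Vec4 transform(const float m[4][4], Vec4 in)
{
    Vec4 out;
    for (int row = 0; row < 4; row++)
        out.v[row] = m[row][0] * in.v[0] + m[row][1] * in.v[1] +
                     m[row][2] * in.v[2] + m[row][3] * in.v[3];
    return out;
}

int main(void)
{
    const float identity[4][4] = {
        { 1, 0, 0, 0 }, { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 }
    };
    Vec4 p = { { 1.0f, 2.0f, 3.0f, 1.0f } };
    Vec4 q = transform(identity, p);
    printf("(%g, %g, %g, %g)\n", q.v[0], q.v[1], q.v[2], q.v[3]);
    return 0;
}
```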

Course, this isn't a Dreamcast or SH4 node. Moving along...

By the time the PVR2 was available for the PC, no one cared. Had it been released when the Dreamcast launched in Japan, it would have been the best card on the market. It was really a bad mamma jamma for a 100MHz oddball. The drivers kinda sucked, but the image quality was great. Plus, a few demo applications really showed its potential. In particular, a D3D app in which the 52 cards of a deck fell to the ground (meant to test fill rate) showed the card hitting 1 gigapixel per second. Astounding for a 100MHz card, and it showed that they were indeed onto something. Again, nobody cared.

The PVR3 was released, upping the clock again and making the moon-language interpreter even faster. You may have heard of it as the Kyro/Kyro II. The Kyro II was the same chip at a higher clock speed.

I love it when a company has the balls to step in a different direction than the rest of the planet. They have been proven technically right, in my humble opinion. They just need to solve the other half of the equation and put in a T&L processor.


My prediction for the future? They will die an agonizing death. nVidia bought 3dfx only for GigaPixel, and GigaPixel only had one thing: an idea for the internal rendering structure of a chip. Basically, it made the external interfaces look completely normal (in theory) and made the renderer tile-based. When nVidia goes tile-based, cash in your chips; they've already got the T&L part licked. I'll miss those guys. *sniff*

If you feel like voting this down, please tell me why. I don't understand much of this; it feels like a clique to me ;)