Shared memory architecture (SMA) is a way of cutting costs in computer system design by eliminating traditional video memory and using system memory (RAM) instead. This cost reduction comes at the expense of reduced performance.

In a typical (non-SMA) computer system with a separate graphics card, the card has its own fast memory on board (usually 32-128MB). This memory is used to store textures and information about each pixel the card displays. Ideally, when the graphics processor needs some piece of data (textures, etc.), that data will already be in the card's video memory. If it isn't, the card has to fetch the data from system memory over the bus, which takes a LOT longer. This, obviously, results in decreased performance.
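To make that hierarchy concrete, here is a minimal C sketch of the fallback. The latency figures are illustrative assumptions, not measurements from any particular card:

```c
#include <stdio.h>

/* Illustrative per-access latencies (assumed figures, not
   measurements): on-card VRAM vs. a fetch over the AGP/PCI bus. */
#define VRAM_LATENCY_NS    20
#define SYSRAM_LATENCY_NS 200

/* A texture fetch is cheap if the texture is resident in the
   card's own memory; otherwise the card must pull it across the
   bus from system memory, costing roughly ten times as much. */
static int texture_fetch_ns(int resident_in_vram)
{
    return resident_in_vram ? VRAM_LATENCY_NS : SYSRAM_LATENCY_NS;
}

int main(void)
{
    printf("resident texture:     %d ns\n", texture_fetch_ns(1));
    printf("non-resident texture: %d ns\n", texture_fetch_ns(0));
    return 0;
}
```

The exact numbers vary by card and bus, but the order-of-magnitude gap is the point: a card that keeps its working set in VRAM rarely pays the slow path.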

In a computer system with an SMA-based graphics system, there usually isn't a physical graphics card at all. Rather, most of the functions of the graphics card are integrated into the motherboard's chipset. This results in substantial cost savings, as it eliminates one of the more expensive components of the computer system. Generally, these integrated graphics subsystems are somewhat lacking in performance; they are not especially well-suited to graphics-intensive applications such as computer games. One of the main factors resulting in this reduced performance is SMA.

Rather than having separate (and therefore costly) video memory, the integrated graphics system reserves a part of the main system memory for itself. The amount of system memory that is reserved is usually adjustable by the user (it generally ranges from 8-64MB). This reserved memory appears to the rest of the system as if it were physically separate (call notes that the computer actually treats the framebuffer memory as part of "normal" memory, which is how the computer manages to draw to it in the first place). The reserved framebuffer memory becomes unavailable for other things (such as the OS or applications), thus reducing the total amount of memory available to the system and somewhat reducing system performance. The real performance decrease, though, results from the video subsystem having to use the system RAM. In addition to having higher latency than video RAM, system RAM provides less usable memory bandwidth to the graphics system, since the graphics system is forced to compete for memory bandwidth with the CPU and other system devices.
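Some back-of-the-envelope arithmetic shows how much of the shared bus mere screen refresh eats. The figures below (PC133 SDRAM peak bandwidth, a 1024x768 32-bit display at 75Hz) are assumed, era-typical values, not taken from any particular machine:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed, era-typical figures. */
    const double bus_mb_s   = 1064.0;     /* PC133: 133 MHz x 8 bytes */
    const int    width      = 1024, height = 768;
    const int    bytes_px   = 4;          /* 32-bit color */
    const int    refresh_hz = 75;

    /* Just refreshing the display (scanout) consumes this much of
       the shared bus, before the CPU or GPU does any real work. */
    double scanout_mb_s = (double)width * height * bytes_px * refresh_hz
                          / (1024.0 * 1024.0);

    printf("scanout: %.0f MB/s (%.0f%% of a %.0f MB/s bus)\n",
           scanout_mb_s, 100.0 * scanout_mb_s / bus_mb_s, bus_mb_s);
    return 0;
}
```

Roughly a fifth of the memory bus is gone before the CPU or the 3D engine touches a single byte, which is why SMA systems feel the pinch most in memory-hungry games.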

In many situations, though, SMA's degraded performance is far outweighed by the resulting cost savings and reduced overall complexity. SMA can sometimes be found in low-cost desktop systems (and in old Macs such as the IIsi and IIci (thx mkb!)), but it is most frequently used in laptops. People rarely buy laptops with the intention of doing anything particularly graphics-intensive, and SMA's performance is more than sufficient for most common applications. And since laptops rarely have extremely high-resolution displays, reserving a massive amount of system memory isn't necessary (a quick calculation follows). Because of this, the amount of system memory that is unavailable to the OS is fairly inconsequential to the computer's overall performance.
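Here is that calculation as a small C sketch, showing how little memory a typical laptop display actually needs; the panel resolution and color depth are assumptions:

```c
#include <stdio.h>

int main(void)
{
    /* An assumed, typical laptop panel of the era. */
    const int width = 1024, height = 768, bytes_px = 4; /* 32-bit color */

    /* One full framebuffer at this resolution and depth... */
    double fb_mb = (double)width * height * bytes_px / (1024.0 * 1024.0);

    /* ...so even double-buffered with a Z-buffer, the display itself
       fits comfortably within the 8-64MB that can be reserved. */
    printf("single buffer: %.1f MB, double-buffered + Z: %.1f MB\n",
           fb_mb, fb_mb * 3);
    return 0;
}
```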


spiregrain points out that I neglected to mention the Amiga, a classic example of SMA, whose SMA system knocked the socks off of the VRAM-using systems of its day.
call informs me that the Acorn Archimedes machines all had a shared memory architecture as well. He also mentions that SGI's O2 had a "unified memory architecture", which is essentially SMA with a crossbar and a high-bandwidth main memory system.
