'Trusted computing base' is the term for the collection of computer components whose correct functioning is sufficient to ensure that security decisions in that computer system can be enforced.

The trusted computing base (abbreviated to TCB) also includes all hardware, firmware, software and procedural components, along with the people responsible for the reliability and integrity of those components. In short, it includes every component whose failure would allow a security breach.

In modern systems a TCB is responsible for controlling and authenticating access by users and their programs to resources such as memory, files/filesystems and peripherals. It is also responsible for system integrity checking, and for the trusted communication path between the user and the computer. One example of the latter is CTRL+ALT+DELETE in Windows NT and 2000 - it cannot be intercepted by any other program, and therefore guarantees that you have Windows' attention.2
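
To see why this works, here is a toy, self-contained C sketch - all names are hypothetical, and this is not Windows source code - of the idea behind a Secure Attention Key: the combination is recognised inside the kernel's keyboard handler, before the keystrokes are ever delivered to a user program, so nothing in userspace gets the chance to intercept or spoof it.

    #include <stdbool.h>
    #include <stdio.h>

    #define SCANCODE_DELETE 0x53

    struct keystate { bool ctrl, alt; };

    /* stubs standing in for real kernel facilities */
    static void wake_trusted_login(void)     { puts("-> trusted login dialog"); }
    static void deliver_to_userspace(int sc) { printf("-> app sees scancode %#x\n", sc); }

    /* The SAK check lives in the interrupt handler itself, so no user
     * program runs before it and none can fake the login prompt. */
    static void keyboard_interrupt(const struct keystate *ks, int scancode)
    {
        if (ks->ctrl && ks->alt && scancode == SCANCODE_DELETE) {
            wake_trusted_login();
            return;                      /* userspace never sees these keys */
        }
        deliver_to_userspace(scancode);  /* the normal delivery path */
    }

    int main(void)
    {
        struct keystate ks = { .ctrl = true, .alt = true };
        keyboard_interrupt(&ks, SCANCODE_DELETE);  /* trusted path invoked */
        keyboard_interrupt(&ks, 0x1e);             /* ordinary keystroke */
        return 0;
    }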

The concept of a TCB was first proposed in a 1972 study by James P. Anderson for the USAF, entitled 'Computer Security Technology Planning Study'. In it, he proposed that responsibility for the security of a given system be delegated to a specific subsystem, which could then be specially designed to resist the threats outlined in the specifications for that system.

The report also laid out some fundamental principles to which a TCB must adhere:

  1. The making and enforcing of decisions must be tamper proof.

    If the TCB can be altered, compromised or influenced, it has no integrity, almost by definition. The idea of the TCB is to define the boundaries within which integrity must be maintained.

  2. The TCB must always be involved in security decisions.

    When system calls fail for no apparent reason, programs tend to stop responding - which is why programs running under an active TCB must be able to check their own permissions before interacting with the rest of the computer, and why the operating system must run the same checks on its own components. In every case, the decision itself rests with the TCB, which mediates each access (see the sketch after this list).

  3. The TCB must be small enough to be subject to analysis and testing, to assure that it is correct.

    Since anyone wishing to break the security of the system must undermine, bypass or break the TCB, it becomes a major target for malicious users. In addition, because failure of the decision-making components would cause severe problems for the system, they must be extremely reliable.
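
As a small illustration of the second principle, here is a minimal C sketch - my own example, not taken from the Anderson report - in which a program attempts an access and the kernel, acting as the TCB, makes and enforces the decision. The program only learns the outcome, and must handle a refusal gracefully rather than hang:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* /etc/shadow stands in for any object the security policy protects */
        int fd = open("/etc/shadow", O_RDONLY);
        if (fd < 0) {
            /* The kernel (the TCB) made the decision; the program only
             * sees the outcome, and must cope with it. */
            fprintf(stderr, "open() refused: %s\n", strerror(errno));
            return 1;
        }
        /* ... authorised: read the file ... */
        close(fd);
        return 0;
    }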

This last principle is the one most often ignored by modern systems. Subsystems of modern operating systems, such as memory management, must serve programs running at all levels of authorisation, and the usual solution is to make the memory management subsystem part of the TCB. This leads down the slippery slope to what is known as "TCB bloat", where large subsystems of the OS, and even utilities and so on, end up in the TCB. Needless to say, this complicates testing and increases the cost of verifying the correctness of the TCB.

A modern TCB typically consists of:

  • The kernel (operating system)
  • The necessary systems for the trusted communication path
  • Trusted shell/windowing system
  • Any configuration files that control system operation
  • Any program that:
    • is run with the privilege or access rights to alter the kernel or the configuration files
    • is run exclusively by administrators with system-level privileges
    • must be run by the administrator while on the trusted communication path
      (for example, the ls command used to view the contents of a directory)

...and that's just the software. The trusted hardware and software in the Class C2-certified configuration for the IBM RS/6000 (PPC) run into the hundreds of components1.

A TCB is most often thought of as a subsystem of large servers - the initial research was done for the USAF Electronic Systems Division (Air Force Systems Command) in 1972. Since then, vendors such as Sun Microsystems, IBM and other similarly huge companies have been selling the US DoD enormous mainframes, most of which had a built-in TCB.

But the definition of a TCB also covers PDAs and smartcards - at a personal level, a credit-card-sized computer which never leaves your side is 'trusted'. This is one of the principles behind the (flawed) idea of introducing ID cards in the UK.

At a higher level, new research work on the Mach microkernel has led to developments like the Flask architecture, as implemented in SE Linux, which isolates security policy decision-making in a 'security server' and delegates enforcement to the other kernel subsystems.
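
The split is easy to picture in code. Below is a toy C sketch - all types and names are hypothetical, not the actual SE Linux API - in which an 'object manager' such as a filesystem holds no policy of its own: it queries a separate security server for every decision and merely enforces the answer.

    #include <stdbool.h>
    #include <stdio.h>

    typedef int sec_id_t;                     /* security identifier (SID) */
    enum perm { PERM_READ, PERM_WRITE };

    /* -- security server: the only component that interprets policy -- */
    static bool security_compute_decision(sec_id_t subject, sec_id_t object,
                                          enum perm p)
    {
        /* a real server consults a compiled policy database; this toy
         * policy allows all reads but writes only to one's own objects */
        return p == PERM_READ || subject == object;
    }

    /* -- object manager (e.g. a filesystem): enforcement only -- */
    static int file_write(sec_id_t task_sid, sec_id_t file_sid)
    {
        if (!security_compute_decision(task_sid, file_sid, PERM_WRITE)) {
            puts("write denied");             /* decision enforced here */
            return -1;
        }
        puts("write permitted");
        return 0;
    }

    int main(void)
    {
        file_write(1, 1);   /* own object: permitted */
        file_write(1, 2);   /* someone else's object: denied */
        return 0;
    }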

The term has recently been employed by Microsoft and various hardware manufacturers to describe TCPA and Palladium, aka NGSCB ('Next Generation Secure Computing Base'). A key difference is that while a traditional TCB is meant to be trusted and controlled by its owners, Palladium et al. are also intended to be trusted (and controlled, to a certain extent) by content providers such as Disney, and by corporate entities such as Microsoft.


1: The RS/6000 Class C2 evaluated TCB components can be found at http://www.rs6000.ibm.com/idd500/usr/share/man/info/en_US/a_doc_lib/C2/tfm/appa_tcb.htm

2: ariels has correctly pointed out that the effectiveness of a SAK (Secure Attention Key) is dependent upon the assurance of physical security for the computer.

Sources:
Security Engineering by Ross Anderson
http://www.usenix.org/publications/library/proceedings/sec99/full_papers/helme/helme_html/node14.html (from Google cache)
http://www.its.bldrdoc.gov/fs-1037/dir-038/_5629.htm
http://www.cl.cam.ac.uk/~rja14/policy11/node22.html
http://csrc.nist.gov/publications/history/index.html
http://nim.cit.cornell.edu/usr/share/man/info/en_US/a_doc_lib/aixbman/admnconc/tcb.htm