Security engineering is the practice of designing and building systems that can withstand attack. In addition to the obvious areas of study, such as operating system design, cryptography, access control, and audit, it draws on law, economics, physical security, EMSEC, and computer engineering. One basic premise of security engineering, as it is practiced today, is that a perfectly secure system is impossible to build, and that budgets are limited. Thus, security engineering is about managing and controlling risk, rather than preventing it completely; the banking security community has known this for decades, and computer security companies have only recently figured it out as well.
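
To make the risk-management framing concrete, here is a minimal sketch of the arithmetic involved, using the standard annualized loss expectancy formula (ALE = SLE x ARO); all the dollar figures and rates below are invented for illustration:

    # Sketch of the risk-management arithmetic, using the standard
    # annualized loss expectancy (ALE) formula. All figures here are
    # invented for illustration.

    def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
        """ALE = SLE * ARO: expected yearly loss from one threat."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Hypothetical threat: an incident costing $50,000 per event,
    # expected twice a year without countermeasures.
    baseline = ale(50_000, 2.0)                # $100,000/year expected loss

    # A $30,000/year control that cuts the occurrence rate to 0.25/year:
    with_control = ale(50_000, 0.25) + 30_000  # $42,500/year total cost

    # The control is worth buying only if it lowers total expected cost.
    print(f"baseline: ${baseline:,.0f}, with control: ${with_control:,.0f}")

The point of the exercise is that a control is justified by weighing its cost against the loss it prevents, not by whether it makes attacks impossible.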

A lot of security engineering research gets bogged down in mathematical formalism, leading to things like the Orange Book and Multics. And while subjects like cryptography are important, they are just tools rather than the whole picture. It is entirely possible to be an excellent security engineer without being a cryptographer, though it is probably necessary to understand cryptography: you must be able to apply it appropriately and correctly, but you need not be able to design new algorithms. While a number of research groups are attempting to bridge the gaps between the disciplines that combine to form security engineering, there are still vast gulfs between the legal, accounting, and audit communities that define the requirements for a system and the technical communities that build it.
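
To illustrate the distinction, here is a sketch of what applying cryptography appropriately usually looks like in practice: reaching for a vetted authenticated-encryption primitive from a maintained library (here Python's third-party cryptography package) rather than composing ciphers and MACs by hand. The message and key handling are simplified for illustration:

    # Applying cryptography correctly without designing anything new:
    # use a vetted authenticated-encryption primitive (AES-GCM) from a
    # maintained library instead of rolling your own construction.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # secret; real systems need key management
    aead = AESGCM(key)

    nonce = os.urandom(12)        # 96-bit nonce; must be unique per key
    message = b"transfer $100 to account 42"
    associated = b"msg-id:7"      # authenticated but not encrypted

    ciphertext = aead.encrypt(nonce, message, associated)

    # Decryption verifies integrity as well as confidentiality: a
    # tampered ciphertext raises InvalidTag instead of returning garbage.
    plaintext = aead.decrypt(nonce, ciphertext, associated)
    assert plaintext == message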

A good example of the successful application of security engineering principles is the ATM network. Billions of dollars are transferred over this network every day, and large-scale fraud is remarkably rare. There is plenty of small-scale fraud, but it rarely affects the bottom line of the banks; for most banks the annual cost of vandalism exceeds their estimated losses to fraud. This can be credited to a wide array of controls that mesh to make fraud difficult, including cryptography over the network, tamper-resistant hardware devices, a variety of procedural controls at the bank, and a strong audit regimen. The big metal case around the ATM itself isn't actually that important; while an ATM is expensive, the loss of one or two is tolerable. What banks worry about is large-scale card copying and PIN theft, the kind of case where dozens or hundreds of customer accounts are cleaned out. Not only is that a substantial monetary cost, but the publicity causes problems. When large-scale fraud has occurred (such as in the UK in the 1980s), it was usually covered up by the banks and the police. Those failures were later traced directly to the banks' overconfidence in the security of their (completely insecure) systems.
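
As a taste of the "cryptography over the network" piece of that picture, the following sketch constructs an ISO 9564 format-0 PIN block, the cleartext structure that is encrypted under a PIN key inside tamper-resistant hardware before a PIN ever crosses the network. Only the block construction is shown; the encryption, key management, and procedural controls around it are where most of the real engineering lives. The card data below is made up:

    # ISO 9564 format-0 PIN block: the PIN is bound to the card's PAN
    # before encryption, so a captured block can't simply be replayed
    # against a different account. Encryption and key management are
    # deliberately omitted from this sketch.

    def iso0_pin_block(pin: str, pan: str) -> bytes:
        """XOR a padded PIN field with a PAN-derived field (ISO 9564 format 0)."""
        assert 4 <= len(pin) <= 12 and pin.isdigit()
        # PIN field: format nibble 0, PIN length, PIN digits, 'F' padding.
        pin_field = f"0{len(pin):X}{pin}".ljust(16, "F")
        # PAN field: four zeros, then the rightmost 12 PAN digits
        # excluding the final check digit.
        pan_field = "0000" + pan[:-1][-12:]
        return bytes(a ^ b for a, b in zip(bytes.fromhex(pin_field),
                                           bytes.fromhex(pan_field)))

    # Example with made-up card data:
    block = iso0_pin_block("1234", "4000001234567899")
    print(block.hex().upper())   # 0412 34FE DCBA 9876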

It is a common argument that software engineering should not be called engineering, and I tend to agree. However, I am willing to argue that security engineering is, at its best, a true engineering discipline. Just as a civil engineer tries to design and build a bridge that will not fall down, a security engineer tries to build a system that will not fail. In neither case do you attempt to build a system that is guaranteed not to fail: such a bridge would be enormous and very expensive, and similarly a perfectly secure system cannot be built at reasonable cost. An additional problem facing the security engineer is that most of the problems come from malicious attackers; it is like trying to build a bridge that is going to get bombed on a weekly basis.

Most of the current knowledge in security engineering is contained only in policy and standards documents and academic papers. One good introductory book is Ross Anderson's "Security Engineering: A Guide to Building Dependable Distributed Systems".