In software security, as in many other fields, it is important to distinguish causes from effects. The execution of certain actions can be controlled (allowed or disallowed) in an attempt to restrict their effects. Allowed actions are called permissions. Different parts of a program (subjects) can hold different permissions, but no important effect should be reachable without a permission.

The set of effects a subject can cause is called its authority. A program is unsafe if its subjects can cause unwanted effects; otherwise it is safe. The principle of least authority states that, ideally, no subject should have authority beyond what is necessary to do its part of the work.
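
To make the principle of least authority a bit more concrete, here is a minimal sketch in TypeScript (the names, such as FileSystem, reportProgress and the log path, are made up for illustration and not taken from the text): a subject that only needs to append to a log is handed a single-purpose function rather than the whole file-system object, so its authority stays as small as its job.

    // Illustrative sketch of the principle of least authority.
    interface FileSystem {
      appendLine(path: string, line: string): void;
      deleteFile(path: string): void;
    }

    // Instead of handing reportProgress the whole FileSystem (far more
    // authority than it needs), we hand it a single-purpose function.
    function reportProgress(appendLog: (line: string) => void, step: string): void {
      appendLog(`finished: ${step}`);
    }

    function run(fs: FileSystem): void {
      // Attenuate: the only effect reportProgress can cause is appending
      // to this one file; it cannot delete anything.
      const appendLog = (line: string) => fs.appendLine("/var/log/app.log", line);
      reportProgress(appendLog, "initialisation");
    }

    // A stand-in FileSystem, just so the sketch can actually run.
    const memoryFs: FileSystem = {
      appendLine: (path, line) => console.log(`${path} += ${line}`),
      deleteFile: (path) => console.log(`deleted ${path}`),
    };
    run(memoryFs);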

Two important aspects differentiate authority from permissions:

  1. To have authority, a subject not only needs a permission, but also the ability to exert that permission. For instance, you may very well be allowed to ask me for the money you lent me earlier, but you may not have the means to do so if you cannot find me. (*)
  2. You can have authority via a chain of permissions, some of which may belong to others. For instance, you may instead have permission (and be able) to command Mr. X, who knows where I live and who has permission to persuade me to return the money. That way you can cause an effect without holding a permission that directly causes it. At work this is called a chain of command, but programmers usually call it "forwarding" or "proxying" or something similar (a small sketch in code follows below this list).

On top of all that, you may also be able to get permissions you don't have by exerting other permissions. This means that authority can get you permissions, which in turn can get you more authority, and so on.
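
As a rough illustration of such a chain of permissions, here is a TypeScript sketch (Account, makeForwarder and the other names are invented for this example): you never hold a reference to my account, yet you gain authority over it by commanding a forwarder that does.

    // "Forwarding": Alice has no direct permission on the account, but she
    // holds a reference to a forwarder (Mr. X) that does.
    interface Account {
      withdraw(amount: number): number;
    }

    function makeAccount(initialBalance: number): Account {
      let balance = initialBalance;
      return {
        withdraw(amount: number): number {
          if (amount > balance) throw new Error("insufficient funds");
          balance -= amount;
          return amount;
        },
      };
    }

    // Mr. X holds the permission (a reference to the account) and exerts it
    // on behalf of whoever can command him.
    function makeForwarder(account: Account) {
      return {
        collect(amount: number): number {
          return account.withdraw(amount); // the permission is exerted here
        },
      };
    }

    const myAccount = makeAccount(100);   // I hold the direct permission
    const mrX = makeForwarder(myAccount); // Mr. X can reach my account

    // You never receive a reference to myAccount, yet you have authority
    // over it through the chain you -> mrX -> myAccount.
    const repaid = mrX.collect(50);
    console.log(repaid); // 50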

Static Safety Analysis

Static safety analysis is the art and science of predicting that a program is safe, basically by closely examining its intestines (its code). Meteorologists, evolutionary biologists, and economists know that it can be very hard to predict the precise relationship between causes and effects, and information scientists have found this out too. To make things a bit easier, we simply approximate the authority from the safe side. That means: if you're not sure whether some unwanted effect can happen, just assume that it can, and declare the whole program unsafe.

Doing so may not be very accurate, though. It's a bit like a meteorologist predicting rain every day to avoid being accused of optimism.
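
One way to picture this safe-side approximation, assuming (purely for this sketch) that permissions can be modelled as a set plus a "may grant" relation, is to take the transitive closure of everything a subject might ever reach and declare the program unsafe if any unwanted effect falls inside that upper bound. The data structures below are invented for illustration, not a real analysis tool.

    type Permission = string;

    // Upper bound on authority: start from the directly held permissions and
    // add everything they *may* grant, transitively. Anything we are unsure
    // about goes into the "may grant" relation, so the bound stays safe.
    function authorityUpperBound(
      directPermissions: Set<Permission>,
      mayGrant: Map<Permission, Set<Permission>>,
    ): Set<Permission> {
      const reachable = new Set(directPermissions);
      const worklist = [...directPermissions];
      while (worklist.length > 0) {
        const p = worklist.pop()!;
        for (const q of mayGrant.get(p) ?? []) {
          if (!reachable.has(q)) {
            reachable.add(q);
            worklist.push(q);
          }
        }
      }
      return reachable;
    }

    // Declare the program unsafe if any unwanted effect lies within the bound,
    // even if at run time it would never actually be reached.
    function isSafe(bound: Set<Permission>, unwanted: Set<Permission>): boolean {
      return [...unwanted].every((p) => !bound.has(p));
    }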

Relying on behaviour to improve the accuracy of safety analysis

To improve the accuracy of safety analysis, you can rely on restrictions in the behaviour of some of the subjects. For instance, you may rely on the fact that Mr. X, after having emptied my pockets, will not spend all the money himself. In computer science this can make sense: having programmed Mr. X yourself may be a good reason to rely on his restricted behaviour. It can assure you that lending money to me is safe. (**)
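
As a sketch of how a known behaviour restriction tightens the analysis, assume (purely for illustration) that Mr. X is the makeRestrictedCollector below, written by you: because his code can only ever withdraw up to the amount owed, an analysis that reads this code can bound your exposure by amountOwed instead of assuming the worst.

    interface Account {
      withdraw(amount: number): number;
    }

    // A subject whose behaviour is restricted by construction: it can recover
    // at most the amount owed, and can do nothing else with the account.
    function makeRestrictedCollector(account: Account, amountOwed: number) {
      let remaining = amountOwed;
      return {
        collect(amount: number): number {
          const take = Math.min(amount, remaining);
          remaining -= take;
          return account.withdraw(take);
        },
      };
    }

    // Example: a stand-in account that just reports what was withdrawn.
    // Being owed 40 exposes at most 40, no matter what is asked for.
    const account = { withdraw: (amount: number) => amount };
    const mrX = makeRestrictedCollector(account, 40);
    console.log(mrX.collect(100)); // 40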


(*) Some programming languages, such as C++, make it possible for a subject to find any other subject that exists in the same memory space by guessing its memory address. This can leave programs vulnerable to the confused deputy attack, described elsewhere.

(**) Warning: Do not try this at home! In real life, relying on someone else's behaviour restrictions is dangerous and should not be attempted without proper insurance!


References

Mark Miller's Ph.D. dissertation, "Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control", can be found at:
http://www.erights.org/talks/thesis/index.html
