When applied by itself, security through obscurity rarely works. However, it can be effective when applied in conjunction with normal security procedures. If you don't tell anyone about that login port, folks are less likely to try it. But you'd better have a password on it for the fool who happens upon it. See Making things foolproof results in better fools.

As I like to say, "security through obscurity is no security at all."

There is no way to count the security holes in existence that are protected by nothing more than an algorithm known only to the author. One thing is certain: hackers do decipher some of these messes and find the exploits, for good or evil.

All public key cryptography systems that I am familiar with also rely on some form of obscurity. Everyone assumes that recovering a message from its ciphertext and public key is computationally infeasible. As far as I know, no public key system has been proven free of a "back-door."

Consider RSA as an example. Its so-called strength comes from the difficulty of factoring large numbers, specifically the public modulus n=p*q, where p and q are prime. But it has known weaknesses for otherwise valid values of p and q: p and q must not be too close in value, and (p-1), (p+1), (q-1), and (q+1) must not be products of only small primes. RSA key generators check for these cases before issuing a key pair. But nobody knows what other weaknesses may exist, hidden in the obscurity of mathematics! An algorithm to quickly factor any p*q may exist; humans did not create mathematics and cannot search its blueprints for a hole. Quantum computers promise to crush RSA outright: Shor's algorithm would factor n=p*q in polynomial time on a sufficiently large quantum machine.
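To make the "p and q must not be too close" weakness concrete, here is a minimal sketch (function names are my own, not from any RSA standard): Fermat's factoring method, which cracks n almost instantly when p and q are near each other, plus a toy smoothness check of the kind a key generator might run.

```python
from math import isqrt

def is_smooth(n, bound=10**5):
    """Return True if n factors entirely into primes <= bound.
    A key generator would reject p if p-1 or p+1 were smooth like this."""
    d = 2
    while d <= bound and d * d <= n:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1 or n <= bound

def fermat_factor(n, tries=100000):
    """Fermat's method: search for a, b with n = a^2 - b^2 = (a-b)(a+b).
    Very fast when the two prime factors are close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    for _ in range(tries):
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1
    return None

# Two primes that are far too close together: factoring is trivial.
p, q = 10007, 10009
print(fermat_factor(p * q))  # (10007, 10009)
```

Real RSA moduli are hundreds of digits long, but the same attack applies whenever |p - q| is small relative to n, which is why generators enforce a minimum gap.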

Smart cards have me worried. They rely on public key cryptography to protect real money: banks are willing to trust cryptography to store value in the real world. If a security hole is found, who knows what will happen? And holes will be discovered. But will they be publicized or kept in obscurity? What would you do if you were working at the IBM lab that finally creates a working implementation of a quantum computer and could bring down an entire monetary infrastructure?

Maybe sometimes security through obscurity is the best we can hope for.

thecap is confused. Security through obscurity refers to concealing the mechanism of security. That's it.

It's not just public key cryptosystems that aren't proven secure; neither are virtually all symmetric algorithms. But take RSA: it has been around for many years and is based on one of the oldest problems in mathematics (finding prime factors). I'd lay more money on RSA being secure than on Rijndael; but frankly, with the amount of research and study that goes into both, I don't believe an exploitable weakness will be found in either any time soon.

By contrast, security through obscurity is when you don't really have any security at all, but merely hide the weakness. A canonical example is naming your root account fred instead, with an easy-to-remember password. Or writing your own encryption algorithm, using it with no peer review, and thinking it safer than an open algorithm. A backdoor, in the context of security through obscurity, refers to an intentionally placed backdoor; you seem to be calling exploitable weaknesses in algorithms backdoors.
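The home-brew-cipher example is worth making concrete. Here is a hypothetical "secret" algorithm of the sort people actually write (a repeating-key XOR), and how a single known plaintext/ciphertext pair hands the attacker the key; the names and key are mine, for illustration only.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """A typical 'homemade' cipher: repeating-key XOR.
    The same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"fred"  # kept obscure -- the only thing protecting the message
msg = b"attack at dawn"
ct = xor_cipher(msg, secret_key)

# Known-plaintext attack: XORing ciphertext with a guessed plaintext
# prefix leaks the key directly, no cryptanalysis required.
known = b"attack"
recovered = bytes(c ^ p for c, p in zip(ct, known))[:len(secret_key)]
print(recovered)  # b'fred'
```

Once the mechanism is known, the obscurity is gone and nothing else is holding the door shut; a peer-reviewed algorithm, by contrast, is designed to stay secure even when the attacker knows everything except the key.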


security through obscurity

(alt. `security by obscurity') A term applied by hackers to most OS vendors' favorite way of coping with security holes -- namely, ignoring them, documenting neither any known holes nor the underlying security algorithms, trusting that nobody will find out about them and that people who do find out about them won't exploit them. This "strategy" never works for long and occasionally sets the world up for debacles like the RTM worm of 1988 (see Great Worm), but once the brief moments of panic created by such events subside most vendors are all too willing to turn over and go back to sleep. After all, actually fixing the bugs would siphon off the resources needed to implement the next user-interface frill on marketing's wish list -- and besides, if they started fixing security bugs customers might begin to expect it and imagine that their warranties of merchantability gave them some sort of right to a system with fewer holes in it than a shotgunned Swiss cheese, and then where would we be?

Historical note: There are conflicting stories about the origin of this term. It has been claimed that it was first used in the Usenet newsgroup comp.sys.apollo during a campaign to get HP/Apollo to fix security problems in its Unix-clone Aegis/DomainOS (they didn't change a thing). ITS fans, on the other hand, say it was coined years earlier in opposition to the incredibly paranoid Multics people down the hall, for whom security was everything. In the ITS culture it referred to (1) the fact that by the time a tourist figured out how to make trouble he'd generally gotten over the urge to make it, because he felt part of the community; and (2) (self-mockingly) the poor coverage of the documentation and obscurity of many commands. One instance of deliberate security through obscurity is recorded; the command to allow patching the running ITS system (escape escape control-R) echoed as $$^D. If you actually typed alt alt ^D, that set a flag that would prevent patching the system even if you later got it right.

--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.
