As I like to say, "security through obscurity is no security at all."

There is no way to measure the number of security holes in existence that are protected by nothing more than an algorithm known only to the author. One sure fact is that hackers *do* decipher some of these messes and find the exploits, for good or for evil.

All public key cryptography systems that I am familiar with also rely on some form of obscurity. Everyone assumes that recovering a message from its ciphertext and public key is computationally infeasible. As far as I know, no public key system has been *proven* to be free of a "back door."

Consider RSA as an example. Its so-called strength comes from the difficulty of factoring large numbers, specifically the public key n = p*q, where p and q are prime. But it has some *known* weaknesses for otherwise valid values of p and q: p and q must not be too close to each other, and (p-1), (p+1), (q-1), and (q+1) must not be products of small primes. RSA key generators check for these cases before issuing a key pair. But nobody knows what other weaknesses may exist, hidden in the obscurity of mathematics! An algorithm to quickly factor any p*q may exist, but humans did not create mathematics and cannot search its blueprints for a hole. Quantum computers promise to crush RSA by fundamentally changing the complexity of factoring the product of two primes.
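The "products of small primes" weakness is concrete enough to demonstrate. Pollard's p-1 method quickly factors n = p*q whenever p-1 is built entirely from small primes, which is exactly why key generators reject such primes. Here is a minimal sketch in Python; the deliberately weak prime p = 2311 and the iteration bound are my own illustrative choices, not from any real key generator:

```python
import math

def pollard_p_minus_1(n, bound=1000):
    """Try to find a factor of n, exploiting any prime factor p
    of n for which p-1 is a product of small primes."""
    a = 2
    for k in range(2, bound):
        # After this step, a == 2**(k!) mod n.
        a = pow(a, k, n)
        # If p-1 divides k!, then a == 1 (mod p) by Fermat's
        # little theorem, so gcd(a-1, n) exposes p.
        d = math.gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None  # no factor found within the bound

# p = 2311 is prime but weak: p-1 = 2310 = 2*3*5*7*11.
# q = 104729 is an ordinary prime.
n = 2311 * 104729
print(pollard_p_minus_1(n))  # recovers the weak factor 2311
```

A real RSA modulus is hundreds of digits long, but the same attack scales to large n whenever p-1 is smooth, which is why a single divisibility property of p-1 can undo the whole key.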

Smart cards have me worried. They rely on public key cryptography to protect real money. Banks are willing to trust cryptography to store value in the real world. If a security hole is found, who knows what will happen? And holes *will* be discovered. But will they be publicized or kept in obscurity? What would you do if you were working at the IBM lab that finally built a working quantum computer and could bring down an entire monetary infrastructure?

Maybe sometimes security through obscurity is the best we can hope for.