A magical myth, like unicorns.

Actually, my theory is that every computer programmer is allowed one of these a year. Most of the time it gets wasted on some Hello world or other test program, but the look on someone's face when they see their 100,000 line program compile on the first try is priceless. Of course, then they go insane trying to figure out what null string the compiler accidentally got pointed to...

I think programs do sometimes compile on the first try (not often, perhaps, but more than once a year per programmer). The key is the difference between "compiling" and "working" (as pointed out by RimRod at the end of the writeup). I wonder how many 100,000-liners WORK on the first try...

Strangely enough, back in school when I was forced to take Cobol, it was virtually impossible to make mistakes in that language. Every single program I wrote in it compiled and ran properly the first time.

Of course, I have never used the language since. Not that I dislike programs that compile and run the first time; it is just that I could never write anything useful in Cobol (which is not to say useful programs cannot be written in it, only that the kind of programs I write cannot be).

It is actually an enlightening experience that programs tend not to compile the first time (or the second, or the third...). The computer does not care about hurting your ego; it reminds you that you have one and keeps you humble. And that is enlightenment.

Incidentally, I have been programming since 1965, and I still get compile errors. So don't feel too bad when you do. ;)

At least I have developed a method to minimize them, and to make the problem easy to find when it occurs. The trick I use is coding in tiny increments: write a line or two of code (or maybe ten when programming in assembly language), then compile. Write another line or two, compile. And so on. When a problem occurs, I know exactly which line or two to look at.

This technique would have been impossible ten years ago, when compilation easily took half an hour. But nowadays it takes seconds. Any time "wasted" on extra compilations is a small price compared to tearing your hair out trying to figure out which of ten thousand lines of code is (or are) wrong.
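To make that style concrete, here is a minimal Haskell sketch (the module and function names are invented for the example, not taken from any real project): each definition gets its type signature first, with a stub body if necessary, so the file compiles after every small addition.

```haskell
module Stats (mean, variance) where

-- Increment 1: one small, complete definition; compile.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Increment 2: add the next signature with a stub body so the file
-- still compiles; fill it in a line or two at a time, recompiling
-- after each change.
variance :: [Double] -> Double
variance = undefined
```

The point of the stub is that a compile error after any given step can only come from the handful of lines added in that step.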

At the other end of the scale are functional languages with a strong type system, like Haskell. In my programming languages class at university, we wrote precisely one significant program in Haskell -- an interpreter for a simple Scheme-like language (with proper static variable binding, of course).

I worked out a beautiful design for the program, then wrote the program itself. Trying to compile it resulted in several hundred lines of errors. It turns out that Haskell is whitespace-sensitive in ways I hadn't realised (à la Python). So I fixed that.
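For anyone who hasn't been bitten by it: Haskell's layout rule uses indentation to delimit blocks, much as Python does. A contrived example of the kind of mistake I mean (not the actual assignment code):

```haskell
module Average where

-- Rejected: the two 'where' bindings start in different columns, so the
-- layout algorithm closes the block after the first one and the second
-- line becomes a parse error:
--
--   average xs = total / count
--     where total = sum xs
--         count = fromIntegral (length xs)
--
-- Accepted: bindings in the same block line up.
average :: [Double] -> Double
average xs = total / count
  where
    total = sum xs
    count = fromIntegral (length xs)
```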

Then it didn't compile for less frivolous reasons: two type errors (the compiler was unable to find types for the expressions) which corresponded to two real bugs. Fixing the bugs entailed rewriting some of the code.
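To show what I mean by a type error corresponding to a real bug, here is a contrived sketch with simplified types (it is not the assignment code): returning the environment where a Value is expected is a genuine logic error, and the compiler rejects it at exactly that line.

```haskell
module Interp where

-- Simplified types for a Scheme-like interpreter.
type Env = [(String, Value)]

data Value = Num Integer | Closure [String] Expr Env
data Expr  = Lit Integer | Var String | Lambda [String] Expr

eval :: Env -> Expr -> Value
eval _   (Lit n) = Num n
eval env (Var x) =
  case lookup x env of
    Just v  -> v
    Nothing -> error ("unbound variable: " ++ x)
-- A buggy version that returned 'env' here instead of building a
-- Closure would be rejected, because an Env is not a Value -- and
-- that rejection is precisely the bug report.
eval env (Lambda params body) = Closure params body env
```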

The rewritten code still didn't compile, but for idiotic reasons. Fixing those resulted in a compilable program. Unimpressed, I ran it.

It worked the first time, except for a silly bug in the connection between symbol names and builtins. I fixed that, tested two examples, and submitted.
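For illustration only (these names are invented, not the assignment code), the shape of such a bug: the table tying surface symbol names to builtin implementations is perfectly well typed, so a misspelled or swapped name can only show up when a test program runs.

```haskell
module Builtins where

data Value = Num Integer | List [Value]
  deriving Show

-- Builtin implementations; they all share one type, so the compiler
-- cannot tell whether the table below wires them to the right names.
numAdd, listHead :: [Value] -> Value
numAdd vs             = Num (sum [n | Num n <- vs])
listHead [List (v:_)] = v
listHead _            = error "car: expected a non-empty list"

-- A typo here ("cra" instead of "car", say) or two swapped entries
-- still compiles; only running the interpreter reveals it.
builtins :: [(String, [Value] -> Value)]
builtins =
  [ ("+",   numAdd)
  , ("car", listHead)
  ]
```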

Almost all the errors had been compile-time errors, not run-time ones. They were unambiguous (since they didn't depend on the particular input of a run) and very clear. Evidently strong typing, lazy evaluation, and functional programming rulez!
