Of course,
there is very little to stop someone from writing an interpreter for a compiled
language or a compiler for an interpreted language,
although there are
some fundamental differences in a language's architecture that
make one approach or the other more natural.
The purpose of this
writeup is not to categorize languages
as compiled or interpreted, but to explain why you might
prefer to compile or interpret a program written in such a language.
There are many reasons why interpreting a typically
compiled language may not be practical.
For example, compiled languages typically do strong type
checking: variables are declared, and their
types are known by the time
the program is compiled, which guarantees there will be no runtime type mismatch errors.
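To see the contrast from the other side, here is a minimal Python sketch (the function name is my own, just for illustration): a dynamically checked interpreter only discovers a type mismatch when the offending line actually executes, whereas a statically typed compiler would have rejected the whole program up front.

```python
# In a dynamically checked language, a type error hides until
# execution reaches it; a static compiler would reject this program
# before it ever ran.

def add_lengths(a, b):
    # Nothing declares what types a and b must be; the mismatch
    # below is only detected when this line actually runs.
    return len(a) + len(b)

print(add_lengths("hello", [1, 2, 3]))  # works: 5 + 3 = 8

try:
    add_lengths("hello", 42)            # only now does it blow up
except TypeError as e:
    print("caught at runtime:", e)
```

The first call succeeds and the second fails, even though both contain the "same" code; a compiler doing strong type checking would never have let either reach runtime unchecked.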
Also, there may be many forward
references in a compiled program that can't be easily
resolved without actually compiling it, or other sanity checks that would be too expensive to do at runtime.
Interpreters of these languages would either have to
partially compile the source, skip the checking,
or suffer a huge performance penalty.
Compilers also typically perform optimization and other work that isn't practical at runtime.
Variable and
function names between modules may need to be resolved
and linked to make a complete executable.
These and other
differences don't make it impossible to interpret a
compiled program, but they may make it impractical,
or at least, very slow. Anyway, you could always
compile to bytecode and interpret the bytecode.
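CPython itself takes exactly this middle road: the source is compiled to bytecode first, and the interpreter's virtual machine then executes that bytecode. A small sketch using the standard `dis` and built-in `compile`/`exec` machinery:

```python
import dis

# CPython compiles source text to a code object (bytecode) first;
# its virtual machine then interprets that bytecode.
code = compile("x = 2 + 3", "<example>", "exec")

dis.dis(code)                # show the bytecode instructions

namespace = {}
exec(code, namespace)        # interpret the compiled bytecode
print(namespace["x"])        # -> 5
```

The `dis` output even shows the compiler doing a small optimization: the constant expression `2 + 3` is folded to `5` at compile time, before the interpreter ever sees it.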
There are many reasons why a typically interpreted
language would be impractical to compile.
Interpreted languages are typically dynamically (and often weakly) typed.
Variables can change type on a whim, if the language
even supports different types at all. Variables may come
into existence just by using them. Linking of names
between modules may change dynamically at runtime. The program might generate strings and feed them back
to the interpreter for evaluation. Sections of
code may even be replaced or added as the program runs.
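Python demonstrates several of these features in just a few lines (a sketch of my own, not from the writeup): variables spring into existence on first assignment, program text can be generated at runtime and fed back to the interpreter, and functions can be swapped out while the program runs.

```python
# Variables come into existence just by using them, code can be
# built as a string and handed back to the interpreter, and
# functions can be replaced as the program runs.

source = "y = 10 * 4"          # program text built at runtime
exec(source)                   # fed back to the interpreter
print(y)                       # y now exists: 40

def greet():
    return "hello"

print(greet())                 # -> hello

# Replace the function at runtime; a static compiler has no way to
# know what greet() will mean by the time this line executes.
greet = eval("lambda: 'goodbye'")
print(greet())                 # -> goodbye
```

A compiler for such a language has little choice but to ship the interpreter (or the compiler itself) inside the executable to handle `exec` and `eval`, which is exactly the point made below.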
It may be possible to write a compiler for an interpreted
language, but to do so, you just about have to include
the whole compiler (or interpreter) in the executable.
Sometimes the instructions for the interpreter are so simple
that the overhead of compiling them just isn't worth the
small optimizations gained from compiling.
Anyway, you could always interpret compiled bytecode.
This, of course, describes the extremes. (There are
always exceptions to extremes.) I'm sure there
are many languages that fall close enough to center
that they can be equally well (or poorly) compiled or
interpreted. Some programs in extreme languages could easily be compiled, while others in the same language are nearly impossible to compile, depending on the
feature set they use. In languages with both compilers
and interpreters available, you might want to use one or
the other depending on what you are doing.
For a list of bytecompiled vs.
script languages, check out interpreted language.
Either way, it's still programming or coding or whatever
you wanna call it.
Note also that compiling doesn't necessarily generate
object code or an executable; it may just emit bytecode or
even source code in a different language. If you doubt this,
note that the first portable Unix C compiler emitted assembly
code, which was then fed to the assembler. GCC still does this,
although many compilers today do produce machine code directly.
CFront (mentioned above) really just used the local C compiler as a glorified assembler, as if C were no more than a portable assembly language.