The previous answers already give most of the explanation, though mostly
from a pragmatic angle, and only insofar as the question makes sense at all, as excellently explained in Raphael's answer.
Adding to those answers, we should note that, nowadays, C compilers are
written in C. Of course, as Raphael notes, their output and its
performance may depend, among other things, on the CPU it runs
on. But it also depends on the amount of optimization done by the
compiler. If you write in C a better optimizing compiler for C
(which you then compile with the old one to be able to run it), you
get a new compiler that makes C a faster language than it was
before. So, what is the speed of C? Note that you can even compile the new
compiler with itself, as a second pass, so that it compiles more
efficiently, though still producing the same object code. And the full employment theorem shows that there is no end to such improvements (thanks to Raphael for the pointer).
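To make this concrete, here is a toy sketch in Python (the mini-languages and all names are invented for illustration, not a real compiler): a "source language" of arithmetic expressions compiled to stack-machine instructions, once by a naive compiler and once by a better optimizing one. Both emit code computing the same function, but the optimizer's output is shorter, so the "language" got faster without any source program changing.

```python
# Hypothetical source language S: expressions as nested tuples, e.g.
# ('add', ('num', 2), ('num', 3)).  Target language T: a list of
# stack-machine instructions.

def compile_naive(e):
    # Plain translation, no optimization at all.
    if e[0] == 'num':
        return [('push', e[1])]
    return compile_naive(e[1]) + compile_naive(e[2]) + [(e[0],)]

def compile_opt(e):
    # The "better optimizing compiler": same translation plus constant folding.
    if e[0] == 'num':
        return [('push', e[1])]
    left, right = compile_opt(e[1]), compile_opt(e[2])
    if len(left) == len(right) == 1 and left[0][0] == right[0][0] == 'push':
        a, b = left[0][1], right[0][1]
        return [('push', a + b if e[0] == 'add' else a * b)]
    return left + right + [(e[0],)]

def run(code):
    # A tiny stack machine standing in for the target CPU.
    stack = []
    for ins in code:
        if ins[0] == 'push':
            stack.append(ins[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if ins[0] == 'add' else a * b)
    return stack[-1]

e = ('mul', ('add', ('num', 2), ('num', 3)), ('num', 4))
assert run(compile_naive(e)) == run(compile_opt(e)) == 20  # same function computed
assert len(compile_opt(e)) < len(compile_naive(e))         # but shorter, "faster" code
```

Note that nothing in this sketch depends on how `compile_opt` itself was produced or how fast it runs; only its output matters for the speed of the generated code.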
But I think it may be worthwhile to try to formalize the issue, as it
illustrates very well some fundamental concepts, particularly the
denotational versus the operational view of things.
What is a compiler?
A compiler $C_{S\to T}$, abbreviated to $C$ if there is no ambiguity,
is a realization of a computable function $\mathcal C_{S\to T}$ that
translates a program text $P_{:S}$, written in a source language $S$
and computing a function $\mathcal P$, into a program text $P_{:T}$, written in a
target language $T$, that is supposed to compute the same function $\mathcal P$.
From a semantic point of view, i.e. denotationally, it does not
matter how this compiling function $\mathcal C_{S\to T}$ is computed, i.e., what realization $C_{S\to T}$ is
chosen. It could even be done by a magic oracle. Mathematically, the
function is simply a set of pairs of programs $\{(P_{:S},P_{:T})\mid P_{:S}\in S \wedge
P_{:T}\in T\}$, with one pair for each source program.
The semantic compiling function $\mathcal C_{S\to T}$ is correct if
both $P_{:S}$ and $P_{:T}$ compute the same function $\mathcal P$. But this formalization
applies as well to an incorrect compiler. The only point is that
whatever is implemented achieves the same result independently of the
means of implementation. What matters
semantically is what is done by the compiler, not how (and how fast) it is done.
Actually getting $P_{:T}$ from $P_{:S}$ is an operational issue that must
be solved. This is why the compiling function $\mathcal C_{S\to T}$ must be a computable function.
Hence any language with Turing power, no matter how slow, is sure to be able to produce code as efficient as that of any other language, even if it may do so less quickly.
Refining the argument, we probably want the compiler to have good
efficiency, so that the translation can be performed in reasonable
time. So the performance of the compiler program matters to users, but it has
no impact on semantics. I say performance, because the theoretical
complexity of some compilers can be much higher than one would expect.
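The point that the compiler's own speed is operationally relevant but semantically irrelevant can be sketched with a toy in Python (the mini-language and all names are hypothetical): two realizations of the same compiling function, one direct and one deliberately wasteful. They differ greatly in how long they take, but they emit byte-identical target programs, so the generated code's performance is untouched.

```python
# Hypothetical mini source language S: a string of words, "inc" (add one)
# or "dbl" (double).  Target language T: Python source text.

def compile_fast(src):
    # Direct realization: each word becomes one Python statement.
    body = "".join("    x += 1\n" if w == "inc" else "    x *= 2\n"
                   for w in src.split())
    return "def f(x):\n" + body + "    return x\n"

def compile_slow(src):
    # A second realization of the *same* compiling function, deliberately
    # wasteful: it builds the identical text one character at a time,
    # with quadratic string concatenation.
    out = ""
    for ch in "def f(x):\n":
        out = out + ch
    for w in src.split():
        for ch in ("    x += 1\n" if w == "inc" else "    x *= 2\n"):
            out = out + ch
    for ch in "    return x\n":
        out = out + ch
    return out

src = "inc inc dbl"
assert compile_fast(src) == compile_slow(src)   # byte-identical target programs

env = {}
exec(compile_fast(src), env)                    # run the target program
assert env["f"](3) == 10                        # (3 + 1 + 1) * 2
```

Denotationally the two compilers are the same set of pairs; operationally, only one of them is a compiler you would want to use.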
About bootstrapping
This will illustrate the distinction, and show a practical application.
It is now commonplace to first implement a language $S$ with an
interpreter $I_S$, and then write a compiler $C_{S\to T\,:S}$ in the
language $S$ itself. This compiler $C_{S\to T\,:S}$ can be run with the
interpreter $I_S$ to translate any program $P_{:S}$ into a program $P_{:T}$.
So we do have a running compiler from language $S$ to (machine?)
language $T$, but it is very slow, if only because it runs on top of
an interpreter.
But you can use this compiling facility to compile the compiler
$C_{S\to T\,:S}$, since it is written in language $S$, and thus you get a
compiler $C_{S\to T\,:T}$ written in the target language $T$. If you assume, as is often
the case, that $T$ is a language that is more efficiently interpreted
(machine native, for example), then you get a faster version of your
compiler running directly in language $T$. It does exactly the same
job (i.e. produces the same target programs), but it does it more
efficiently.
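The bootstrap story can be caricatured in a few lines of Python (everything here is a made-up toy, not a real compiler): the compiler first exists only as source text that is re-interpreted on every use; "compiling the compiler" (here, Python's builtin `compile()` to bytecode stands in for the $S \to T$ step) yields a faster-running version that emits exactly the same target programs.

```python
# Hypothetical source language S: steps such as "add 3; mul 2".
# Target language T: Python lambda expressions.
# The compiler exists first only as *source text*:
COMPILER_SRC = '''
def compile_s(src):
    expr = "x"
    for step in src.split(";"):
        op, n = step.split()
        expr = f"({expr} {'+' if op == 'add' else '*'} {n})"
    return f"lambda x: {expr}"
'''

def compile_interpreted(src):
    # Stage 1: run the compiler "under an interpreter": its text is
    # re-processed on every single call.
    env = {}
    exec(COMPILER_SRC, env)
    return env["compile_s"](src)

# Stage 2: "compile the compiler" once (Python's builtin compile()
# standing in for the bootstrap into T) and reuse the result.
env = {}
exec(compile(COMPILER_SRC, "<compiler>", "exec"), env)
compile_native = env["compile_s"]

prog = "add 3; mul 2"
t1 = compile_interpreted(prog)
t2 = compile_native(prog)
assert t1 == t2            # both stages emit the same target program
assert eval(t2)(5) == 16   # (5 + 3) * 2
```

The compiled compiler does exactly the same job, pair for pair; only the speed at which it does that job has changed, which is the whole point of bootstrapping.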
@babou Your comment is cleverly worded but it obfuscates the real intent behind the question. Julia also has a runtime written in C which makes the question more like "How can a human running on foot and equipped with some high-tech device move faster than a human?" That isn't necessarily impossible but it's a significant engineering challenge. If the question is strictly about compilers, then of course it's easy to imagine hypothetical compilers written in Python/Ruby/Smalltalk/VBA that can compile highly optimized native code. – Praxeolitic – 2017-12-30T04:51:48.383
@Praxeolitic Why believe that the real intent is what you try to make of it? It does not seem well expressed in your example either. Furthermore, there is no reason to believe that the would-be concept of programming language speed has much meaning, except possibly when considering specific implementations. But the real point here is that, when you have a language with Turing power, you have as much power as you will ever get to compute whatever you need (if computable at all), and in particular to implement a compiler generating code with any level of performance achievable by whatever means. – babou – 2017-12-30T15:47:15.980
@Praxeolitic My initial comment takes, on a trivial example, an even more abstract point of view: can a device build another device better than itself at some given task? Note that neither the task nor the meaning of "better" need be specified. And the answer is that all that matters is the building ability of the building device, independently of its own ability at the given task. This is also the point of view I am taking more formally in my answer below, restricted to compilers. – babou – 2017-12-30T16:18:33.670
@babou What I'm saying is when a question shows a mismatch between intent and literal wording, it's better to clarify the misunderstanding, answer the intended question, and address the literal question if it makes any sense. I suppose it takes inferring, but it was clear that OP's question was "This implies doing A and then B is faster than just doing B. How is that possible?" and the appropriate answer is "Actually, that isn't implied." The reason for my comment is that I feel SO gives too many hyper-literal answers in general. Those answers have good info but they're often not helpful. – Praxeolitic – 2018-01-01T18:51:36.067
@Praxeolitic Your point is well taken, and I can say that my habit is generally to try to guess the real question hiding behind the miswording. But it is often difficult, and, unlike you, I do not see any such situation here. Well it is clearly a naive question, but not obvious in what way the user is being naive. He might actually ask what I answered to, or he might not even understand properly the differences between a compiler and an interpreter. The given answers actually pretty much assume the same. – babou – 2018-01-01T21:48:54.303
The language in which the compiler is written influences the speed of code generation, but not the speed of the generated code. (I assume the compiler is written in a language conceptually capable of any computing task.) – PJTraill – 2018-02-05T12:31:26.577
It can also simply indicate that the author of the C benchmark program was a lamer. This idea is also supported by the fact that even the Java Mandelbrot benchmark program significantly outperforms the C version (0.68 duration). And in general Java can't execute things faster than C, because of JVM overhead (even with a JIT compiler) and GC overhead. So this clearly shows that there must be some serious design/code flaw in the C version of Mandel. Other things being equal, C wins. Only Fortran can be a competitor, because it is an ancestor of C and is optimized for numeric computations – Agnius Vasiliauskas – 2019-02-19T12:38:30.037
You could write a compiler for C on a steam-powered Turing machine. It would take ages to compile but the code it produced would run on the target machine at the speed of the target machine. – chasly - supports Monica – 2020-06-22T07:48:58.247
Closely related question. – Raphael – 2015-08-22T20:48:42.243
Also, be aware that such microbenchmarks are not very representative. – Raphael – 2015-08-23T09:57:36.650
How can a car, which is a man-made object, move faster than a man? – babou – 2015-08-23T10:25:18.167
According to the table, Python is slower than C. Do you think that it is impossible to write a C compiler in Python that generates the same code as your favourite C compiler? And what language is that written in anyway? – Carsten S – 2015-08-23T16:29:02.630
babou's comment was spot-on, but I don't think we need multiple versions of the same. – Raphael – 2015-08-25T14:07:00.387
A related thought. Many compilers are self-hosting, meaning they're written in their own language (often plus some assembly) and compiled with the previous version of themselves. Yet compilers get better and faster. Mind blown. – Schwern – 2015-08-25T21:21:23.493
@Schwern It's not too difficult to see how that might work. Write the new compiler and compile it using the old compiler. We end up with a better, yet slower, compiler. Then compile the new compiler again using itself. Now we have a better, faster compiler. – John Gowers – 2015-08-29T15:44:03.207
@Raphael Especially given that babou's comment is already a "multiple version" of my answer! – David Richerby – 2015-08-29T22:02:35.317
Mandel is the algorithm listed that is most often outperformed by languages that aren't C. That seems to hint to me that the C implementation wasn't as optimal as the other languages'. – gil_bz – 2015-09-11T15:21:28.217
Please google "Compiler" "Assembler" and "Interpreter". Maybe your question makes more sense if you relate it to interpreter instead of compiler. – Micka – 2015-09-11T16:14:06.623
I would like to raise a comparison between C++ and Javascript – Bakudan – 2015-09-13T15:32:34.360
It's really complicated to compare the execution speed of two languages. – Celeritas – 2015-10-15T08:27:27.630