I did not have the time to read all answers in detail ... but I was amused.
There was a similar controversy in the sixties and early seventies (computer science history often repeats itself): can high-level languages be compiled to produce code as efficient as the machine code (well, say assembly code) produced manually by a programmer? Everyone knew that a programmer is much smarter than any program and can come up with very smart optimisations (thinking mostly of what is now called peephole optimization). This is of course irony on my part.
There was even a concept of code expansion: the ratio of the size of the code produced by a compiler to the size of the code a good programmer would produce for the same program
(as if there had been too many of these :-). Of course the idea was that this ratio was always greater than 1. The languages of the time were Cobol and Fortran IV, or Algol 60 for the intellectuals. I believe Lisp was not considered.
Well, there were some rumors that someone had produced a compiler that could sometimes reach an expansion ratio of 1 ... until it simply became the rule that compiled code was much better than hand-written code (and more reliable too).
People were worried about code size in those times (small memories), but the same goes for speed or energy consumption. I will not go into the reasons.
Weird features, dynamic features of a language do not matter in themselves. What matters is how they are used, and whether they are used at all. Performance, in whatever unit (code size, speed, energy, ...), often depends on very small parts of programs.
Hence there is a good chance that facilities that give expressive power will not really get in the way. With good programming practice, advanced facilities are used only in a disciplined way, to devise new structures (that was the Lisp lesson).
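As a minimal sketch of this point (in Python; the function names are my own invention, not from any particular program), a dynamic feature such as looking up an implementation by name can be confined to configuration time, while the performance-critical loop stays plain straight-line code:

```python
# The dynamic feature (a run-time dictionary lookup of which kernel
# to use) happens exactly once, at setup. The hot loop inside each
# kernel contains no dynamism at all, so the cost of the dynamic
# facility is paid outside the small part of the program that
# dominates performance.

def sum_squares(data):
    total = 0
    for x in data:          # hot loop: plain arithmetic only
        total += x * x
    return total

def sum_cubes(data):
    total = 0
    for x in data:          # hot loop: plain arithmetic only
        total += x * x * x
    return total

KERNELS = {"squares": sum_squares, "cubes": sum_cubes}

def run(kernel_name, data):
    kernel = KERNELS[kernel_name]   # dynamic lookup: once per run
    return kernel(data)

print(run("squares", [1, 2, 3]))  # 14
```

The design choice is the point: the expressive facility (dispatch by name) buys flexibility where it is cheap, and is simply absent from the code that actually runs millions of times.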
The fact that a language does not have static typing has never meant that programs written in that language are not statically typed. On the other hand, it might be that the type system a program uses is not yet sufficiently formalized for a type checker to exist.
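To illustrate (a sketch in Python, whose annotation syntax of course postdates this history): a program in a dynamically typed language can still follow a strict typing discipline, one that an external checker such as mypy can verify even though the runtime never enforces it.

```python
# Python checks nothing about these types at run time, yet the
# program is statically typed in practice: every function declares
# what it accepts and returns, and a separate tool (e.g. mypy) can
# verify the whole program without running it.

def mean(values: list[float]) -> float:
    return sum(values) / len(values)

def normalize(values: list[float]) -> list[float]:
    m = mean(values)
    return [v - m for v in values]

print(normalize([1.0, 2.0, 3.0]))  # [-1.0, 0.0, 1.0]
```

The discipline lives in the program, not in the language; the checker is optional machinery bolted on afterwards, which is exactly the situation described above.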
There have been, in the discussion, several references to worst-case analysis ("halting problem", Perl parsing). But worst-case analysis is mostly irrelevant. What matters is what happens in most cases, or in useful cases ... however defined, understood, or experienced. Here comes another story, directly related to program optimisation. It took place a long time ago at a major university in Texas, between a PhD student and his advisor (who was later elected to one of the national academies). As I recall, the student insisted on studying an analysis/optimisation problem the advisor had shown to be intractable. Soon they were no longer on speaking terms. But the student was right: the problem was tractable enough in most practical cases that the dissertation he produced became a reference work.
And to comment further on the statement that Perl parsing is not computable (whatever is meant by that sentence), there is a similar problem with ML, which is a remarkably well-formalized language. Type checking in ML has complexity doubly exponential in the length of the program. That is a very precise and formal result in worst-case complexity ... which does not matter at all. AFAIK, ML users are still waiting for a practical program that will explode the type checker.
In many cases, as before, human time and competence are scarcer than computing power.
The real problem of the future will be to evolve our languages to integrate new knowledge and new programming forms, without having to rewrite all the legacy software that is still in use.
If you look at mathematics, it is a very large body of knowledge. The languages used to express it, its notations and concepts, have evolved over the centuries. It is easy to restate old theorems with the new concepts. We do adapt the main proofs, but do not bother for most results.
But in the case of programming, we might have to rewrite all the proofs from scratch (programs are proofs). It may be that what we really need are very high-level and evolvable programming languages. Optimizer designers will be happy to follow.
There are dynamic languages that allow optional type annotations (e.g. Clojure). From what I know, the performance of type-annotated functions is equivalent to what a statically typed language would give. – Pedro Rolo – 2016-05-13T11:24:41.283
This question is a typical initiator of religious wars. And as we can see from the answers, we're having one, albeit a very civilized one. – Andrej Bauer – 2013-01-06T09:55:42.200
I think your question is very badly phrased. "C++-like" - what is "like"? If C++ does it in 1 ns and Ruby does it in 1 ms, is that "like" enough? No, a language missing optimization paths can never be as fast as one that knows everything there is to know. Many of the answers assume hypothetical compilers that know some things about your code. But plain Ruby/Python does not guarantee you anything those assume. You need to check at runtime, by design. And even if it is just a single instruction more, you have just proven that this code is slower - by just one instruction, but slower nonetheless. – mmlac – 2017-06-07T03:39:31.490
Can you rephrase the question so it encourages answers backed by proper evidence over others? – Raphael – 2012-03-28T23:29:45.300
@Raphael I've gone ahead and edited. I think my edit doesn't fundamentally change the meaning of the question, but makes it less inviting of opinion pieces. – Gilles 'SO- stop being evil' – 2012-03-29T00:19:18.083
In particular, I think you need to fix one (or a few) concrete performance measures. Runtime? Space usage? Energy usage? Developer time? Return on investment? Note also this question on our meta that concerns this question (or rather its answers). – Raphael – 2012-03-29T00:20:56.050