Timing Trials, or, the Trials of Timing: Experiments with Scripting and User-Interface Languages
By Brian W. Kernighan (Bell Laboratories) and Christopher J. Van Wyk (Department of Mathematics and Computer Science, Drew University).
This paper describes some basic experiments to see how fast various popular scripting and user-interface languages run on a spectrum of representative tasks. We found enormous variation in performance, depending on many factors, some uncontrollable and even unknowable. There seems to be little hope of predicting performance in other than a most general way; if there is a single clear conclusion, it is that no benchmark result should ever be taken at face value. A few general principles hold:
- Compiled code usually runs faster than interpreted code: the more a program has been “compiled” before it is executed, the faster it will run.
- Memory-related issues and the effects of memory hierarchies are pervasive: how memory is managed, from hardware caches to garbage collection, can change runtimes dramatically. Yet users have no direct control over most aspects of memory management.
- The timing services provided by programs and operating systems are woefully inadequate. It is difficult to measure runtimes reliably and repeatably even for small, purely computational kernels, and it becomes significantly harder when a program does much I/O or graphics.
Although each language shines in some situations, there are visible and sometimes surprising deficiencies even in what should be mainstream applications. We encountered bugs, size limitations, maladroit features, and total mysteries for every language.
This paper describes experiments to compare the performance of scripting languages (like Awk, Perl, and Tcl) and interface-building languages (like Tcl/Tk, Java, and Visual Basic) on a set of representative computational tasks. We found this challenging, with more difficulties and fewer clear-cut results than we had expected.