IronPython 2.0 and Jython 2.5 performance compared to Python 2.5
My previous post covering the performance problems I’ve been experiencing with IronPython raised some questions about whether the low performance was an effect peculiar to my system, or to my program — the OWL BASIC compiler — where the problem was first noticed. To briefly recap, I’d determined that IronPython was around 100× slower than CPython on the same program.
Since then, I’ve had time to reproduce the results with a small and completely unremarkable Python program, and also to run the tests on a different system. I had suspected that in the OWL BASIC compiler, my Python visitor implementation, which is used in applying transformations to the abstract syntax tree, was to blame. I set about condensing a tree visitor down to a small example, but I never got that far. It is sufficient to simply build a large binary tree to demonstrate the dramatic differences in the performance characteristics of the three main Python implementations.
Here is that test program, which just builds a simple binary tree of objects to the requested depth.
import sys

class Node(object):
    counter = 0

    def __init__(self, children=None):
        Node.counter += 1
        self._children = children

def make_tree(depth):
    if depth > 1:
        return Node([make_tree(depth - 1), make_tree(depth - 1)])
    else:
        return Node()

def main(argv=None):
    if argv is None:
        argv = sys.argv
    depth = int(argv[1]) if len(argv) > 1 else 10
    root = make_tree(depth)
    print Node.counter
    return 0

if __name__ == '__main__':
    sys.exit(main())
The program builds a binary tree to the depth supplied as the only command line argument, or ten if one is not supplied. It counts the number of nodes as they are built. Remember that the merits or otherwise of this program are not the point! The point is the performance difference between the Python implementations when it is run.
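For reference when reading the charts, a full binary tree built to depth d contains 2**d − 1 nodes, so the depths quoted below map to node counts like so:

```python
def node_count(depth):
    """A full binary tree built to `depth` contains 2**depth - 1 nodes."""
    return 2 ** depth - 1

# Depths referenced in the discussion below.
for depth in (10, 15, 19, 22):
    print(depth, node_count(depth))
```

This is why a tree depth of 10 corresponds to roughly a thousand nodes and a depth of 19 to around half a million.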
My benchmarking approach has been to run this script five times for each tree depth, from a depth of one upwards to 22, or until my patience was exhausted. I’ve taken the minimum time from each set of five runs. Since there is a non-linear relationship between the depth of the tree and the number of nodes contained therein, logarithmic axes are used in all the charts that follow.
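A minimal sketch of that min-of-five protocol, for anyone who wants to reproduce the measurements (this is my illustration, not the exact harness used here; `time.perf_counter` is a Python 3 API, whereas the original runs would have used the timing functions available in 2.5):

```python
import time

def make_tree(depth):
    # Simplified stand-in for the Node tree: nested lists of children.
    return [make_tree(depth - 1), make_tree(depth - 1)] if depth > 1 else []

def best_of(runs, depth):
    """Minimum wall-clock build time over `runs` repetitions, in seconds."""
    best = float('inf')
    for _ in range(runs):
        start = time.perf_counter()
        make_tree(depth)
        best = min(best, time.perf_counter() - start)
    return best

for depth in range(1, 15):
    print('depth %2d: %.6f s' % (depth, best_of(5, depth)))
```

Taking the minimum, rather than the mean, of the five runs discards transient interference from other processes.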
64 bit Windows Vista x64
Here are the results for the first test machine – with dual quad-core 1.86 GHz Xeons with 4 GB RAM running Vista x64, testing IronPython 2.0 on .NET 2.0, Jython 2.5rc2 on Java Hotspot 1.6.0 and Python 2.5.2.
In Figure 1 we see that above 1000 nodes or so (tree depth of 10) performance for IronPython begins to degrade rapidly. CPython holds out for another two orders of magnitude before the significant costs begin to be felt. It’s interesting to see that although Jython is in the middle of the pack, it scales much better than CPython, surpassing it at around half-a-million nodes (tree depth of 19).
In my application — a compiler — virtual machine (VM) start-up time is important; however, in many long-running applications this is not the case, so it is interesting to subtract VM start-up time from each series, which we see in Figure 2, below.
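One way to make that adjustment is to treat the time at the smallest depth as a proxy for VM start-up, and subtract it from every measurement. A sketch of that idea, with hypothetical timings (this is an assumption about the method; the article itself does not spell out how start-up time was estimated):

```python
def subtract_startup(times_by_depth):
    """Subtract an estimated VM start-up cost from each measurement.

    Assumption: at the smallest depth nearly all of the elapsed time is
    interpreter start-up, so that measurement approximates the fixed cost.
    """
    startup = times_by_depth[min(times_by_depth)]
    return dict((d, t - startup) for d, t in times_by_depth.items())

# Hypothetical timings in seconds, not the measured data from the charts.
raw = {1: 0.50, 10: 0.62, 15: 3.90, 19: 55.0}
print(subtract_startup(raw))
```

After the subtraction, only the depth-dependent work remains in each series, which is what Figure 2 plots.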
Below 100 tree nodes, there is a lot of noise in these measurements. Above 100 nodes it’s easy to see that the blue IronPython curve is at least two chart divisions above the red CPython curve — that’s two orders of magnitude, or 100× slower, and getting relatively worse as the size of the tree increases.
32 bit Windows XP x86
Responses to my earlier article suggested that trying IronPython 2.0.1 with Ngen’ed binaries on x86 may make a difference. Well, to cut a long story short, it doesn’t, but here are the details. These tests were run on a 900 MHz Pentium M Centrino laptop with 768 MB RAM, and so cannot be directly compared with those above, although it’s notable that a one year old workstation is only twice as fast as a five year old laptop. Moore’s law certainly isn’t delivering here!
On x86, IronPython is still 100× slower than CPython, and Jython still scales better. It seems the essence of this benchmark is not dependent on which hardware or CLR platform it is run.
I’ll close by re-presenting the data in the x86 benchmarks as multiples of CPython performance, because it dramatically demonstrates the different responses to the scale of the problem size for IronPython and Jython. Again we see Jython catching up with CPython at a tree depth of 19, just as we saw on x64, and IronPython delivering performance 6000× worse than CPython at a tree depth of 15. A tree of this size, with around thirty-two thousand nodes, is very similar in scale to the AST sizes found in OWL BASIC during compilation of large programs.
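The re-presentation itself is a simple per-depth division by the CPython time. A sketch, using hypothetical numbers rather than the measured data from the charts:

```python
def multiples_of_cpython(results):
    """results: {impl: {depth: seconds}} -> each non-CPython series
    expressed as a multiple of the CPython time at the same depth."""
    base = results['CPython']
    return dict((impl, dict((d, t / base[d]) for d, t in series.items()))
                for impl, series in results.items() if impl != 'CPython')

# Hypothetical timings, in seconds (illustrative values only):
results = {
    'CPython':    {10: 0.01, 15: 0.05},
    'IronPython': {10: 1.0,  15: 300.0},
    'Jython':     {10: 0.5,  15: 0.6},
}
print(multiples_of_cpython(results))
```

On this view CPython is a flat line at 1×, so a series dipping below 1 is outperforming it.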
- IronPython can be very slow, even on programs in the microbenchmark category, which are doing standard operations such as building trees. Presumably there are still significant optimizations to be made in IronPython to bring its performance closer to that of the other Python implementations. Hopefully, this example and the measurements can contribute to that improvement.
- Jython may scale better than CPython if your application exercises Python in similar ways to this benchmark. Speculatively, that could have implications for projects such as SCons, which build large in-memory graphs.
- I suppose if nothing else we have demonstrated in passing that Java can be faster than C for some non-trivial programs (like a Python interpreter) running a trivial program, like this benchmark.