
Archive for May, 2009

IronPython hammers CPython when not mutating class attributes

May 22nd, 2009

Earlier today I posted the second article in what is turning out to be a short series investigating why IronPython is around 100× slower than CPython when running the front-end of my OWL BASIC compiler.

The most informative comment was from Curt Hagenlocher who works on IronPython in the Visual Studio Managed Languages group at Microsoft.

Curt suggested,

Try storing the counter as a global variable instead of a class-level member of Node — I think you’ll notice a dramatic improvement.

The modified benchmark program looks like this:

counter = 0

class Node(object):
    
    def __init__(self, children):
        global counter
        counter += 1
        self._children = children
                
def make_tree(depth):
    if depth > 1:
        return Node([make_tree(depth - 1), make_tree(depth - 1)])
    else:
        return Node([])
        
def main(argv=None):
    global counter
    if argv is None:
        argv = sys.argv
    depth = int(argv[1]) if len(argv) > 1 else 10
       
    root = make_tree(depth)
    print counter
    return 0
        
if __name__ == '__main__':
    import sys
    sys.exit(main())

A dramatic improvement!

Well, Curt wasn’t wrong. This made a phenomenal difference, with IronPython completing in only 12% of the time taken by CPython – over 8× faster with a binary tree depth of 20.

Let’s look in detail at the results. All results are from a dual quad-core 1.86 GHz Xeon with 4 GB RAM, and as before each benchmark was run five times, and the shortest time of the five taken.

The three test environments are:

  1. Python 2.5.2 x86 32-bit
  2. Jython 2.5rc2 on Java Hotspot 1.6 32-bit
  3. IronPython 2.0 on .NET 2.0 x64

Figure 1. The relative performance of the three main Python implementations on a benchmark that uses a global counter, rather than mutating a class attribute.

Here we can see how IronPython’s performance has been improved hugely by this simple change. Although startup time dominates at the smaller problem sizes, both Jython and IronPython now surpass CPython at around half-a-million nodes.

Removing start-up time, which may be irrelevant for long-running processes, gives us the following chart:


Figure 2. The performance of the three main Python implementations excluding start-up time.

Again there is a lot of noise in the data below 1000 nodes, but it is clear that Jython scales better than IronPython, which in turn scales better than CPython.

Up until now I’ve been using a log-log scale in the charts because of the wide variation in performance between the different implementations, but now that the performance gap is so much narrower, it’s difficult to get a sense of just how much faster IronPython is on the modified benchmark. Let’s throw in a log-linear plot to help us appreciate what’s going on:


Figure 3. A linear representation of the same data as in Figure 2, to highlight the performance multiple between IronPython and CPython in the larger tests.

It’s perhaps easier to see now that IronPython is doing in 14 seconds what takes CPython 114 seconds to achieve!

Finally, let’s plot those results as we did before, as multiples of CPython performance:


Figure 4. Execution time of Jython and IronPython as multiples of CPython performance.

It is easy to see in this chart that, once we pass half-a-million tree nodes (a tree depth of 19), both Jython and IronPython are significantly beating CPython.

Explanation

Curt Hagenlocher offers the explanation in a comment in the thread on Reddit.

In this particular case, IronPython is slow because of the update to Node.counter. Currently, any update to a class will increment the version number for the class, which will have the effect of invalidating any rules compiled for that class. Effectively, the same rules are getting compiled over and over again. Moving the counter to a global should result in performance on par with that of CPython.

which is absolutely correct, except that he’s underselling the relative gain. IronPython is not only on a par with CPython, it can outperform it by a factor of eight!
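
If a class-level count is still wanted, the same effect can be achieved by keeping a class attribute that is only ever read, never rebound. Here is a minimal sketch of that alternative (my own illustration, which follows from Curt's explanation rather than coming from the original discussion):

from itertools import count

class Node(object):
    _next_id = count()   # this class attribute is read, never rebound

    def __init__(self, children):
        # Advancing the shared counter does not modify the Node class
        # itself, so the class version number should stay stable.
        self._id = self._next_id.next()   # Python 2 iterator protocol
        self._children = children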

With this knowledge in hand, I can now approach optimization of my OWL BASIC compiler, which lies back at the start of this illuminating tale.

Conclusion

  • Avoiding mutation of Python class attributes can have significant benefits for IronPython performance.
  • Both IronPython and Jython scale better than CPython on this benchmark, and have superior performance for large trees of nodes.

IronPython 2.0 and Jython 2.5 performance compared to Python 2.5

My previous post covering the performance problems I’ve been experiencing with IronPython raised some questions about whether the low performance was an effect peculiar to my system, or to my program — the OWL BASIC compiler — where the problem was first noticed. To briefly recap, I’d determined that IronPython was around 100× slower than CPython on the same program.

Since then, I’ve had time to reproduce the results with a small and completely unremarkable Python program, and also to run the tests on a different system. I had suspected that in the OWL BASIC compiler, my Python visitor implementation, which is used in applying transformations to the abstract syntax tree, was to blame. I set about condensing a tree visitor down to a small example, but I never got that far. It is sufficient to simply build a large binary tree to demonstrate the dramatic differences in the performance characteristics of the three main Python implementations.

The benchmark

Here is that test program, which just builds a simple binary tree of objects to the requested depth.

class Node(object):
    counter = 0
    
    def __init__(self, children):
        Node.counter += 1
        self._children = children
                
def make_tree(depth):
    if depth > 1:
        return Node([make_tree(depth - 1), make_tree(depth - 1)])
    else:
        return Node([])
        
def main(argv=None):
    if argv is None:
        argv = sys.argv
    depth = int(argv[1]) if len(argv) > 1 else 10
       
    root = make_tree(depth)
    print Node.counter
    return 0
        
if __name__ == '__main__':
    import sys
    sys.exit(main())

The program builds a binary tree to the depth supplied as the only command line argument, or ten if one is not supplied. It counts the number of nodes as they are built. Remember that the merits or otherwise of this program are not the point! The point is the performance difference between the Python implementations when it is run.

My benchmarking approach has been to run this script five times for each tree depth, from a depth of one upwards to 22, or until my patience was exhausted, and to take the minimum time from each set of five runs. Since there is a non-linear relationship between the depth of the tree and the number of nodes contained therein, logarithmic axes are used in all the charts that follow.
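
For the curious, the harness amounts to something like the following sketch (the interpreter command and script name here are placeholders for the actual paths used on each implementation):

import subprocess
import time

def best_of_five(command, depth):
    # Run the benchmark script five times at the given depth, including
    # interpreter start-up, and keep the minimum wall-clock time.
    times = []
    for _ in range(5):
        start = time.time()
        subprocess.call(command + [str(depth)])
        times.append(time.time() - start)
    return min(times)

for depth in range(1, 23):
    elapsed = best_of_five(['python', 'tree_benchmark.py'], depth)
    print depth, 2 ** depth - 1, elapsed   # depth, node count, seconds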

64-bit Windows Vista x64

Here are the results for the first test machine – with dual quad-core 1.86 GHz Xeons with 4 GB RAM running Vista x64, testing IronPython 2.0.0.0 on .NET 2.0, Jython 2.5rc2 on Java Hotspot 1.6.0 and Python 2.5.2.


Figure 1. Creation time for a binary tree including Python virtual machine startup on Windows Vista x64 with 1.86 GHz Xeon processors.

In Figure 1 we see that above 1000 nodes or so (a tree depth of 10) the performance of IronPython begins to degrade rapidly. CPython holds out for another two orders of magnitude before the significant costs begin to be felt. It’s interesting to see that although Jython is in the middle of the pack, it scales much better than CPython, surpassing it at around half-a-million nodes (a tree depth of 19).

In my application — a compiler — virtual machine (VM) start-up time is important; however, in many long-running applications this is not the case, so it is interesting to subtract VM start-up time from each series, which we see in Figure 2, below.

Figure 2. By subtracting VM start-up time, we get a picture more interesting for long-running processes.

Below 100 tree nodes, there is a lot of noise in these measurements. Above 100 nodes it’s easy to see that the blue IronPython curve is at least two chart divisions above the red CPython curve — that’s two orders of magnitude, or 100× slower, and getting relatively worse as the size of the tree increases.

32-bit Windows XP x86

Responses to my earlier article suggested that trying IronPython 2.0.1 with Ngen’ed binaries on x86 might make a difference. Well, to cut a long story short, it doesn’t, but here are the details. These tests were run on a 900 MHz Pentium M Centrino laptop with 768 MB RAM, and so cannot be directly compared with those above, although it’s notable that a one-year-old workstation is only twice as fast as a five-year-old laptop. Moore’s law certainly isn’t delivering here!

The performance profiles are very similar with IronPython 2.0.1 on x86.

On x86, IronPython is still 100× slower than CPython, and Jython still scales better. It seems the essence of this benchmark does not depend on the hardware or CLR platform on which it is run.

I’ll close by re-presenting the data in the x86 benchmarks as multiples of CPython performance, because it dramatically demonstrates the different responses of IronPython and Jython to the scale of the problem. Again we see Jython catching up with CPython at a tree depth of 19, just as we saw on x64, and IronPython performing 6000× worse than CPython at a tree depth of 15. A tree of this size, with around thirty thousand nodes, is very similar in scale to the AST sizes found in OWL BASIC during compilation of large programs.


Performance of IronPython and Jython as multiples of CPython performance.

Conclusions

  • IronPython can be very slow, even on programs in the microbenchmark category, which are doing standard operations such as building trees. Presumably there are still significant optimizations to be made in IronPython to bring its performance closer to that of the other Python implementations. Hopefully, this example and the measurements can contribute to that improvement.
  • Jython may scale better than CPython if your application exercises Python in similar ways to this benchmark. Speculatively, that could have implications for projects such as SCons, which build large in-memory graphs.
  • I suppose if nothing else we have demonstrated in passing that Java can be faster than C for some non-trivial programs (like a Python interpreter) running a trivial program, like this benchmark.

Dismal performance with IronPython

May 17th, 2009

Significant claims have been made about the performance of IronPython, notably back at its inception in 2004 when Jim Hugunin, creator of both IronPython and its cousin Jython, presented a paper on IronPython performance at PyCon. Since then, there have been numerous claims of IronPython’s supremacy over CPython in the performance stakes. The IronPython Performance Report reiterates that IronPython can turn in a good performance. According to Hugunin, the standard line we’ll see is:

“IronPython is fast – up to 1.8x faster than CPython on the standard pystone benchmark.”

But do these claims stand up in the face of real-world Python code?

The claims of good performance are based on synthetic micro-benchmarks which don’t accurately reflect the balance of coding techniques found in complete programs.
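
For reference, the pystone figure that such claims rest on is easy to reproduce, since the benchmark ships in CPython’s standard library (a sketch; running it under IronPython or Jython assumes a CPython-style Lib directory containing test/pystone.py is on sys.path):

from test import pystone

# pystones() returns the elapsed benchmark time and the pystones/second rating.
benchtime, stones = pystone.pystones(loops=50000)
print "%.2f seconds, %.0f pystones/second" % (benchtime, stones)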

At this point I’d like to offer my own quote:

“IronPython can be slow – 10x to 100x slower than CPython on real-world code, and it has been observed to be up to 6000x slower.”

Now, the unfortunate thing about real-world code is that it hasn’t been hand-crafted to highlight the performance characteristics of some aspect of the language implementation, like your typical micro-benchmark. It’s been written, most likely, without any attention to performance. This is especially true in the case of a prototype, as in my case.

Is my code really that bad?

Over the past few years I’ve been working on a hobby project called OWL BASIC, which is a compiler for the BBC BASIC language targeting the .NET Common Language Runtime (CLR). At the outset of the project I decided to write the compiler itself in Python, for much the same reasons as PyPy is written in Python — hackability. I planned specifically to use IronPython so I could benefit from access to useful .NET libraries such as Reflection.Emit, for generating Common Intermediate Language (CIL) assemblies.
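
To give a flavour of what that access buys, here is a rough sketch of emitting a trivial dynamic method from IronPython (my own illustration, not OWL BASIC code; it uses the .NET 2.0 DynamicMethod overload that takes an owning module):

import clr
from System import Type, Array
from System.Reflection.Emit import DynamicMethod, OpCodes

int_type = clr.GetClrType(int)   # System.Int32

# Build a dynamic method equivalent to: def add_one(x): return x + 1
method = DynamicMethod('add_one', int_type, Array[Type]((int_type,)),
                       int_type.Module)
il = method.GetILGenerator()
il.Emit(OpCodes.Ldarg_0)    # push the argument
il.Emit(OpCodes.Ldc_I4_1)   # push the constant 1
il.Emit(OpCodes.Add)        # add them
il.Emit(OpCodes.Ret)        # return the result

print method.Invoke(None, Array[object]((41,)))   # prints 42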

During the course of developing OWL BASIC, as the code has become more complex, I’ve been consistently disappointed by the negative delta between the promise and reality of IronPython’s performance. The poor performance of IronPython has negated, for me, one of the main advantages of developing in Python — the rapid edit-run cycle.

Was my code really so inefficient? Performance was never a goal of the project, but the underwhelming performance has threatened the viability of my approach and made me question the wisdom of my chosen route of writing a compiler in Python.

The absence of profiling tools for IronPython led me down the road of getting at least the compiler front-end to work on CPython, so I could use the standard Python profilers. Fortunately, my code was portable (it’s just regular Python) and so I determined that, with a few inconsequential tweaks to my code, the entire compiler front-end could be run on the trinity of CPython, Jython and IronPython.
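
Once the front-end runs on CPython, profiling it takes only a couple of lines (a sketch; compile_front_end and the file name are stand-ins for the real compiler entry point and input program):

import cProfile
import pstats

cProfile.run('compile_front_end("sphinx_adventure.bas")', 'frontend.prof')
stats = pstats.Stats('frontend.prof')
stats.sort_stats('cumulative').print_stats(20)   # top twenty calls by cumulative time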

I was, to say the least, somewhat surprised by the results.

The evidence: performance of CPython, Jython and IronPython

The investigation into the relative performance of the three main Python implementations was centered on running my unmodified, unprofiled and unoptimized compiler front-end on third-party BBC BASIC source code (Acornsoft Sphinx Adventure) and measuring the execution time. All test runs were performed five times, and the minimum time of the five taken. Variance between the readings on successive runs was small. The following Python implementations were on the test-bench:

  • Python 2.5.1 (x86)
  • Jython 2.5rc2 on Java HotSpot 1.6.0_10-rc
  • IronPython 2.0 on .NET 2.0 (x64)

All tests were run on an eight-core 1.86 GHz Xeon with 4 GB RAM running Vista Ultimate x64.

The following chart shows the results of running the compiler over the source code for Sphinx Adventure.


The absolute performance of CPython, Jython and IronPython on my code

Frankly, I was astounded. IronPython was left in the dust, not only by CPython but also by Jython! Overall, CPython was 93× faster on the exact same code. The IronPython hyperbole, for I could now see that that was what it was, had led me to expect numbers similar to those for CPython, although I had perhaps more realistic expectations that performance would be similar to Jython’s.

At this point I assumed I’d hit some corner case in which IronPython was performing relatively badly. I’d had a similar experience early on in the project with code from the PLY parsing package causing IronPython 1.1 to perform badly, but I’d worked around the issue by modifying parts of PLY to use pickling rather than eval-ing large list literals for the cached parsing tables. [It’s worth noting in passing that this problem still exists with the IronPython and PLY combination - I'll publish my solution in another post].
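
The workaround amounted to something along these lines (a simplified sketch of the idea, not the actual PLY patch): persist the generated parsing tables with pickle, instead of writing them out as an enormous Python literal that must be evaluated on every run.

import pickle

def save_table(table, filename):
    # Dumping the table is a one-off cost, paid only when the grammar changes.
    f = open(filename, 'wb')
    try:
        pickle.dump(table, f, pickle.HIGHEST_PROTOCOL)
    finally:
        f.close()

def load_table(filename):
    # Loading the pickle avoids eval-ing a multi-megabyte list literal on
    # every compiler start-up.
    f = open(filename, 'rb')
    try:
        return pickle.load(f)
    finally:
        f.close()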

I decided to dig into the detail a little more and collect some timing data on the sequence of top-level calls the compiler front-end makes to parse the source code, followed by a sequence of transformations to the abstract syntax tree (AST). The chart below shows these top-level calls, in the order in which they occur during compilation:


The ratio of performance of CPython to both IronPython and Jython when running the OWL BASIC compiler

As you can see, I have had to resort to a logarithmic scale in order to convey the huge variation in the performance of IronPython relative to CPython, ranging from buildParser with a multiple of 3.8 to convertSubroutinesToProcedures with a multiple of over 6400. Even if we ignore this outlier (maybe we are suffering from a garbage collection – but notice that Jython is also relatively much slower on this function) we can see that Jython is typically 5× to 10× slower than CPython whereas IronPython is typically 10× to 100× slower than CPython.

Notice also that there is a marked decline in the performance of IronPython from the parse function onwards; the common factor in these operations is that they are all transformations to the AST, and my AST node classes are created by a Python metaclass, although it’s pure speculation on my part that metaclasses are the cause of this performance drop.
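
For anyone unfamiliar with the pattern, this is the kind of metaclass-built node class I mean (a purely illustrative sketch; the real OWL BASIC node classes are not shown here):

class AstMeta(type):
    def __new__(mcls, name, bases, namespace):
        # Turn a declarative 'child_slots' tuple into a generated __init__,
        # so each node subclass needs no boilerplate of its own.
        slots = namespace.get('child_slots', ())
        def __init__(self, **kwargs):
            for slot in slots:
                setattr(self, slot, kwargs.get(slot))
        namespace.setdefault('__init__', __init__)
        return type.__new__(mcls, name, bases, namespace)

class AstNode(object):
    __metaclass__ = AstMeta
    child_slots = ()

class BinaryOp(AstNode):
    child_slots = ('lhs', 'rhs')

node = BinaryOp(lhs='a', rhs='b')   # attributes generated from child_slots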

Conclusion

On my program at least, Jython is 6× slower than CPython and IronPython is nearly 100× slower. If you’re suffering from poor performance with IronPython, it may well be worth your time checking the performance of your code on the other Python implementations, if that is an option for you.

Now I need to find the root cause and boil my problem down to a short example which can form the basis of a bug report to the IronPython team. Given that the problem is pervasive in my code, that won’t be hard.

See you at EuroPython 2009!