Lately I've been refactoring and extending an optimization project with a lot of technical debt (I'll write more about the problem in the future).
Sadly this project was started in Python, despite Python maybe not being the finest language for this kind of optimization problem. By the time I started working on it the whole performance budget was already exhausted, and I wanted to be sure that I didn't make it worse.
Python assert keyword
One of the techniques I employed was to aggressively add assert statements everywhere. So every time I thought "this assumption should be true" I wrote an assertion that checked this assumption.
Python assert looks like this:

assert expression
assert expression, message

The assert above will raise an AssertionError if expression is falsy.
However, since these assertions were often comparing e.g. large numpy arrays, they had a noticeable performance impact on the application.
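To illustrate the cost, here is a minimal sketch of the kind of assertion I mean (the function and the invariant are made up for illustration): the check touches every element of the array, so on large inputs it costs real time on every call.

```python
import numpy as np

def normalize(weights):
    """Rescale weights to sum to 1.0 (hypothetical optimization helper)."""
    result = weights / weights.sum()
    # Invariant check: touches every element, so on large arrays it adds
    # measurable overhead on every call -- unless stripped with -O.
    assert abs(result.sum() - 1.0) < 1e-9, "weights must sum to 1"
    return result
```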
So here enters Python's optimized mode, which can be triggered by:
- Starting the interpreter with the -O switch;
- Setting the PYTHONOPTIMIZE environment variable.
Python's optimized mode does basically two things:
- asserts are discarded during compilation to bytecode (so they are not executed, and not visible in the bytecode);
- the __debug__ constant is set to False, and any if blocks that depend only on this constant can be discarded as well.
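The assert-stripping behavior is easy to demonstrate by launching a child interpreter with and without -O (the snippet below is just a demonstration, not part of the project):

```python
import subprocess
import sys

code = "assert False, 'boom'\nprint('no asserts ran')"

# Normal mode: the assert fires and the process exits with an error.
normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)

# Optimized mode: the assert is stripped at compile time, so the
# print statement is reached as if the assert never existed.
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(normal.returncode)   # non-zero: AssertionError
print(optimized.stdout)    # "no asserts ran"
```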
So I could add as many assertions as I wanted, and still get decent performance on production.
Enabling the optimized mode might have unexpected consequences on large projects. This is (sadly) an interpreter-wide flag, and some modules:
- Use asserts as pre-condition checks (especially on user-supplied input) instead of checking internal invariants;
- Expect docstrings to be available (which is a problem with the -OO option, which also strips docstrings from the bytecode).
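The first point is worth spelling out, since the failure is silent. A sketch of the anti-pattern (the function names are made up): the assert-based check simply disappears under -O, while an explicit raise survives.

```python
# Anti-pattern: validating user-supplied input with assert.
# Under -O this check is removed and negative amounts slip through.
def withdraw_unsafe(balance, amount):
    assert amount > 0, "amount must be positive"
    return balance - amount

# Safer: an explicit check survives optimized mode.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    return balance - amount
```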
Luckily my project wasn't big, and had only well behaved dependencies.
The __debug__ variable
Then I needed to generate extra statistics for some optimization strategies; these included some non-trivial timing code and some other logic.
The optimization module's input contains no personal data (and really no data unrelated to the optimization problem), so I could store a reasonably representative set of input problems and run them as part of the integration tests.
So I decided to wrap all the stats code in if __debug__: blocks, like this:

# program flow
if __debug__:
    gather_stats()
# program flow continues
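A fuller sketch of how this looks in practice (the solver and the stats dictionary are hypothetical, not the project's actual code): the timing code runs normally during tests, and the compiler drops both if __debug__: blocks under -O.

```python
import time

STATS = {"calls": 0, "total_time": 0.0}

def do_solve(problem):
    # Stand-in for the real optimization step (hypothetical).
    return sum(problem)

def solve(problem):
    if __debug__:
        start = time.perf_counter()
    result = do_solve(problem)
    if __debug__:
        # All of this disappears from the bytecode under -O.
        STATS["calls"] += 1
        STATS["total_time"] += time.perf_counter() - start
    return result
```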
Now by default __debug__ is True; however, in optimized mode __debug__ is False, and all statements that depend only on the __debug__ constant can be removed from the bytecode by the compiler, so we (once again) don't lose any performance on the production system.
Depending on your Python version, the interpreter might or might not remove e.g. if __debug__ and other_variable, so I just stuck with:

if __debug__:
    if other_variable:
        do_stuff()
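You can check what your interpreter actually removes without restarting it: the built-in compile() accepts an optimize argument that mirrors the -O switch, so you can compare the bytecode of both modes side by side.

```python
import dis

src = "if __debug__:\n    print('gathering stats')\n"

# optimize=0 mimics normal mode, optimize=1 mimics the -O switch.
normal = compile(src, "<example>", "exec", optimize=0)
optimized = compile(src, "<example>", "exec", optimize=1)

normal_asm = dis.Bytecode(normal).dis()
optimized_asm = dis.Bytecode(optimized).dis()

# The print call survives normal compilation but the whole
# if __debug__: block is gone from the optimized bytecode.
```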
Python optimized mode caveats
These techniques worked well, and I managed to deliver a working, correct, fast-enough program.
However, there is a major caveat to using the optimized mode: you end up running different code during tests and on production, and if you use pytest then running your tests in optimized mode is non-trivial.
Combining pytest with optimized mode is hard because pytest largely relies on rewriting the assert statements themselves, and if the Python compiler removes all asserts your tests don't check anything.
I actually ran into a problem where too much code was in if __debug__ blocks, and everything worked up until I started testing it in the production container.