Optimizing Jint part 7: Rewriting the interpreter

So when you start to hit a wall trying to make things faster with the current infrastructure, you can always level the playing field by re-thinking the problem and finding a better approach. A recent big change did exactly that: refactoring how the JavaScript AST is being interpreted.

Some history

Jint was using two separate AST node executors, a statement interpreter and an expression interpreter. The engine delegated calls to the correct logic for each type of statement and expression, basically running a big recursion down the AST tree and jumping to the right handler based on node type.
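Conceptually, that dispatch looks like the following sketch. This is TypeScript with invented node shapes for illustration, not Jint's actual C# code:

```typescript
// Sketch of the old design: one big switch that recurses over the
// AST and picks a handler per node type. The node shapes here are
// made up for the example.
type Expression =
  | { type: "Literal"; value: number }
  | { type: "Binary"; op: "+" | "*"; left: Expression; right: Expression };

function evaluate(node: Expression): number {
  switch (node.type) {
    case "Literal":
      return node.value;
    case "Binary": {
      // Recurse with no state: nothing is cached between calls,
      // and the switch runs again for every child on every visit.
      const l = evaluate(node.left);
      const r = evaluate(node.right);
      return node.op === "+" ? l + r : l * r;
    }
  }
}
```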

This didn't offer many ways to help with optimization, though; it only gave a nice separation of logic by keeping statements and expressions in distinct methods on either class. There was no easy way to cache anything or reason about optimization possibilities, as you were at some point deep in the recursion with little knowledge of the surrounding context.

Stateful and more intelligent AST to the rescue

As a somewhat big refactoring, contained in this pull request, the recursive switch-casing was transformed into recursive AST node wrappers. This allowed us to have state and preparation for execution: a node's execution state can now hold a pre-computed, more efficient invocation setup.

The basic idea is to have a node container derived from either JintExpression or JintStatement that overrides the protected virtual EvaluateInternal or ExecuteInternal method. Better yet, a node can also override GetValue, which should resolve the value for the node.
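The wrapper idea can be sketched like this, here in TypeScript as a stand-in for the C# classes. The names mirror JintExpression and EvaluateInternal, but the types and the binary-expression example are simplified inventions, not Jint's implementation:

```typescript
// Base wrapper: the public evaluate() entry point calls the
// protected core that each concrete node type overrides.
abstract class JintExpression {
  evaluate(): unknown {
    return this.evaluateInternal();
  }
  protected abstract evaluateInternal(): unknown;
  // Default getValue delegates to full evaluation; subclasses can
  // override it with a cheaper path.
  getValue(): unknown {
    return this.evaluate();
  }
}

// A trivial leaf node used to make the sketch runnable.
class JintConstant extends JintExpression {
  constructor(private readonly value: number) { super(); }
  protected evaluateInternal(): unknown { return this.value; }
}

// Child wrappers are built once, up front, so evaluation no longer
// re-dispatches through a switch on every visit; the wrapper itself
// is the place to hang cached, prepared state.
class JintBinaryExpression extends JintExpression {
  constructor(
    private readonly left: JintExpression,
    private readonly right: JintExpression,
    private readonly op: "+" | "*",
  ) { super(); }
  protected evaluateInternal(): unknown {
    const l = this.left.getValue() as number;
    const r = this.right.getValue() as number;
    return this.op === "+" ? l + r : l * r;
  }
}
```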

For example, literal expression results can now easily be prepared for efficient returning, since their data won't change. Here's an example of such a prepared AST node, JintLiteralExpression:
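A self-contained TypeScript sketch of the idea follows; it mirrors the shape of Jint's JintLiteralExpression but is not the actual C# source:

```typescript
// Minimal base wrapper, repeated here so the sketch stands alone.
abstract class JintExpression {
  evaluate(): unknown { return this.evaluateInternal(); }
  protected abstract evaluateInternal(): unknown;
  getValue(): unknown { return this.evaluate(); }
}

// A literal's value never changes, so it is converted once at
// construction time and simply handed back after that.
class JintLiteralExpression extends JintExpression {
  private readonly cached: unknown;
  constructor(rawValue: unknown) {
    super();
    // Pre-calculate the returned value up front.
    this.cached = rawValue;
  }
  protected evaluateInternal(): unknown { return this.cached; }
  // Overrides the base getValue with a fast path that skips the
  // generic, more expensive value resolution entirely.
  getValue(): unknown { return this.cached; }
}
```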

The Jint wrapper pre-calculates the returned value and can not only serve the normal Evaluate call via its EvaluateInternal, but also offer a better-performing alternative for GetValue, which would otherwise delegate to the Engine's more generic method that has to do a lot more work to determine how to handle the result.

Going forward

As the groundwork is now in place, it should be easier to add more smarts to interpretation based on context, maybe even detect hot paths that would benefit from pre-calculating and caching more things. Hopefully this refactored structure will also help when developing more ES6 features.

