Generalized string search improvement for needles with a small or numerically similar alphabet

I have been on a bit of a pointless optimization kick lately, and decided to see what I could do with string search. Most of the fast string search algorithms work on the principle of a sliding window, skipping characters which don’t need to be checked. The best of them use a fair (fair being relative) amount of storage and extra cycles in the loop to make sure they skip as many characters as possible.

I am sure this has been done before, but I haven’t seen it. In the code below I have added an unobtrusive extra level of skipping to the well-known Boyer-Moore-Horspool algorithm. Basically, all of the characters of the needle are AND’ed together and the result stored. If the result is zero, which happens if the alphabet is large or sparse enough, the extra checks are conditionalized away. If the result is non-zero, we very quickly check for mismatches by AND’ing the haystack character being checked against our stored result, and testing whether that equals the stored result. If it does, we have just checked a potentially matching character, and we need to fall back to our regular checking routine. In some cases the quick check will pass for a character that doesn’t actually match and our effort will have been wasted, but in other cases we will have determined a non-match and be able to skip the full length of the needle in just a few instructions.

Original source for Boyer-Moore-Horspool lifted from Wikipedia.

boyer-moore-horspool.c
boyer-moore-horspool-sjg.c
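
For reference, here is a condensed sketch of the modified search. It is not the exact code from the files above, and the function and variable names are my own:

#include <limits.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the technique: mask holds the AND of every needle byte.
 * When mask is non-zero, any window byte that fails
 * (byte & mask) == mask cannot occur anywhere in the needle, so we
 * can skip a full needle length in just a few instructions. */
const unsigned char *
bmh_sjg(const unsigned char *haystack, size_t hlen,
        const unsigned char *needle, size_t nlen)
{
    size_t skip[UCHAR_MAX + 1];
    size_t i;
    unsigned char mask;

    if (nlen == 0 || hlen < nlen)
        return NULL;

    /* Standard Boyer-Moore-Horspool bad-character table. */
    for (i = 0; i <= UCHAR_MAX; i++)
        skip[i] = nlen;
    for (i = 0; i < nlen - 1; i++)
        skip[needle[i]] = nlen - 1 - i;

    /* AND all needle bytes together; non-zero only when the needle's
     * alphabet is small or numerically similar. */
    mask = needle[0];
    for (i = 1; i < nlen; i++)
        mask &= needle[i];

    for (i = 0; i + nlen <= hlen; ) {
        unsigned char last = haystack[i + nlen - 1];

        /* The cheap pre-check described above. */
        if (mask != 0 && (last & mask) != mask) {
            i += nlen;
            continue;
        }
        /* Regular checking routine. */
        if (last == needle[nlen - 1] &&
            memcmp(haystack + i, needle, nlen - 1) == 0)
            return haystack + i;
        i += skip[last];
    }
    return NULL;
}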

Below are some quickie results from my X3220, compiled with GCC 4.2.1.


$ gcc -O2 boyer-moore-horspool.c -o boyer-moore-horspool
$ gcc -O2 boyer-moore-horspool-sjg.c -o boyer-moore-horspool-sjg
$ time ./boyer-moore-horspool
61
28
16
./boyer-moore-horspool 5.53s user 0.00s system 99% cpu 5.530 total
$ time ./boyer-moore-horspool-sjg
61
28
16
./boyer-moore-horspool-sjg 5.21s user 0.00s system 99% cpu 5.210 total

$ gcc -O3 -mtune=nocona boyer-moore-horspool.c -o boyer-moore-horspool
$ gcc -O3 -mtune=nocona boyer-moore-horspool-sjg.c -o boyer-moore-horspool-sjg
$ time ./boyer-moore-horspool
61
28
16
./boyer-moore-horspool 5.28s user 0.00s system 99% cpu 5.282 total
$ time ./boyer-moore-horspool-sjg
61
28
16
./boyer-moore-horspool-sjg 5.02s user 0.01s system 99% cpu 5.034 total

The, dare I say, “elegant” thing about this addition is that it could relatively easily be applied to many other string search algorithms, and completely conditionalized away from the inner loop when the results are going to be ineffectual.

Virtual machine opcode dispatch experimentation

I was reading The case for virtual register machines recently and decided to do a bit of experimentation with different opcode dispatch methods. Apparently, up to 60% of the CPU time burned by common virtual machines is due to branch mispredicts. This is rather a silly problem to have in the context of opcode dispatch, considering the VM knows quite readily exactly where it will be branching to for each VM instruction. There is really no reason for the mispredicts apart from the fact that we can’t actually tell the CPU what we know.

Since there is no useful mechanism of any sort (at least on any x86 CPU that I know of) to tell the CPU that the branch at foo will go to bar (short of JIT’ing everything, which can indirectly solve the branch mispredicts that happen at the opcode dispatch stage), the best we can really hope to do is attempt to seed the branch predictor with past branch information that will hopefully prove useful in the future. This proves to be somewhat problematic, as different CPUs have branch predictors implemented in different ways, with different capabilities and varying mispredict penalties. You also tend to burn cycles and space over more direct implementations; you just have to find the algorithm that lets you come out ahead due to increased prediction accuracy.
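
For concreteness, here is the shape of the classic switch-based dispatch loop, using a hypothetical five-opcode VM like the ones in my tests. Every opcode funnels back through the single indirect branch the switch compiles to, and that shared branch site is the one that eats the mispredicts:

/* Hypothetical 5-opcode VM with classic switch dispatch. */
enum op { OP_INC, OP_DEC, OP_ADD, OP_SUB, OP_HALT };

int run_switch(const unsigned char *code)
{
    int acc = 0, reg = 1;

    for (;;) {
        switch (*code++) {      /* one indirect jump shared by all opcodes */
        case OP_INC:  acc++;       break;
        case OP_DEC:  acc--;       break;
        case OP_ADD:  acc += reg;  break;
        case OP_SUB:  acc -= reg;  break;
        case OP_HALT: return acc;
        }
    }
}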

To really figure out what is going to work best for a full-blown VM, I think you need to start at the beginning. The paper above referenced a couple of different ways that opcode dispatch is typically accomplished, but I wanted to write my own test cases and figure out exactly what would work best, and more importantly, what definitely was not going to work, so that I could avoid wasting time on it in the future. Ruling things out is the more important half, simply because the faster-running algorithms will very likely be somewhat dependent on the number of opcodes a VM implements and the frequency with which it executes opcodes repeatedly or in the same order.

My preliminary test cases are on github here: http://github.com/evilsjg/hacks/tree/master/troa/test/dispatch, and the runtime results with various compilers on various CPUs are here: http://github.com/evilsjg/hacks/tree/master/troa/test/dispatch/RESULTS.

As you can see, the “goto direct” version is the fastest in every case, by a relatively healthy margin. To qualify these results I implemented the same method of dispatch as the goto direct case in the Lua VM. Much to my dismay it was consistently slower (~10%) than the switch-based dispatch that is standard in Lua. After quickly realizing this was purely a function of opcode count, 5 in my tests vs. 38 in Lua, I modified my Lua patch to be more like the goto direct 2 example. (Runtimes are not provided in the RESULTS file for that variant, but it was marginally slower than the goto direct case.) After this, Lua was consistently faster (up to around 30%) on some of my test hardware, and marginally slower on the rest. Making minor changes to the breadth or depth of the nested if or switch statements expanded into each opcode caused minor swings one way or the other on all processors tested. Typically, faster on my Xeon 3220 meant slower on my Athlon XP 2500+, and vice versa, but by differing magnitudes: the Xeon speeds up by more than the Athlon slows down.
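
The “goto direct” flavor relies on GCC’s computed-goto extension (&&label). What follows is a sketch of the shape, not the exact code from the repository; the opcodes are the same invented five as above. Each opcode body ends with its own indirect jump, so the predictor gets per-opcode branch history instead of one shared, unpredictable branch site:

/* "goto direct" dispatch using GCC's labels-as-values extension. */
int run_goto(const unsigned char *code)
{
    static const void *dispatch[] = {
        &&do_inc, &&do_dec, &&do_add, &&do_sub, &&do_halt
    };
    int acc = 0, reg = 1;

    goto *dispatch[*code++];
do_inc:  acc++;        goto *dispatch[*code++];
do_dec:  acc--;        goto *dispatch[*code++];
do_add:  acc += reg;   goto *dispatch[*code++];
do_sub:  acc -= reg;   goto *dispatch[*code++];
do_halt: return acc;
}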

The entry point to my post about this on the Lua list can be found here.

There is obviously performance to be had here, probably quite a bit of it. My next bit of testing will focus on expanded (in number of opcodes) versions of the faster test cases, with a more realistic opcode distribution. In terms of algorithmic improvements, I am going to try grouping opcodes in various ways, adding the group identifier to the opcode itself, so that the dispatch data structures can nest like switch (group) { case n: switch (op) { } * n }. I am also going to play with the concept of simple opcode or group ordering rules. The compiler frontend of any VM follows some set of rules, intended or not, for generating the opcodes that the VM executes. Even with a VM implementation that does not enforce those rules and allows opcode execution in any order, knowing the likely order will no doubt be useful for optimization.
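
Roughly, the grouped dispatch would be shaped like this. The encoding and the groups themselves are invented for illustration:

/* High bits of the opcode select a group, low bits select the
 * operation within it, so dispatch nests as described above. */
#define GROUP(op)  ((op) >> 4)   /* high bits: group id       */
#define MEMBER(op) ((op) & 0xf)  /* low bits: op within group */

int dispatch_grouped(unsigned op, int acc, int reg)
{
    switch (GROUP(op)) {
    case 0:                      /* arithmetic group */
        switch (MEMBER(op)) {
        case 0: acc += reg; break;
        case 1: acc -= reg; break;
        }
        break;
    case 1:                      /* logic group */
        switch (MEMBER(op)) {
        case 0: acc &= reg; break;
        case 1: acc |= reg; break;
        }
        break;
    }
    return acc;
}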

A requirement in my mind early on for the TROA VM was the easy evaluation of expressions on vectors or streams in the language, to make extensive use of SIMD possible inside the VM. Having done this opcode dispatch testing, that concept has been weighted right up to the top of my list. Even in the basic unoptimized case, where your opcode operates on its vector/stream serially, there is still potential for a double-digit overall program performance improvement due to the reduction in opcode dispatches.
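
As a sketch of what I mean, a made-up vector-add opcode: a single dispatch covers the whole stream, so the dispatch cost is amortized even before any actual SIMD enters the picture, and the inner loop is exactly the shape a compiler (or hand-written SIMD) can vectorize:

#include <stddef.h>

/* Hypothetical vector opcode body: one dispatch, n elements. */
void op_vadd(float *dst, const float *a, const float *b, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}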

Receiving asynchronous notifications of database changes, part 2

As I trudge forward with this explanation, please keep in mind that Epidemic is a proof of concept. I’m not saying that it wouldn’t work fine in production as-is, but odds are it will bring down your entire infrastructure AND club all of your pet baby seals.

Let’s quickly step through the code, starting with a simple entry point, mail_server.py. After pulling in the required classes and whatnot, the first thing this piece of code does is instantiate SQLNotifyDispatch, which inherits from SQLNotify. These two classes form the core of the code that does all the heavy lifting we wanted to avoid in our frontend. It cannot even fathom what to do, however, without a little help. Rather than telling it what tables to watch, and also implementing code to take action when something happens to a watched table, the framework has been designed to let one simply implement the code that takes action, and let that code inform SQLNotifyDispatch what exactly needs to be watched.

This brings us to mail/server.py, in which I will focus on the mail_users class. This class performs actions when modifications are made to the mail_users table. The class name does not necessarily need to mimic the table name, as can be seen if you look back at mail_server.py. First, the mail_users class is instantiated, and then it is registered with the dispatch object we created before. You will notice the first argument to the Register method is 'mail_users'; this is where the name of the table to watch is defined. Back in the mail_users class, now that we’ve let the Dispatch end of things know what table to watch, we need to let it know what columns we are interested in. This is where the GetCol method of the registered class comes in. As you can see, its implementation is very simple: return ['gid', 'dba', 'username', 'quota_bytes']. This tells Dispatch that we are interested in changes to those 4 columns; changes to other columns in the table are fantastic, but we don’t need to be notified of them.

The real meat is contained in backend/sql.py. Digging down through SQLNotifyDispatch and into SQLNotify, you will notice a number of static definitions in the SQLNotify class, such as TableCreate, Function, Trigger and Rule. This is the heart of the whole operation. When SQLNotify is told the table and columns to watch, it crafts a table to log changes to, and a function/trigger/rule trio specific to the table being watched that serves two purposes. First, it ensures that any operations happening on the watched table get stuffed into the log table. Second, it fires off PostgreSQL’s NOTIFY, to let us (SQLNotify) know that a change has happened. That’s right, it lets “US” know, because once this is all registered with the database it is out of our hands. This means that SQLNotify only has to make these changes the first time it watches a database. It also means, much more importantly, that no matter what happens to “US”, the application which set up the watching and is acting on changes, no changes will be lost. The application could crash; no matter, as soon as we come back up, we can poke into the log tables and see what has happened while we were away. It is possible in some cases to receive notification of changes a bit later than one would like, but you will always, always know if changes were made.
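
I won’t paste Epidemic’s actual SQL here, but the trio it generates is along these lines. Everything below is invented for illustration (table, function and trigger names, columns, and the notification channel), and I have folded the NOTIFY into the trigger function for brevity where Epidemic uses a separate rule. Shown as it might be issued from C with libpq:

#include <stdio.h>
#include <libpq-fe.h>

/* Rough shape of the per-table setup: a log table, plus a trigger
 * function that records each operation and signals listeners. */
static const char *setup[] = {
    "CREATE TABLE mail_users_log ("
    " op text, username text, at timestamptz DEFAULT now())",

    "CREATE FUNCTION mail_users_log_fn() RETURNS trigger AS $$ "
    "BEGIN "
    " IF TG_OP = 'DELETE' THEN "
    "  INSERT INTO mail_users_log (op, username)"
    "   VALUES (TG_OP, OLD.username); "
    " ELSE "
    "  INSERT INTO mail_users_log (op, username)"
    "   VALUES (TG_OP, NEW.username); "
    " END IF; "
    " NOTIFY mail_users_change; "
    " RETURN NULL; "
    "END; $$ LANGUAGE plpgsql",

    "CREATE TRIGGER mail_users_watch"
    " AFTER INSERT OR UPDATE OR DELETE ON mail_users"
    " FOR EACH ROW EXECUTE PROCEDURE mail_users_log_fn()",
};

int watch_table(PGconn *conn)
{
    size_t i;
    for (i = 0; i < sizeof(setup) / sizeof(setup[0]); i++) {
        PGresult *res = PQexec(conn, setup[i]);
        if (PQresultStatus(res) != PGRES_COMMAND_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);
    }
    return 0;
}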

When a change does happen, PostgreSQL fires off NOTIFY as dictated by a rule the SQLNotify class created for us. As a result, our application is asynchronously notified of the change (that’s right, no polling!). When SQLNotifyDispatch receives one of these notifications, it calls the appropriate method in our worker class, INSERT(), UPDATE() or DELETE(), depending on which operation happened in the database. As you can see in mail/server.py, those three methods perform various operations, such as creating directories on the filesystem, setting up or changing quotas, or deleting directories.
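
For illustration, the receiving half might look like the sketch below in C with libpq; Epidemic does the equivalent from Python. The channel name matches the invented setup above; note that we block on the connection’s socket rather than polling the watched table:

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int listen_loop(PGconn *conn)
{
    PGresult *res = PQexec(conn, "LISTEN mail_users_change");
    if (PQresultStatus(res) != PGRES_COMMAND_OK) {
        PQclear(res);
        return -1;
    }
    PQclear(res);

    for (;;) {
        int sock = PQsocket(conn);
        fd_set rd;
        PGnotify *note;

        /* Sleep until the backend has something for us. */
        FD_ZERO(&rd);
        FD_SET(sock, &rd);
        if (select(sock + 1, &rd, NULL, NULL, NULL) < 0)
            return -1;

        PQconsumeInput(conn);
        while ((note = PQnotifies(conn)) != NULL) {
            /* Here we would read the log table and act on the rows. */
            printf("change signalled on channel %s\n", note->relname);
            PQfreemem(note);
        }
    }
}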

It is all just a bit more complicated than what is typically done when this type of functionality is needed, I admit. My personal experience dictates, however, that this type of approach actually ends up being much simpler and easier to maintain down the road, once one starts to grow an infrastructure.

Those who are tied to MySQL are not prevented from using a nearly identical solution. With the introduction of MySQL 5.0, it is all very possible save the asynchronous notification features, which, while elegant and conducive to good performance, are certainly not required. Keep in mind that polling log tables which are consistently pruned is in general going to be faster than checking a datestamp or even an indexed “updated” boolean column on a very large table.
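
A hypothetical poll-and-prune runner against such a log table, using the MySQL C API (table and column names are invented, and in MySQL 5.0 the log rows themselves can be produced by triggers, just as in the PostgreSQL version):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mysql/mysql.h>

void poll_loop(MYSQL *conn)
{
    for (;;) {
        MYSQL_RES *res;
        MYSQL_ROW row;
        long last = 0;

        if (mysql_query(conn,
            "SELECT id, op, username FROM mail_users_log ORDER BY id"))
            break;
        res = mysql_store_result(conn);
        while ((row = mysql_fetch_row(res)) != NULL) {
            last = atol(row[0]);
            printf("%s on %s\n", row[1], row[2]);  /* act on the change */
        }
        mysql_free_result(res);

        /* Prune only what we have handled, so the table stays small. */
        if (last > 0) {
            char del[128];
            snprintf(del, sizeof(del),
                     "DELETE FROM mail_users_log WHERE id <= %ld", last);
            mysql_query(conn, del);
        }
        sleep(5);
    }
}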

Receiving asynchronous notifications of database changes

The following pertains directly to a PostgreSQL-based internet mail solution, but be not discouraged. Most, if not all, of the techniques can be applied elsewhere, and I intend to explain how.

For about six months in 2003 I worked for a company based out of Spearfish, SD, called Altaire Enterprises, Inc., a small, floundering dialup ISP. This is where my first real experiences with PostgreSQL took place; up until this time I had been a die-hard user of MySQL for all of my database needs, whether as a backing store for a web application or otherwise. The largest project I took on while I was there was the implementation of a database-backed mail system. That is to say, all mail accounts were entirely virtual, no system accounts, and all data associated with them was stored in PostgreSQL. It could just as easily have been LDAP, but it wasn’t, and that isn’t the point of this little ditty I’m writing now. Architecting the mail system was the easy part; as I found, plenty of mail applications are perfectly happy to talk to PostgreSQL. There are two hard parts: performance and management.

I will get to performance later. Management comprises more than one would think at first. Obviously, you need some sort of frontend or tools to add users, domains, etc. to your mail database. In this case it was a collection of C applications, rather than the typical web frontend, because they could be executed by the internal billing system (Platypus). Considering the scope of this text, how the data is being entered is moot.

There is a flip-side to management, and specifically to putting information into the database in this scenario. There are cases where you need to know when modifications are made to that data, so that you can perform operations on disk, for instance. Such operations may be setting quotas, or creating a user’s Maildir if your MTA doesn’t handle that for you. The typical way to do this is to have your frontend perform that action as well, which is logical in a small installation, and is exactly the method used at Altaire. The C applications performed whatever on-disk operations were necessary.

What happens when your (web) frontend is hosted on a different server, though? I suppose you could export a set of web services, or similar, from the mailserver, allowing your frontend to connect to it and perform the necessary operations. What then if you have 10 mail servers? The frontend has to decide which one to connect to and do its thing, fine, but what if one of those web service runners dies? What do you do? Write additional logic to record failures, and play them back later? Oh lords, there must be a better way. Yes, yes.. Of course there is. There is another method that has been used time and again; it is proven and reliable. My first major experience with it was during the time I helped architect ITMom.com, a web hosting provider, back in 99/00. You create an additional column in the tables that will require external operations, and when modifications happen, you toggle that field on with your frontend. You can then have a runner that polls those tables watching for modifications, and performs the correct operation when one is found. This works well, but with a number of drawbacks. One, you are polling. Two, you are littering your carefully constructed, optimized and normalized schema with columns whose sole purpose is to notify external applications of changes. You could easily avoid this by logging changes to separate tables, sure, but then we come to three: you are still relying on your frontend to dictate what actions the backend should take.

Would it not be better to just let the management application do what it should do best, and insulate it from the underlying technical details? This becomes even more important in an organization where the developer(s), database administrator(s) and system administrator(s) are all different people.

Enter Epidemic, a proof-of-concept framework to easily make such things possible.

To Be Continued…