C++ doesn't define overflow on signed types, so use unsigned types instead.
Luckily, all right shifts were already on unsigned values. Some sign
extension was happening (when handling the remainder after processing
8-byte chunks); that is intentional and is preserved.
Caught by a debug build.
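A minimal sketch of the pattern described above (the helper name and exact byte layout are illustrative, not taken from the actual code): all shifting is done on uint64_t to avoid undefined behavior, while the first remainder byte is still sign-extended deliberately.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical helper: fold the remainder bytes (fewer than 8) of a token
// into a 64-bit value. All shifts operate on uint64_t, so there is no
// signed-overflow/shift UB; the intentional sign extension of the first
// remainder byte is made explicit by casting through int64_t.
inline uint64_t fold_remainder(const int8_t* bytes, size_t len) {
    // Sign-extend the first byte on purpose (mirrors the 8-byte-chunk path).
    uint64_t v = static_cast<uint64_t>(static_cast<int64_t>(bytes[0]));
    for (size_t i = 1; i < len; ++i) {
        // Shift and OR only on unsigned values.
        v = (v << 8) | static_cast<uint8_t>(bytes[i]);
    }
    return v;
}
```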
We don't follow Origin precisely when normalizing the token (converting a
zero to something else). We probably should, to allow direct import of
a database.
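The kind of normalization meant here could look like the following sketch. Everything in it is hypothetical (function name, the stand-in value, even the rule itself); Origin's exact normalization is precisely what the note says we do not yet reproduce.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch only: replace a "zero" token (here modeled as an
// empty byte string) with a fixed non-zero stand-in, so every key maps
// to a non-empty, comparable token. Origin's actual rule may differ,
// which is why a directly imported database could disagree with us.
inline std::string normalize_token(std::string token) {
    if (token.empty()) {
        token.push_back('\1');  // stand-in value, chosen arbitrarily here
    }
    return token;
}
```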
Rather than converting to unsigned longs for the fractional computations,
do them in bytes. The overhead of allocating the longs would be larger
than the computation itself, given that tokens are usually short (8 bytes)
and our bytes type stores them inline.
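A sketch of the byte-wise approach (assumed interface; the real code's signature and types differ): treat the token's bytes as the digits of a base-256 fraction in [0, 1) and accumulate them one at a time, never materializing the token as a multi-word unsigned integer. For a typical 8-byte token this touches each byte once and allocates nothing.

```cpp
#include <cassert>
#include <string>

// Sketch: accumulate the token's bytes as base-256 fractional digits,
// byte by byte, instead of first converting the token to a wide
// unsigned integer. No allocation beyond the token itself.
inline double token_fraction(const std::string& token) {
    double fraction = 0.0;
    double scale = 1.0 / 256.0;
    for (unsigned char b : token) {
        fraction += b * scale;
        scale /= 256.0;
    }
    return fraction;
}
```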
Origin uses abstract types for Token, for two reasons:
1. To distinguish between tokens for keys and tokens that
represent the end of a range.
2. To allow different implementations for tokens belonging to different
partitioners.
Using abstract types carries a penalty: indirection, more complex
memory management, and worse performance. We can eliminate it by using
a concrete type, and deferring any differences in the implementation
to the partitioner. End-of-range token representation is folded into
the token class.
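The concrete-type approach could be sketched like this (member and enumerator names are illustrative): one non-virtual token class, where the end-of-range marker that Origin models with a separate subclass becomes a kind tag inside the class, and partitioner-specific behavior lives in the partitioner rather than in Token.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Sketch: a single concrete token type. The "end of range" sentinels
// are folded in as a kind tag; no virtual dispatch, no heap-allocated
// subclass per token kind.
class token {
public:
    enum class kind : uint8_t { before_all_keys, key, after_all_keys };

    token(kind k, std::string data) : _kind(k), _data(std::move(data)) {}

    bool is_minimum() const { return _kind == kind::before_all_keys; }
    bool is_maximum() const { return _kind == kind::after_all_keys; }
    const std::string& data() const { return _data; }

    // Total order: the range sentinels sort before/after every key token;
    // key tokens compare by their byte representation.
    friend bool operator<(const token& a, const token& b) {
        if (a._kind != b._kind) {
            return a._kind < b._kind;
        }
        return a._data < b._data;
    }

private:
    kind _kind;
    std::string _data;
};
```

Folding the sentinels into the kind tag keeps tokens copyable by value and comparable without any indirection, at the cost of one extra byte per token.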