Functions are vectors! This perspective lets us apply the tools of linear algebra to computational problems from image and geometry processing to machine learning and light transport—and provides a natural explanation for Fourier series.
Let's explore: https://thenumb.at/Functions-are-Vectors
Notices by thenumbat@mastodon.gamedev.place
thenumbat@mastodon.gamedev.place's status on Monday, 17-Jul-2023 15:40:19 JST TheNumbat New post on neural graphics primitives: exploring how various network architectures behave when encoding an image. https://thenumb.at/Neural-Graphics/
Exploring Neural Graphics Primitives
from Max SlaterNeural fields have quickly become an interesting and useful application of machine learning to computer graphics. The fundamental idea is quite straightforward: we can use neural models to represent all sorts of digital signals, including images, light fields, signed distance functions, and volumes. Neural Networks Compression A Basic Network Activations Input Encodings Positional Encoding Instant Neural Graphics Primitives More Applications SDFs NeRFs References Neural Networks This post assumes a basic knowledge of neural networks, but let’s briefly recap how they work. A network with \(n\) inputs and \(m\) outputs approximates a function from \(\mathbb{R}^n \mapsto \mathbb{R}^m\) by feeding its input through a sequence of layers. The simplest type of layer is a fully-connected layer, which encodes an affine transform: \(x \mapsto Ax + b\), where \(A\) is the weight matrix and \(b\) is the bias vector. Layers other than the first (input) and last (output) are considered hidden layers, and can have arbitrary dimension. Each hidden layer is followed by applying a non-linear function \(x \mapsto f(x)\), known as the activation. This additional function enables the network to encode non-linear behavior. For example, a network from \(\mathbb{R}^3 \mapsto \mathbb{R}^3\), including two hidden layers of dimensions 5 and 4, can be visualized as follows. The connections between nodes represent weight matrix entries: Or equivalently in symbols (with dimensions): \[\begin{align*} h_{1(5)} &= f(A_{1(5\times3)}x_{(3)} + b_{1(5)})\\ h_{2(4)} &= f(A_{2(4\times5)}h_{1(5)} + b_{2(4)})\\ y_{(3)} &= A_{3(3\times4)}h_{2(4)} + b_{3(3)} \end{align*}\] This network has \(5\cdot3 + 5 + 4\cdot5 + 4 + 3\cdot4 + 3 = 59\) parameters that define its weight matrices and bias vectors. The process of training finds values for these parameters such that the network approximates another function \(g(x)\). Suitable parameters are typically found via stochastic gradient descent. Training requires a loss function measuring how much \(\text{NN}(x)\) differs from \(g(x)\) for all \(x\) in a training set. In practice, we might not know \(g\): given a large number of input-output pairs (e.g. images and their descriptions), we simply perform training and hope that the network will faithfully approximate \(g\) for inputs outside of the training set, too. We’ve only described the most basic kind of neural network: there’s a whole world of possibilities for new layers, activations, loss functions, optimization algorithms, and training paradigms. However, the fundamentals will suffice for this post. Compression When generalization is the goal, neural networks are typically under-constrained: they include far more parameters than strictly necessary to encode a function mapping the training set to its corresponding results. Through regularization, the training process hopes to use these ‘extra’ degrees of freedom to approximate the behavior of \(g\) outside the training set. However, insufficient regularization makes the network vulnerable to over-fitting—it may become extremely accurate on the training set, yet unable to handle new inputs. For this reason, researchers always evaluate neural networks on a test set of known data points that are excluded from the training process. However, in this post we’re actually going to optimize for over-fitting! Instead of approximating \(g\) on new inputs, we will only care about reproducing the right results on the training set. 
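Before moving on, here is roughly what the forward pass described above looks like in code: a minimal C++ sketch of the 3 → 5 → 4 → 3 example network, with placeholder (untrained) weights and illustrative function names, not a production implementation.

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdio>

// Weight matrix type: Out rows of In columns.
template<std::size_t Out, std::size_t In>
using Matrix = std::array<std::array<float, In>, Out>;

// One fully-connected layer: x -> Ax + b.
template<std::size_t Out, std::size_t In>
std::array<float, Out> affine(const Matrix<Out, In>& A,
                              const std::array<float, In>& x,
                              const std::array<float, Out>& b) {
    std::array<float, Out> y{};
    for(std::size_t i = 0; i < Out; i++) {
        y[i] = b[i];
        for(std::size_t j = 0; j < In; j++) y[i] += A[i][j] * x[j];
    }
    return y;
}

// ReLU activation, applied element-wise after each hidden layer.
template<std::size_t N>
std::array<float, N> relu(std::array<float, N> x) {
    for(float& v : x) v = std::max(v, 0.0f);
    return x;
}

int main() {
    // Placeholder (untrained) parameters for the 3 -> 5 -> 4 -> 3 example.
    Matrix<5, 3> A1{}; std::array<float, 5> b1{};
    Matrix<4, 5> A2{}; std::array<float, 4> b2{};
    Matrix<3, 4> A3{}; std::array<float, 3> b3{};

    std::array<float, 3> x = {0.1f, 0.2f, 0.3f};
    auto h1 = relu(affine(A1, x, b1));  // hidden layer 1, dimension 5
    auto h2 = relu(affine(A2, h1, b2)); // hidden layer 2, dimension 4
    auto y  = affine(A3, h2, b3);       // output layer, dimension 3
    std::printf("%f %f %f\n", y[0], y[1], y[2]);
}
```

With the mechanics recapped, back to over-fitting on purpose.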
This approach allows us to use over-constrained networks for (lossy) data compression. Let’s say we have some data we want to compress into a neural network. If we can parameterize the input by assigning each value a unique identifier, that gives us a training set. For example, we could parameterize a list of numbers by their index: \[[1, 1, 2, 5, 15, 52, 203] \mapsto \{ (0,1), (1,1), (2,2), (3,5), (4,15), (5,52), (6,203) \}\] …and train a network to associate the index \(n\) with the value \(a[n]\). To do so, we can simply define a loss function that measures the squared error of the result: \((\text{NN}(n) - a[n])^2\). We only care that the network produces \(a[n]\) for \(n\) in 0 to 6, so we want to “over-fit” as much as possible. But, where’s the compression? If we make the network itself smaller than the data set—while being able to reproduce it—we can consider the network to be a compressed encoding of the data. For example, consider this photo of a numbat: Perth Zoo The image consists of 512x512 pixels with three channels (r,g,b) each. Hence, we could naively say it contains 786,432 parameters. If a network with fewer parameters can reproduce it, the network itself can be considered a compressed version of the image. More rigorously, each pixel can be encoded in 3 bytes of data, so we’d want to be able to store the network in fewer than 768 kilobytes—but reasoning about size on-disk would require getting into the weeds of mixed precision networks, so let’s only consider parameter count for now. A Basic Network Let’s train a network to encode the numbat. To create the data set, we will associate each pixel with its corresponding \(x,y\) coordinate. That means our training set will consist of 262,144 examples of the form \((x,y) \mapsto (r,g,b)\). Before training, we’ll normalize the values such that \(x,y \in [-1,1]\) and \(r,g,b \in [0,1]\). \[\begin{align*} (-1.0000,-1.0000) &\mapsto (0.3098, 0.3686, 0.2471)\\ (-1.0000, -0.9961) &\mapsto (0.3059, 0.3686, 0.2471)\\ &\vdots\\ (0.9961, 0.9922) &\mapsto (0.3333, 0.4157, 0.3216)\\ (0.9961, 0.9961) &\mapsto (0.3412, 0.4039, 0.3216) \end{align*}\] Clearly, our network will need to be a function from \(\mathbb{R}^2 \mapsto \mathbb{R}^3\), i.e. have two inputs and three outputs. Just going off intuition, we might include three hidden layers of dimension 128. This architecture will only have 33,795 parameters—far fewer than the image itself. \[\mathbb{R}^2 \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^3\] Our activation function will be the standard non-linearity ReLU (rectified linear unit). Note that we only want to apply the activation after each hidden layer: we don’t want to clip our inputs or outputs by making them positive. \[\text{ReLU}(x) = \max\{x, 0\}\] Finally, our loss function will be the mean squared error between the network output and the expected color: \[\text{Loss}(x,y) = (\text{NN}(x,y) - \text{Image}_{xy})^2\] Now, let’s train. We’ll do 100 passes over the full data set (100 epochs) with a batch size of 1024. After training, we’ll need some way to evaluate how well the network encoded our image. Hopefully, the network will now return the proper color given a pixel coordinate, so we can re-generate our image by simply evaluating the network at each pixel coordinate in the training set and normalizing the results. And, what do we get? 
After a bit of hyperparameter tweaking (learning rate, optimization schedule, epochs), we get… Epochs: 100 Well, that sort of worked—you can see the numbat taking shape. But unfortunately, by epoch 100 our loss isn’t consistently decreasing: more training won’t make the result significantly better. Activations So far, we’ve only used the ReLU activation function. Taking a look at the output image, we see a lot of lines: various activations jump from zero to positive across these boundaries. There’s nothing fundamentally wrong with that—given a suitably large network, we could still represent the exact image. However, it’s difficult to recover high-frequency detail by summing functions clipped at zero. Sigmoids Because we know the network should always return values in \([0,1]\), one easy improvement is adding a sigmoid activation to the output layer. Instead of teaching the network to directly output an intensity in \([0,1]\), this will allow the network to compute values in \([-\infty,\infty]\) that are deterministically mapped to a color in \([0,1]\). We’ll apply the logistic function: \[\text{Sigmoid}(x) = \frac{1}{1 + e^{-x}}\] For all future experiments, we’ll use this function as an output layer activation. Re-training the ReLU network: Epochs: 100 The result looks significantly better, but it’s still got a lot of line-like artifacts. What if we just use the sigmoid activation for the other hidden layers, too? Re-training again… Epochs: 100 That made it worse. The important observation here is that changing the activation function can have a significant effect on reproduction quality. Several papers have been published comparing different activation functions in this context, so let’s try some of those. Sinusoids Implicit Neural Representations with Periodic Activation Functions This paper explores the usage of periodic functions (i.e. \(\sin,\cos\)) as activations, finding them well suited for representing signals like images, audio, video, and distance fields. In particular, the authors note that the derivative of a sinusoidal network is also a sinusoidal network. This observation allows them to fit differential constraints in situations where ReLU and TanH-based models entirely fail to converge. The proposed form of periodic activations is simply \(f(x) = \sin(\omega_0x)\), where \(\omega_0\) is a hyperparameter. Increasing \(\omega_0\) should allow the network to encode higher frequency signals. Let’s try changing our activations to \(f(x) = \sin(2\pi x)\): Epochs: 100 Well, that’s better than the ReLU/Sigmoid networks, but still not very impressive—an unstable training process lost much of the low frequency data. It turns out that sinusoidal networks are quite sensitive to how we initialize the weight matrices. To account for the extra scaling by \(\omega_0\), the paper proposes initializing weights beyond the first layer using the distribution \(\mathcal{U}\left(-\frac{1}{\omega_0}\sqrt{\frac{6}{\text{fan\_in}}}, \frac{1}{\omega_0}\sqrt{\frac{6}{\text{fan\_in}}}\right)\). Retraining with proper weight initialization: Epochs: 100 Now we’re getting somewhere—the output is quite recognizable. However, we’re still missing a lot of high frequency detail, and increasing \(\omega_0\) much more starts to make training unstable. Gaussians Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs This paper analyzes a variety of new activation functions, one of which is particularly suited for image reconstruction. 
It’s a gaussian of the form \(f(x) = e^{\frac{-x^2}{\sigma^2}}\), where \(\sigma\) is a hyperparameter. Here, a smaller \(\sigma\) corresponds to a higher bandwidth. The authors demonstrate good results using gaussian activations: reproduction as good as sinusoids, but without dependence on weight initialization or special input encodings (which we will discuss in the next section). So, let’s train our network using \(f(x) = e^{-4x^2}\): Epochs: 100 Well, that’s better than the initial sinusoidal network, but worse than the properly initialized one. However, something interesting happens if we scale up our input coordinates from \([-1,1]\) to \([-16,16]\): Epochs: 100 The center of the image is now reproduced almost perfectly, but the exterior still lacks detail. It seems that while gaussian activations are more robust to initialization, the distribution of inputs can still have a significant effect. Beyond the Training Set Although we’re only measuring how well each network reproduces the image, we might wonder what happens outside the training set. Luckily, our network is still just a function \(\mathbb{R}^2\mapsto\mathbb{R}^3\), so we can evaluate it on a larger range of inputs to produce an “out of bounds” image. Another purported benefit of using gaussian activations is sensible behavior outside of the training set, so let’s compare. Evaluating each network on \([-2,2]\times[-2,2]\): ReLU Sinusoid Gaussian That’s pretty cool—the gaussian network ends up representing a sort-of-edge-clamped extension of the image. The sinusoid turns into a low-frequency soup of common colors. The ReLU extension has little to do with the image content, and based on running training a few times, is very unstable. Input Encodings So far, our results are pretty cool, but not high fidelity enough to compete with the original image. Luckily, we can go further: recent research work on high-quality neural primitives has relied on not only better activation functions and training schemes, but new input encodings. When designing a network, an input encoding is essentially just a fixed-function initial layer that performs some interesting transformation before the fully-connected layers take over. It turns out that choosing a good initial transform can make a big difference. Positional Encoding The current go-to encoding is known as positional encoding, or otherwise positional embedding, or even sometimes fourier features. This encoding was first used in a graphics context by the original neural radiance fields paper. It takes the following form: \[x \mapsto \left[x, \sin(2^0\pi x), \cos(2^0\pi x), \dots, \sin(2^L\pi x), \cos(2^L\pi x) \right]\] Where \(L\) is a hyperparameter controlling bandwidth (higher \(L\), higher frequency signals). The unmodified input \(x\) may or may not be included in the output. When using this encoding, we transform our inputs by a hierarchy of sinusoids of increasing frequency. This brings to mind the fourier transform, hence the “fourier features” moniker, but note that we are not using the actual fourier transform of the training signal. So, let’s try adding a positional encoding to our network with \(L=6\). Our new network will have the following architecture: \[\mathbb{R}^2 \mapsto_{enc6} \mathbb{R}^{30} \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^{128} \mapsto_{fc} \mathbb{R}^3\] Note that the sinusoids are applied element-wise to our two-dimensional \(x\), so we end up with \(30\) input dimensions. 
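For concreteness, the encoding itself is only a few lines. Here is a sketch for our two-dimensional input with \(L=6\), producing the 30 values counted above (the function name and output layout are illustrative choices):

```cpp
#include <cmath>
#include <vector>

// Positional encoding sketch for a 2D input:
// (x, y) -> [x, y, sin(2^0 pi x), cos(2^0 pi x), ..., sin(2^L pi y), cos(2^L pi y)]
// With L = 6 this yields 2 + 2*2*(L+1) = 30 values.
std::vector<float> positional_encoding(float x, float y, int L = 6) {
    std::vector<float> out = {x, y};
    for(int l = 0; l <= L; l++) {
        float freq = 3.14159265358979f * (float)(1u << l); // 2^l * pi
        for(float v : {x, y}) {
            out.push_back(std::sin(freq * v));
            out.push_back(std::cos(freq * v));
        }
    }
    return out;
}
```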
This means the first fully-connected layer will have \(30\cdot128\) weights instead of \(2\cdot128\), so we are adding some parameters. However, the additional weights don’t make a huge difference in of themselves. The ReLU network improves dramatically: Epochs: 100 The gaussian network also improves significantly, now only lacking some high frequency detail: Epochs: 100 Finally, the sinusoidal network… fails to converge! Reducing \(\omega_0\) to \(\pi\) lets training succeed, and produces the best result so far. However, the unstable training behavior is another indication that sinusoidal networks are particularly temperamental. Epochs: 100 As a point of reference, saving this model’s parameters in half precision (which does not degrade quality) requires only 76kb on disk. That’s smaller than a JPEG encoding of the image at ~111kb, though not quite as accurate. Unfortunately, it’s still missing some fine detail. Instant Neural Graphics Primitives Instant Neural Graphics Primitives with a Multiresolution Hash Encoding This paper made a splash when it was released early this year and eventually won a best paper award at SIGGRAPH 2022. It proposes a novel input encoding based on a learned multi-resolution hashing scheme. By combining their encoding with a fully-fused (i.e. single-shader) training and evaluation implementation, the authors are able to train networks representing gigapixel-scale images, SDF geometry, volumes, and radiance fields in near real-time. In this post, we’re only interested in the encoding, though I wouldn’t say no to instant training either. The Multiresolution Grid We begin by breaking up the domain into several square grids of increasing resolution. This hierarchy can be defined in any number of dimensions \(d\). Here, our domain will be two-dimensional: Note that the green and red grids also tile the full image; their full extent is omitted for clarity. The resolution of each grid is a constant multiple of the previous level, but note that the growth factor does not have to be two. In fact, the authors derive the scale after choosing the following three hyperparameters: \[\begin{align*} L &:= \text{Number of Levels} \\ N_{\min} &:= \text{Coarsest Resolution} \\ N_{\max} &:= \text{Finest Resolution} \end{align*}\] And compute the growth factor via: \[b := \exp\left(\frac{\ln N_{\max} - \ln N_{\min}}{L - 1}\right)\] Therefore, the resolution of grid level \(\ell\) is \(N_\ell := \lfloor N_{\min} b^\ell \rfloor\). Step 1 - Find Cells After choosing a hierarchy of grids, we can define the input transformation. We will assume the input \(\mathbf{x}\) is given in \([0,1]\). For each level \(\ell\), first scale \(\mathbf{x}\) by \(N_\ell\) and round up/down to find the cell containing \(\mathbf{x}\). For example, when \(N_\ell = 2\): Given the bounds of the cell containing \(\mathbf{x}\), we can then compute the integer coordinates of each corner of the cell. In this example, we end up with four vertices: \([0,0], [0,1], [1,0], [1,1]\). Step 2 - Hash Coordinates We then hash each corner coordinate. The authors use the following function: \[h(\mathbf{x}) = \bigoplus_{i=1}^d x_i\pi_i\] Where \(\oplus\) denotes bit-wise exclusive-or and \(\pi_i\) are large prime numbers. To improve cache coherence, the authors set \(\pi_1 = 1\), as well as \(\pi_2 = 2654435761\) and \(\pi_3 = 805459861\). Because our domain is two-dimensional, we only need the first two coefficients: \[h(x,y) = x \oplus (2654435761 y)\] The hash is then used as an index into a hash table. 
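In code, the two-dimensional hash might look like the following sketch, with the result reduced to a slot index modulo the table size \(T\) introduced just below (the function name is illustrative):

```cpp
#include <cstdint>

// Spatial hash sketch for 2D integer grid coordinates, as described above:
// h(x, y) = x XOR (pi_2 * y), then reduced to a slot index modulo T.
uint32_t grid_hash(uint32_t x, uint32_t y, uint32_t T) {
    constexpr uint32_t PI_2 = 2654435761u; // second large prime (pi_1 = 1)
    return (x ^ (PI_2 * y)) % T;
}
```

When \(T\) is a power of two, the modulo can of course be replaced with a bit mask, the same trick used for table sizes later in this post series.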
Each grid level has a corresponding hash table described by two more hyperparameters: \[\begin{align*} T &:= \text{Hash Table Slots} \\ F &:= \text{Features Per Slot} \end{align*}\] To map the hash to a slot in each level’s hash table, we simply modulo by \(T\). Each slot contains \(F\) learnable parameters. During training, we will backpropagate gradients all the way to the hash table entries, dynamically optimizing them to learn a good input encoding. What do we do about hash collisions? Nothing—the training process will automatically find an encoding that is robust to collisions. Unfortunately, this makes the encoding difficult to scale down to very small hash table sizes—eventually, simply too many grid points are assigned to each slot. Finally, the authors note that training is not particularly sensitive to how the hash table entries are initialized, but settle on an initial distribution \(\mathcal{U}(-0.0001,0.0001)\). Step 3 - Interpolate Once we’ve retrieved the \(2^d\) hash table slots corresponding to the current grid cell, we linearly interpolate their values to define a result at \(\mathbf{x}\) itself. In the two dimensional case, we use bi-linear interpolation: interpolate horizontally twice, then vertically once (or vice versa). The authors note that interpolating the discrete values is necessary for optimization with gradient descent: it makes the encoding function continuous. Step 4 - Concatenate At this point, we have mapped \(\mathbf{x}\) to \(L\) different vectors of length \(F\)—one for each grid level \(\ell\). To combine all of this information, we simply concatenate the vectors to form a single encoding of length \(LF\). The encoded result is then fed through a simple fully-connected neural network with ReLU activations (i.e. a multilayer perceptron). This network can be quite small: the authors use two hidden layers with dimension 64. Results Let’s implement the hash encoding and compare it with the earlier methods. The sinusoidal network with positional encoding might be hard to beat, given it uses only ~35k parameters. We will first choose \(L = 16\), \(N_{\min} = 16\), \(N_{\max} = 256\), \(T = 1024\), and \(F = 2\). Combined with a two hidden layer, dimension 64 MLP, these parameters define a model with 43,395 parameters. On disk, that’s about 90kb. Epochs: 30 That’s a good result, but not obviously better than we saw previously: when restricted to 1024-slot hash tables, the encoding can’t be perfect. However, the training process converges impressively quickly, producing a usable image after only three epochs and fully converging in 20-30. The sinusoidal network took over 80 epochs to converge, so fast convergence alone is a significant upside. The authors recommend a hash table size of \(2^{14}-2^{24}\): their use cases involve representing large volumetric data sets rather than 512x512 images. If we scale up our hash table to just 4096 entries (a parameter count of ~140k), the result at last becomes difficult to distinguish from the original image. The other techniques would have required even more parameters to achieve this level of quality. Epochs: 30 The larger hash maps result in a ~280kb model on disk, which, while much smaller than the uncompressed image, is over twice as large as an equivalent JPEG. The hash encoding really shines when representing much higher resolution images: at gigapixel scale it handily beats the other methods in reproduction quality, training time, and model size. 
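Putting steps 1 through 4 together, one level of the encoding might look like the following sketch (the function signature and feature-table layout are my own choices, not the paper's reference code; inputs are assumed to lie in \([0,1)\)):

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

// Sketch of one level of the multiresolution hash encoding, with F = 2
// features per slot. `features` is this level's hash table of learnable
// parameters; x and y are assumed to lie in [0, 1).
std::array<float, 2> encode_level(float x, float y, uint32_t N_level,
                                  const std::vector<std::array<float, 2>>& features) {
    const uint32_t T = (uint32_t)features.size();

    // Step 1: scale by the level resolution and find the containing cell.
    float fx = x * N_level, fy = y * N_level;
    uint32_t x0 = (uint32_t)std::floor(fx), y0 = (uint32_t)std::floor(fy);
    float tx = fx - x0, ty = fy - y0; // position within the cell

    // Step 2: hash each corner (same h(x,y) = x XOR 2654435761*y as above)
    // and fetch its F = 2 learnable features.
    auto corner = [&](uint32_t cx, uint32_t cy) {
        return features[(cx ^ (2654435761u * cy)) % T];
    };
    auto c00 = corner(x0, y0),     c10 = corner(x0 + 1, y0);
    auto c01 = corner(x0, y0 + 1), c11 = corner(x0 + 1, y0 + 1);

    // Step 3: bilinearly interpolate the corner features to the query point.
    std::array<float, 2> out;
    for(int f = 0; f < 2; f++) {
        float bottom = c00[f] * (1 - tx) + c10[f] * tx;
        float top    = c01[f] * (1 - tx) + c11[f] * tx;
        out[f] = bottom * (1 - ty) + top * ty;
    }
    return out;
}
// Step 4: concatenate the outputs of all L levels into a single vector of
// length L*F and feed it to the small ReLU MLP.
```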
(These results were compared against the reference implementation using the same parameters—the outputs were equivalent, except for my version being many times slower!) More Applications So far, we’ve only explored ways to represent images. But, as mentioned at the start, neural models can encode any dataset we can express as a function. In fact, current research work primarily focuses on representing geometry and light fields—images are the easy case. Neural SDFs One relevant way to represent surfaces is using signed distance functions, or SDFs. At every point \(\mathbf{x}\), an SDF \(f\) computes the distance between \(\mathbf{x}\) and the closest point on the surface. When \(\mathbf{x}\) is inside the surface, \(f\) instead returns the negative distance. Hence, the surface is defined as the set of points such that \(f(\mathbf{x}) = 0\), also known as the zero level set of \(f\). SDFs can be very simple to define and combine, and when combined with sphere tracing, can create remarkable results in a few lines of code. Hugh Kennedy Because SDFs are simply functions from position to distance, they’re easy to represent with neural models: all of the techniques we explored for images also apply when representing distance fields. Traditionally, graphics research and commercial tools have favored explicit geometric representations like triangle meshes—and broadly still do—but the advent of ML models is bringing implicit representations like SDFs back into the spotlight. Luckily, SDFs can be converted into meshes using methods like marching cubes and dual contouring, though this introduces discretization error. Implicit Neural Representations with Periodic Activation Functions When working with neural representations, one may not even require proper distances—it may be sufficient that \(f(\mathbf{x}) = 0\) describes the surface. Techniques in this broader class are known as level set methods. Another SIGGRAPH 2022 best paper, Spelunking the Deep, defines relatively efficient closest point, collision, and ray intersection queries on arbitrary neural surfaces. Neural Radiance Fields In computer vision, another popular use case is modelling light fields. A light field can be encoded in many ways, but the model proposed in the original NeRF paper maps a 3D position and 2D angular direction to RGB radiance and volumetric density. This function provides enough information to synthesize images of the field using volumetric ray marching. (Basically, trace a ray in small steps, adding in radiance and attenuating based on density.) \[x, y, z, \theta, \phi \mapsto R, G, B, \sigma\] Because NeRFs are trained to match position and direction to radiance, one particularly successful use case has been using a set of 2D photographs to learn a 3D representation of a scene. Though NeRFs are not especially conducive to recovering geometry and materials, synthesizing images from entirely new angles is relatively easy—the model defines the full light field. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis There has been an explosion of NeRF-related papers over the last two years, many of which choose different parameterizations of the light field. Some even ditch the neural encoding entirely. Using neural models to represent light fields also helped kickstart research on the more general topic of differentiable rendering, which seeks to separately recover scene parameters like geometry, materials, and lighting by reversing the rendering process via gradient descent. 
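For reference, the volumetric ray marching described in that parenthetical boils down to something like the following sketch, where the `field` callback stands in for the trained network (all names here are hypothetical, and real NeRF renderers use more careful sampling and compositing):

```cpp
#include <array>
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// field(position, direction) -> (R, G, B, sigma). In a NeRF this would be the
// trained network; here it is just a placeholder callback.
using Field = std::function<std::array<float, 4>(Vec3, Vec3)>;

// Simplified volumetric ray marching: step along the ray, accumulating
// radiance weighted by the remaining transmittance, which decays with density.
std::array<float, 3> march(const Field& field, Vec3 origin, Vec3 dir,
                           float t_near, float t_far, int steps) {
    std::array<float, 3> color = {0, 0, 0};
    float transmittance = 1.0f;
    float dt = (t_far - t_near) / steps;
    for(int i = 0; i < steps; i++) {
        float t = t_near + (i + 0.5f) * dt;
        Vec3 p = {origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
        auto [r, g, b, sigma] = field(p, dir);
        float alpha = 1.0f - std::exp(-sigma * dt); // fraction absorbed or scattered here
        color[0] += transmittance * alpha * r;
        color[1] += transmittance * alpha * g;
        color[2] += transmittance * alpha * b;
        transmittance *= 1.0f - alpha;
    }
    return color;
}
```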
Neural Radiance Cache In real-time rendering, another exciting application is neural radiance caching. NRC brings NeRF concepts into the real-time ray-tracing world: a model is trained to encode a radiance field at runtime, dynamically learning from small batches of samples generated every frame. The resulting network is used as an intelligent cache to estimate incoming radiance for secondary rays—without recursive path tracing. Real-time Neural Radiance Caching for Path Tracing References This website attempts to collate all recent literature on neural fields: Neural Fields in Visual Computing and Beyond Papers referenced in this article: Implicit Neural Representations with Periodic Activation Functions Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Instant Neural Graphics Primitives with a Multiresolution Hash Encoding Spelunking the Deep: Guaranteed Queries on General Neural Implicit Surfaces via Range Analysis PlenOctrees For Real-time Rendering of Neural Radiance Fields Real-time Neural Radiance Caching for Path Tracing More particularly cool papers: Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes Learning Smooth Neural Functions via Lipschitz Regularization A Level Set Theory for Neural Implicit Evolution under Explicit Flows Implicit Neural Spatial Representations for Time-dependent PDEs Intrinsic Neural Fields: Learning Functions on Manifolds
thenumbat@mastodon.gamedev.place's status on Monday, 17-Jul-2023 15:40:00 JST TheNumbat Spherical integration is ubiquitous in graphics - and a common source of confusion for beginners. Short new post explaining where that sin(θ) comes from: https://thenumb.at/Spherical-Integration/
Spherical Integration
from Max SlaterOr, where does that \(\sin\theta\) come from? Integrating functions over spheres is a ubiquitous task in graphics—and a common source of confusion for beginners. In particular, understanding why integration in spherical coordinates requires multiplying by \(\sin\theta\) takes some thought. The Confusion So, we want to integrate a function \(f\) over the unit sphere. For simplicity, let’s assume \(f = 1\). Integrating \(1\) over any surface computes the area of that surface: for a unit sphere, we should end up with \(4\pi\). Integrating over spheres is much easier in the eponymous spherical coordinates, so let’s define \(f\) in terms of \(\theta, \phi\): \(\theta\) ranges from \(0\) to \(\pi\), representing latitude, and \(\phi\) ranges from \(0\) to \(2\pi\), representing longitude. Naively, we might try to integrate \(f\) by ranging over the two parameters: \[\begin{align*} \int_{0}^{2\pi}\int_0^\pi 1\, d\theta d\phi &= \int_0^{2\pi} \theta\Big|_0^\pi\, d\phi\\ &= \int_0^{2\pi} \pi\, d\phi \\ &= \pi \left(\phi\Big|_0^{2\pi}\right) \\ &= 2\pi^2 \end{align*}\] That’s not \(4\pi\)—we didn’t integrate over the sphere! All we did was integrate over a flat rectangle of height \(\pi\) and width \(2\pi\). One way to conceptualize this integral is by adding up the differential area \(dA\) of many small rectangular patches of the domain. Each patch has area \(dA = d\theta d\phi\), so adding them up results in the area of the rectangle. What we actually want is to add up the areas of patches on the sphere, where they are smaller. In the limit (small \(d\theta,d\phi\)), the spherical patch \(d\mathcal{S}\) is a factor of \(\sin\theta\) smaller than the rectangular patch \(dA\).1 Intuitively, the closer to the poles the patch is, the smaller its area. When integrating over the sphere \(\mathcal{S}\), we call the area differential \(d\mathcal{S} = \sin\theta\, d\theta d\phi\).2 Let’s try using it: \[\begin{align*} \iint_\mathcal{S} 1\,d\mathcal{S} &= \int_{0}^{2\pi}\int_0^\pi \sin\theta\, d\theta d\phi\\ &= \int_0^{2\pi} (-\cos\theta)\Big|_0^\pi\, d\phi\\ &= \int_0^{2\pi} 2\, d\phi \\ &= 2 \left(\phi\Big|_0^{2\pi}\right) \\ &= 4\pi \end{align*}\] It works! If you just wanted the intuition, you can stop reading here. But why \(\sin\theta\)? It’s illustrative to analyze how \(\sin\theta\) arises from parameterizing the cartesian (\(x,y,z\)) sphere using spherical coordinates. In cartesian coordinates, a unit sphere is defined by \(x^2 + y^2 + z^2 = 1\). It’s possible to formulate a cartesian surface integral based on this definition, but it would be ugly. \[\iint_{x^2+y^2+z^2=1} f dA = \text{?}\] Instead, we can perform a change of coordinates from cartesian to spherical coordinates. To do so, we will define \(\Phi : \theta,\phi \mapsto x,y,z\): $$ \begin{align*} \Phi(\theta,\phi) = \begin{bmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{bmatrix} \end{align*} $$ The function \(\Phi\) is a parameterization of the unit sphere \(\mathcal{S}\). We can check that it satisfies \(x^2+y^2+z^2=1\) regardless of \(\theta\) and \(\phi\): \[\begin{align*} |\Phi(\theta,\phi)|^2 &= (\sin\theta\cos\phi)^2 + (\sin\theta\sin\phi)^2 + \cos^2\theta\\ &= \sin^2\theta(\cos^2\phi+\sin^2\phi) + \cos^2\theta \\ &= \sin^2\theta + \cos^2\theta\\ &= 1 \end{align*}\] Applying \(\Phi\) to the rectangular domain \(\theta\in[0,\pi],\phi\in[0,2\pi]\) in fact describes all of \(\mathcal{S}\), giving us a much simpler parameterization of the integral. 
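Before deriving the area element formally, a quick numerical sanity check: summing \(\sin\theta\, d\theta\, d\phi\) over the rectangular domain with the midpoint rule should reproduce the sphere's area of \(4\pi \approx 12.566\). A small sketch (the constants and loop structure are my own, not code from the post):

```cpp
#include <cmath>
#include <cstdio>

// Midpoint-rule check that integrating sin(theta) over
// theta in [0, pi], phi in [0, 2*pi] gives the sphere's area, 4*pi.
int main() {
    const double PI = 3.14159265358979323846;
    const int n = 1000;
    double d_theta = PI / n, d_phi = 2 * PI / n, area = 0.0;
    for(int i = 0; i < n; i++) {
        double theta = (i + 0.5) * d_theta;
        for(int j = 0; j < n; j++) {
            // f = 1, so the integrand does not depend on phi.
            area += std::sin(theta) * d_theta * d_phi; // dS = sin(theta) dtheta dphi
        }
    }
    std::printf("area = %f (4*pi = %f)\n", area, 4 * PI);
}
```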
To integrate over \(d\theta\) and \(d\phi\), we also need to compute how they relate to \(d\mathcal{S}\). Luckily, there’s a formula that holds for any parametric surface \(\mathbf{r}(u,v)\) describing a three-dimensional domain \(\mathcal{R}\):3 \[d\mathcal{R} = \left\lVert \frac{\partial \mathbf{r}}{\partial u} \times \frac{\partial \mathbf{r}}{\partial v} \right\rVert du dv\] The two partial derivatives represent tangent vectors on \(\mathcal{R}\) along the \(u\) and \(v\) axes, respectively. Intuitively, each tangent vector describes how a \(u,v\) patch is stretched along the corresponding axis when mapped onto \(\mathcal{R}\). The magnitude of their cross product then computes the area of the resulting parallelogram. Finally, we can actually apply the change of coordinates: \[\begin{align*} \iint_\mathcal{S} f\, d\mathcal{S} &= \int_0^{2\pi}\int_0^\pi f(\Phi(\theta,\phi)) \left\lVert \frac{\partial\Phi}{\partial\theta} \times \frac{\partial\Phi}{\partial\phi} \right\rVert d\theta d\phi \end{align*}\] We just need to compute the area term: \[\begin{align*} \frac{\partial\Phi}{\partial\theta} &= \begin{bmatrix}\cos\phi\cos\theta & \sin\phi\cos\theta & -\sin\theta\end{bmatrix}\\ \frac{\partial\Phi}{\partial\phi} &= \begin{bmatrix}-\sin\phi\sin\theta & \cos\phi\sin\theta & 0\end{bmatrix}\\ \frac{\partial\Phi}{\partial\theta} \times \frac{\partial\Phi}{\partial\phi} &= \begin{bmatrix}\sin^2\theta\cos\phi & \sin^2\theta\sin\phi & \cos^2\phi\cos\theta\sin\theta + \sin^2\phi\cos\theta\sin\theta\end{bmatrix}\\ &= \begin{bmatrix}\sin^2\theta\cos\phi & \sin^2\theta\sin\phi & \cos\theta\sin\theta \end{bmatrix}\\ \left\lVert\frac{\partial\Phi}{\partial\theta} \times \frac{\partial\Phi}{\partial\phi}\right\rVert &= \sqrt{\sin^4\theta(\cos^2\phi+\sin^2\phi) + \cos^2\theta\sin^2\theta}\\ &= \sqrt{\sin^2\theta(\sin^2\theta + \cos^2\theta)}\\ &= \lvert\sin\theta\rvert \end{align*}\] Since \(\theta\in[0,\pi]\), we can say \(\lvert\sin\theta\rvert = \sin\theta\). That means \(d\mathcal{S} = \sin\theta\, d\theta d\phi\)! Our final result is the familiar spherical integral: \[\iint_\mathcal{S} f dS = \int_0^{2\pi}\int_0^\pi f(\theta,\phi) \sin\theta\, d\theta d\phi\] Footnotes Interestingly, if we knew the formula for the area of a spherical patch, we could do some slightly illegal math to derive the area form: \[\begin{align*} dA &= (\phi_1-\phi_0)(\cos\theta_0-\cos\theta_1)\\ &= ((\phi + d\phi) - \phi)(\cos(\theta)-\cos(\theta+d\theta))\\ &= d\phi d\theta \frac{(\cos(\theta)-\cos(\theta+d\theta))}{d\theta}\\ &= d\phi d\theta \sin\theta \tag{Def. derivative} \end{align*}\] ↩ In graphics, also known as differential solid angle, \(d\mathbf{\omega} = \sin\theta\, d\theta d\phi\). ↩ Even more generally, the scale is related to the inner product on the tangent space of the surface. This inner product is defined by the first fundamental form \(\mathrm{I}\) of the surface, and the scale factor is \(\sqrt{\det\mathrm{I}}\), regardless of dimension. For surfaces immersed in 3D, this simplifies to the cross product mentioned above. Changes of coordinates that don’t change dimensionality have a more straightforward scale factor: it’s the determinant of their Jacobian.4 ↩ These topics are often more easily understood using the language of exterior calculus, where integrals can be uniformly expressed regardless of dimension. I will write about it at some point. ↩
thenumbat@mastodon.gamedev.place's status on Monday, 17-Jul-2023 15:39:25 JST TheNumbat Hash tables are a core component of most programs, but their many implementations aren't created equal.
By benchmarking a variety of strategies, I promote Robin Hood Linear Probing and Two-Way Chaining—both perform much better than std::unordered_map.
https://thenumb.at/Hashtables
Optimizing Open Addressing
from Max SlaterYour default hash table should be open-addressed, using Robin Hood linear probing with backward-shift deletion. When prioritizing deterministic performance over memory efficiency, two-way chaining is also a good choice. Code for this article may be found on GitHub. Open Addressing vs. Separate Chaining Benchmark Setup Discussion Separate Chaining Linear Probing Quadratic Probing Double Hashing Robin Hood Linear Probing Two Way Chaining Unrolling, Prefetching, and SIMD Benchmark Data Open Addressing vs. Separate Chaining Most people first encounter hash tables implemented using separate chaining, a model simple to understand and analyze mathematically. In separate chaining, a hash function is used to map each key to one of \(K\) buckets. Each bucket holds a linked list, so to retrieve a key, one simply traverses its corresponding bucket. Given a hash function drawn from a universal family, inserting \(N\) keys into a table with \(K\) buckets results in an expected bucket size of \(\frac{N}{K}\), a quantity also known as the table’s load factor. Assuming (reasonably) that \(\frac{N}{K}\) is bounded by a constant, all operations on such a table can be performed in expected constant time. Unfortunately, this basic analysis doesn’t consider the myriad factors that go into implementing an efficient hash table on a real computer. In practice, hash tables based on open addressing can provide superior performance, and their limitations can be worked around in nearly all cases. Open Addressing In an open-addressed table, each bucket only contains a single key. Collisions are handled by placing additional keys elsewhere in the table. For example, in linear probing, a key is placed in the first open bucket starting from the index it hashes to. Unlike in separate chaining, open-addressed tables may be represented in memory as a single flat array. A priori, we should expect operations on flat arrays to offer higher performance than those on linked structures due to more coherent memory accesses.1 However, if the user wants to insert more than \(K\) keys into an open-addressed table with \(K\) buckets, the entire table must be resized. Typically, a larger array is allocated and all current keys are moved to the new table. The size is grown by a constant factor (e.g. 2x), so insertions occur in amortized constant time. Why Not Separate Chaining? In practice, the standard libraries of many languages provide separately chained hash tables, such as C++’s std::unordered_map. At least in C++, this choice is now considered to have been a mistake—it violates C++’s principle of not paying for features you didn’t ask for. Separate chaining still has some purported benefits, but they seem unconvincing: Separately chained tables don’t require any linear-time operations. First, this isn’t true. To maintain a constant load factor, separate chaining hash tables also have to resize once a sufficient number of keys are inserted, though the limit can be greater than \(K\). Most separately chained tables, including std::unordered_map, only provide amortized constant time insertion. Second, open-addressed tables can be incrementally resized. When the table grows, there’s no reason we have to move all existing keys right away. Instead, every subsequent operation can move a fixed number of old keys into the new table. Eventually, all keys will have been moved and the old table may be deallocated. This technique incurs overhead on every operation during a resize, but none will require linear time. 
When each (key + value) entry is allocated separately, entries can have stable addresses. External code can safely store pointers to them, and large entries never have to be copied. This property is easy to support by adding a single level of indirection. That is, allocate each entry separately, but use an open-addressed table to map keys to pointers-to-entries. In this case, resizing the table only requires copying pointers, rather than entire entries. Lower memory overhead. It’s possible for a separately chained table to store the same data in less space than an open-addressed equivalent, since flat tables necessarily waste space storing empty buckets. However, matching high-quality open addressing schemes (especially when storing entries indirectly) requires a load factor greater than one, degrading query performance. Further, individually allocating nodes wastes memory due to heap allocator overhead and fragmentation. In fact, the only situation where open addressing truly doesn’t work is when the table cannot allocate and entries cannot be moved. These requirements can arise when using intrusive linked lists, a common pattern in kernel data structures and embedded systems with no heap. Benchmark Setup For simplicity, let us only consider tables mapping 64-bit integer keys to 64-bit integer values. Results will not take into account the effects of larger values or non-trivial key comparison, but should still be representative, as keys can often be represented in 64 bits and values are often pointers. We will evaluate tables using the following metrics: Time to insert \(N\) keys into an empty table. Time to erase \(N\) existing keys. Time to look up \(N\) existing keys. Time to look up \(N\) missing keys.2 Average probe length for existing & missing keys. Maximum probe length for existing & missing keys. Memory amplification in a full table (total size / size of data). Each open-addressed table was benchmarked at 50%, 75%, and 90% load factors. Every test targeted a table capacity of 8M entries, setting \(N = \lfloor 8388608 * \text{Load Factor} \rfloor - 1\). Fixing the table capacity allows us to benchmark behavior at exactly the specified load factor, avoiding misleading results from tables that have recently grown. The large number of keys causes each test to unpredictably traverse a data set much larger than L3 cache (128MB+), so the CPU3 cannot effectively cache or prefetch memory accesses. Table Sizes All benchmarked tables use exclusively power-of-two sizes. This invariant allows us to translate hashes to array indices with a fast bitwise AND operation, since: h % 2**n == h & (2**n - 1) Power-of-two sizes also admit a neat virtual memory trick,4 but that optimization is not included here. However, power-of-two sizes can cause pathological behavior when paired with some hash functions, since indexing simply throws away the hash’s high bits. An alternative strategy is to use prime sized tables, which are significantly more robust to the choice of hash function. For the purposes of this post, we have a fixed hash function and non-adversarial inputs, so prime sizes were not used.5 Hash Function All benchmarked tables use the squirrel3 hash function, a fast integer hash that (as far as I can tell) only exists in this GDC talk on noise-based RNG. 
Here, it is adapted to 64-bit integers by choosing three large 64-bit primes:

```cpp
uint64_t squirrel3(uint64_t at) {
    constexpr uint64_t BIT_NOISE1 = 0x9E3779B185EBCA87ULL;
    constexpr uint64_t BIT_NOISE2 = 0xC2B2AE3D27D4EB4FULL;
    constexpr uint64_t BIT_NOISE3 = 0x27D4EB2F165667C5ULL;
    at *= BIT_NOISE1;
    at ^= (at >> 8);
    at += BIT_NOISE2;
    at ^= (at << 8);
    at *= BIT_NOISE3;
    at ^= (at >> 8);
    return at;
}
```

I make no claims regarding the quality or robustness of this hash function, but observe that it’s cheap, it produces the expected number of collisions in power-of-two tables, and it passes smhasher when applied bytewise.

Other Benchmarks

Several less interesting benchmarks were also run, all of which exhibited identical performance across equal-memory open-addressed tables and much worse performance for separate chaining:

- Time to look up each element by following a Sattolo cycle (2-4x).6
- Time to clear the table (100-200x).
- Time to iterate all values (3-10x).

Discussion

Separate Chaining

To get our bearings, let’s first implement a simple separate chaining table and benchmark it at various load factors.

| Separate Chaining | 50% Load | 100% Load | 200% Load | 500% Load |
|---|---|---|---|---|
| Insert (ns/key) | 115 | 113 | 120 | 152 |
| Erase (ns/key) | 110 | 128 | 171 | 306 |
| Lookup, existing / missing (ns/key) | 28 / 28 | 35 / 40 | 50 / 68 | 102 / 185 |
| Average probe, existing / missing | 1.25 / 0.50 | 1.50 / 1.00 | 2.00 / 2.00 | 3.50 / 5.00 |
| Max probe, existing / missing | 7 / 7 | 10 / 9 | 12 / 13 | 21 / 21 |
| Memory amplification | 2.50 | 2.00 | 1.75 | 1.60 |

These are respectable results, especially lookups at 100%: the CPU is able to hide much of the memory latency. Insertions and erasures, on the other hand, are pretty slow—they involve allocation. Technically, the reported memory amplification is an underestimate, as it does not include heap allocator overhead. However, we should not directly compare memory amplification against the coming open-addressed tables, as storing larger values would improve separate chaining’s ratio.

std::unordered_map

Running these benchmarks on std::unordered_map produces results roughly equivalent to the 100% column, just with marginally slower insertions and faster clears. Using squirrel3 with std::unordered_map additionally makes insertions slightly slower and lookups slightly faster. Further comparisons will be relative to these results, since exact performance numbers for std::unordered_map depend on one’s standard library distribution (here, Microsoft’s).

Linear Probing

Next, let’s implement the simplest type of open addressing—linear probing. In the following diagrams, assume that each letter hashes to its alphabetical index. The skull denotes a tombstone.

Insert: starting from the key’s index, place the key in the first empty or tombstone bucket.
Lookup: starting from the key’s index, probe buckets in order until finding the key, reaching an empty non-tombstone bucket, or exhausting the table.
Erase: look up the key and replace it with a tombstone.

The results are surprisingly good, considering the simplicity:

| Linear Probing (Naive) | 50% Load | 75% Load | 90% Load |
|---|---|---|---|
| Insert (ns/key) | 46 | 59 | 73 |
| Erase (ns/key) | 21 | 36 | 47 |
| Lookup, existing / missing (ns/key) | 20 / 86 | 35 / 124 | 47 / 285 |
| Average probe, existing / missing | 0.50 / 6.04 | 1.49 / 29.5 | 4.46 / 222 |
| Max probe, existing / missing | 43 / 85 | 181 / 438 | 1604 / 2816 |
| Memory amplification | 2.00 | 1.33 | 1.11 |

For existing keys, we’re already beating the separately chained tables, and with lower memory overhead! Of course, there are also some obvious problems—looking for missing keys takes much longer, and the worst-case probe lengths are quite scary, even at 50% load factor. But, we can do much better.
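For reference, the naive linear probing operations above boil down to something like the following sketch, reusing squirrel3 from earlier (growth and load-factor bookkeeping are omitted, the table is assumed never completely full, and insert assumes the key is not already present):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

uint64_t squirrel3(uint64_t at); // the hash function defined above

// Minimal sketch of naive linear probing: power-of-two capacity, 64-bit keys
// and values, tombstone deletion.
struct LinearTable {
    enum class State : uint8_t { Empty, Filled, Tombstone };
    struct Slot { State state = State::Empty; uint64_t key = 0, value = 0; };
    std::vector<Slot> slots;

    explicit LinearTable(size_t capacity_pow2) : slots(capacity_pow2) {}
    size_t mask() const { return slots.size() - 1; }

    void insert(uint64_t key, uint64_t value) {
        // Place the key in the first empty or tombstone bucket.
        for(size_t i = squirrel3(key) & mask();; i = (i + 1) & mask()) {
            if(slots[i].state != State::Filled) {
                slots[i] = {State::Filled, key, value};
                return;
            }
        }
    }
    uint64_t* find(uint64_t key) {
        // Probe until we hit the key or an empty (non-tombstone) bucket.
        for(size_t i = squirrel3(key) & mask();; i = (i + 1) & mask()) {
            if(slots[i].state == State::Empty) return nullptr;
            if(slots[i].state == State::Filled && slots[i].key == key)
                return &slots[i].value;
        }
    }
    void erase(uint64_t key) {
        // Find the key and replace it with a tombstone.
        for(size_t i = squirrel3(key) & mask();; i = (i + 1) & mask()) {
            if(slots[i].state == State::Empty) return;
            if(slots[i].state == State::Filled && slots[i].key == key) {
                slots[i].state = State::Tombstone;
                return;
            }
        }
    }
};
```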
Erase: Rehashing One simple optimization is automatically recreating the table once it accumulates too many tombstones. Erase now occurs in amortized constant time, but we already tolerate that for insertion, so it shouldn’t be a big deal. Let’s implement a table that re-creates itself after erase has been called \(\frac{N}{2}\) times. Compared to naive linear probing: Linear Probing+ Rehashing50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 46 58 70 Erase (ns/key) 23 (+10.4%) 27 (-23.9%) 29 (-39.3%) Lookup (ns/key) 20 85 35 93 (-25.4%) 46 144 (-49.4%) Average Probe 0.50 6.04 1.49 9.34 (-68.3%) 4.46 57.84 (-74.0%) Max Probe 43 85 181 245 (-44.1%) 1604 1405 (-50.1%) Memory Amplification 2.00 1.33 1.11 Clearing the tombstones dramatically improves missing key queries, but they are still pretty slow. It also makes erase slower at 50% load—there aren’t enough tombstones to make up for having to recreate the table. Rehashing also does nothing to mitigate long probe sequences for existing keys. Erase: Backward Shift Actually, who says we need tombstones? There’s a little-taught, but very simple algorithm for erasing keys that doesn’t degrade performance or have to recreate the table. When we erase a key, we don’t know whether the lookup algorithm is relying on our bucket being filled to find keys later in the table. Tombstones are one way to avoid this problem. Instead, however, we can guarantee that traversal is never disrupted by simply removing and reinserting all following keys, up to the next empty bucket. Note that despite the name, we are not simply shifting the following keys backward. Any keys that are already at their optimal index will not be moved. For example: After implementing backward-shift deletion, we can compare it to linear probing with rehashing: Linear Probing+ Backshift50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 46 58 70 Erase (ns/key) 47 (+105%) 82 (+201%) 147 (+415%) Lookup (ns/key) 20 37 (-56.2%) 36 88 46 135 Average Probe 0.50 1.50 (-75.2%) 1.49 7.46 (-20.1%) 4.46 49.7 (-14.1%) Max Probe 43 50 (-41.2%) 181 243 1604 1403 Memory Amplification 2.00 1.33 1.11 Naturally, erasures become significantly slower, but queries on missing keys now have reasonable behavior, especially at 50% load. In fact, at 50% load, this table already beats the performance of equal-memory separate chaining in all four metrics. But, we can still do better: a 50% load factor is unimpressive, and max probe lengths are still very high compared to separate chaining. You might have noticed that neither of these upgrades improved average probe length for existing keys. That’s because it’s not possible for linear probing to have any other average! Think about why that is—it’ll be interesting later. Quadratic Probing Quadratic probing is a common upgrade to linear probing intended to decrease average and maximum probe lengths. In the linear case, a probe of length \(n\) simply queries the bucket at index \(h(k) + n\). Quadratic probing instead queries the bucket at index \(h(k) + \frac{n(n+1)}{2}\). Other quadratic sequences are possible, but this one can be efficiently computed by adding \(n\) to the index at each step. Each operation must be updated to use our new probe sequence, but the logic is otherwise almost identical: There are a couple subtle caveats to quadratic probing: Although this specific sequence will touch every bucket in a power-of-two table, other sequences (e.g. \(n^2\)) may not. 
If an insertion fails despite there being an empty bucket, the table must grow and the insertion is retried. There actually is a true deletion algorithm for quadratic probing, but it’s a bit more involved than the linear case and hence not reproduced here. We will again use tombstones and rehashing. After implementing our quadratic table, we can compare it to linear probing with rehashing: Quadratic Probing+ Rehashing50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 47 54 (-7.2%) 65 (-6.4%) Erase (ns/key) 23 28 29 Lookup (ns/key) 20 82 31 (-13.2%) 93 42 (-7.3%) 163 (+12.8%) Average Probe 0.43 (-12.9%) 4.81 (-20.3%) 0.99 (-33.4%) 5.31 (-43.1%) 1.87 (-58.1%) 18.22 (-68.5%) Max Probe 21 (-51.2%) 49 (-42.4%) 46 (-74.6%) 76 (-69.0%) 108 (-93.3%) 263 (-81.3%) Memory Amplification 2.00 1.33 1.11 As expected, quadratic probing dramatically reduces both the average and worst case probe lengths, especially at high load factors. These gains are not quite as impressive when compared to the linear table with backshift deletion, but are still very significant. Reducing the average probe length improves the average performance of all queries. Erasures are the fastest yet, since quadratic probing quickly finds the element and marks it as a tombstone. However, beyond 50% load, maximum probe lengths are still pretty concerning. Double Hashing Double hashing purports to provide even better behavior than quadratic probing. Instead of using the same probe sequence for every key, double hashing determines the probe stride by hashing the key a second time. That is, a probe of length \(n\) queries the bucket at index \(h_1(k) + n*h_2(k)\) If \(h_2\) is always co-prime to the table size, insertions always succeed, since the probe sequence will touch every bucket. That might sound difficult to assure, but since our table sizes are always a power of two, we can simply make \(h_2\) always return an odd number. Unfortunately, there is no true deletion algorithm for double hashing. We will again use tombstones and rehashing. So far, we’ve been throwing away the high bits our of 64-bit hash when computing table indices. Instead of evaluating a second hash function, let’s just re-use the top 32 bits: \(h_2(k) = h_1(k) >> 32\). After implementing double hashing, we may compare the results to quadratic probing with rehashing: Double Hashing+ Rehashing50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 70 (+49.6%) 80 (+48.6%) 88 (+35.1%) Erase (ns/key) 31 (+33.0%) 43 (+53.1%) 47 (+61.5%) Lookup (ns/key) 29 (+46.0%) 84 38 (+25.5%) 93 49 (+14.9%) 174 Average Probe 0.39 (-11.0%) 4.39 (-8.8%) 0.85 (-14.7%) 4.72 (-11.1%) 1.56 (-16.8%) 16.23 (-10.9%) Max Probe 17 (-19.0%) 61 (+24.5%) 44 (-4.3%) 83 (+9.2%) 114 (+5.6%) 250 (-4.9%) Memory Amplification 2.00 1.33 1.11 Double hashing succeeds in further reducing average probe lengths, but max lengths are more of a mixed bag. Unfortunately, most queries now traverse larger spans of memory, causing performance to lag quadratic hashing. Robin Hood Linear Probing So far, quadratic probing and double hashing have provided lower probe lengths, but their raw performance isn’t much better than linear probing—especially on missing keys. Enter Robin Hood linear probing. This strategy drastically reins in maximum probe lengths while maintaining high query performance. 
The key difference is that when inserting a key, we are allowed to steal the position of an existing key if it is closer to its ideal index (“richer”) than the new key is. When this occurs, we simply swap the old and new keys and carry on inserting the old key in the same manner. This results in a more equitable distribution of probe lengths. This insertion method turns out to be so effective that we can entirely ignore rehashing as long as we keep track of the maximum probe length. Lookups will simply probe until either finding the key or exhausting the maximum probe length. Implementing this table, compared to linear probing with backshift deletion: Robin Hood(Naive)50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 53 (+16.1%) 80 (+38.3%) 124 (+76.3%) Erase (ns/key) 19 (-58.8%) 31 (-63.0%) 77 (-47.9%) Lookup (ns/key) 19 58 (+56.7%) 31 (-11.3%) 109 (+23.7%) 79 (+72.2%) 147 (+8.9%) Average Probe 0.50 13 (+769%) 1.49 28 (+275%) 4.46 67 (+34.9%) Max Probe 12 (-72.1%) 13 (-74.0%) 24 (-86.7%) 28 (-88.5%) 58 (-96.4%) 67 (-95.2%) Memory Amplification 2.00 1.33 1.11 Maximum probe lengths improve dramatically, undercutting even quadratic probing and double hashing. This solves the main problem with open addressing. However, performance gains are less clear-cut: Insertion performance suffers across the board. Existing lookups are slightly faster at low load, but much slower at 90%. Missing lookups are maximally pessimistic. Fortunately, we’ve got one more trick up our sleeves. Erase: Backward Shift When erasing a key, we can run a very similar backshift deletion algorithm. There are just two differences: We can stop upon finding a key that is already in its optimal location. No further keys could have been pushed past it—they would have stolen this spot during insertion. We don’t have to re-hash—since we stop at the first optimal key, all keys before it can simply be shifted backward. With backshift deletion, we no longer need to keep track of the maximum probe length. Implementing this table, compared to naive robin hood probing: Robin Hood+ Backshift50% Load75% Load90% Load ExistingMissingExistingMissingExistingMissing Insert (ns/key) 52 78 118 Erase (ns/key) 31 (+63.0%) 47 (+52.7%) 59 (-23.3%) Lookup (ns/key) 19 33 (-43.3%) 31 74 (-32.2%) 80 108 (-26.1%) Average Probe 0.50 0.75 (-94.2%) 1.49 1.87 (-93.3%) 4.46 4.95 (-92.6%) Max Probe 12 12 (-7.7%) 24 25 (-10.7%) 58 67 Memory Amplification 2.00 1.33 1.11 We again regress the performance of erase, but we finally achieve reasonable missing-key behavior. Given the excellent maximum probe lengths and overall query performance, Robin Hood probing with backshift deletion should be your default choice of hash table. You might notice that at 90% load, lookups are significantly slower than basic linear probing. This gap actually isn’t concerning, and explaining why is an interesting exercise.7 Two-Way Chaining Our Robin Hood table has excellent query performance, and exhibited a maximum probe length of only 12—it’s a good default choice. However, another class of hash tables based on Cuckoo Hashing can provably never probe more than a constant number of buckets. Here, we will examine a particularly practical formulation known as Two-Way Chaining. Our table will still consist of a flat array of buckets, but each bucket will be able to hold a small handful of keys. When inserting an element, we will hash the key with two different functions, then place it into whichever corresponding bucket has more space. 
If both buckets happen to be filled, the entire table will grow and the insertion is retried. You might think this is crazy—in separate chaining, the expected maximum probe length scales with \(O(\frac{\lg N}{\lg \lg N})\), so wouldn’t this waste copious amounts of memory growing the table? It turns out adding a second hash function reduces the expected maximum to \(O(\lg\lg N)\), for reasons explored elsewhere. That makes it practical to cap the maximum bucket size at a relatively small number. Let’s implement a flat two-way chained table. As with double hashing, we can use the high bits of our hash to represent the second hash function. Two-WayChainingCapacity 2Capacity 4Capacity 8 ExistingMissingExistingMissingExistingMissing Insert (ns/key) 265 108 111 Erase (ns/key) 24 34 45 Lookup (ns/key) 23 23 30 30 39 42 Average Probe 0.03 0.24 0.74 2.73 3.61 8.96 Max Probe 3 4 6 8 13 14 Memory Amplification 32.00 4.00 2.00 Since every key can be found in one of two possible buckets, query times become essentially deterministic. Lookup times are especially impressive, only slightly lagging the Robin Hood table and having no penalty on missing keys. The average probe lengths are very low, and max probe lengths are strictly bounded by two times the bucket size. The table size balloons when each bucket can only hold two keys, and insertion is very slow due to growing the table. However, capacities of 4-8 are reasonable choices for memory unconstrained use cases. Overall, two-way chaining is another good choice of hash table, at least when memory efficiency is not the highest priority. In the next section, we’ll also see how lookups can even further accelerated with prefetching and SIMD operations. Remember that deterministic queries might not matter if you haven’t already implemented incremental table growth. Cuckoo Hashing Cuckoo hashing involves storing keys in two separate tables, each of which holds one key per bucket. This scheme will guarantee that every key is stored in its ideal bucket in one of the two tables. Each key is hashed with two different functions and inserted using the following procedure: If at least one table’s bucket is empty, the key is inserted there. Otherwise, the key is inserted anyway, evicting one of the existing keys. The evicted key is transferred to the opposite table at its other possible position. If doing so evicts another existing key, go to 3. If the loop ends up evicting the key we’re trying to insert, we know it has found a cycle and hence must grow the table. Because Cuckoo hashing does not strictly bound the insertion sequence length, I didn’t benchmark it here, but it’s still an interesting option. Unrolling, Prefetching, and SIMD So far, the lookup benchmark has been implemented roughly like this: for(uint64_t i = 0; i < N; i++) { assert(table.find(keys[i]) == values[i]); } Naively, this loop runs one lookup at a time. However, you might have noticed that we’ve been able to find keys faster than the CPU can load data from main memory—so something more complex must be going on. In reality, modern out-of-order CPUs can execute several iterations in parallel. Expressing the loop as a dataflow graph lets us build a simplified mental model of how it actually runs: the CPU is able to execute a computation whenever its dependencies are available. 
Unrolling

There's a common optimization strategy known as loop unrolling, which involves explicitly writing out the code for several semantic iterations inside each actual iteration of a loop. When each inner pseudo-iteration is independent, unrolling explicitly severs them from the loop-iterator dependency chain. Most modern x86 CPUs can keep track of roughly 10 simultaneous pending loads, so let's try manually unrolling our benchmark 10 times. The new code looks like the following, where index_of hashes the key but doesn't do the actual lookup:

```cpp
uint64_t indices[10];
for(uint64_t i = 0; i < N; i += 10) {
    for(uint64_t j = 0; j < 10; j++) {
        indices[j] = table.index_of(keys[i + j]);
    }
    for(uint64_t j = 0; j < 10; j++) {
        assert(table.find_index(indices[j]) == values[i + j]);
    }
}
```

This isn't true unrolling, since the inner loops introduce dependencies on j. However, that doesn't matter in practice, as computing j is never a bottleneck—in fact, our compiler can automatically unroll the inner loops for us (clang in particular does this).

| Lookups (ns/find) | Naive | Unrolled |
|---|---|---|
| Separate Chaining (100%) | 35 | 34 |
| Linear (75%) | 36 | 35 |
| Quadratic (75%) | 31 | 31 |
| Double (75%) | 38 | 37 |
| Robin Hood (75%) | 31 | 31 |
| Two-Way Chaining (x4) | 30 | 32 |

Explicit unrolling has basically no effect, since the CPU was already able to saturate its pending-load buffer. However, manual unrolling now lets us introduce software prefetching.

Software Prefetching

When traversing memory in a predictable pattern, the CPU is able to preemptively load data for future accesses. This capability, known as hardware prefetching, is the reason our flat tables are so fast to iterate. Hash table lookups, however, are inherently unpredictable: the address to load is determined by a hash function, so hardware prefetching cannot help us.

Instead, we can explicitly ask the CPU to prefetch the address(es) we will load from in the near future. Doing so can't speed up an individual lookup—we'd immediately wait for the result—but software prefetching can accelerate batches of queries. Given several keys, we can compute where each query will access the table, tell the CPU to prefetch those locations, and then start actually accessing the table. To do so, let's make table.index_of prefetch the location(s) examined by table.find_index (a sketch appears at the end of this subsection):

- In open-addressed tables, we prefetch the slot the key hashes to.
- In two-way chaining, we prefetch both buckets the key hashes to.

Importantly, prefetching requests the entire cache line for the given address, i.e. the aligned 64-byte chunk of memory containing that address. That means surrounding data (e.g. the following keys) may also be loaded into cache.

| Lookups (ns/find) | Naive | Unrolled | Prefetched |
|---|---|---|---|
| Separate Chaining (100%) | 35 | 34 | 26 (-22.6%) |
| Linear (75%) | 36 | 35 | 23 (-36.3%) |
| Quadratic (75%) | 31 | 31 | 22 (-27.8%) |
| Double (75%) | 38 | 37 | 33 (-12.7%) |
| Robin Hood (75%) | 31 | 31 | 22 (-29.4%) |
| Two-Way Chaining (x4) | 30 | 32 | 19 (-40.2%) |

Prefetching makes a big difference!

- Separate chaining benefits from prefetching the unpredictable initial load.8
- Linear and quadratic tables with a low average probe length greatly benefit from prefetching.
- Double-hashed probe sequences quickly leave the prefetched cache line, making prefetching a bit less effective.
- Prefetching is ideal for two-way chaining: keys can always be found in a prefetched cache line—but each lookup requires fetching two locations, consuming extra load-buffer slots.
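Here's a rough sketch of what a prefetching index_of could look like for an open-addressed table. The slot layout and hash are carried over from the earlier sketches rather than the benchmarked code, find_index additionally takes the key so that the snippet stands alone, and __builtin_prefetch is the GCC/Clang intrinsic (MSVC offers _mm_prefetch):

```cpp
#include <cstdint>

// Sketch of batched, prefetched lookups for an open-addressed table.
// The slot layout, hash(), and mask are assumptions, not the benchmarked code.
struct Slot { uint64_t key, val; };

struct PrefetchedTable {
    Slot* slots;
    uint64_t mask; // capacity - 1, capacity a power of two

    uint64_t hash(uint64_t k) const { return k * 0x9E3779B97F4A7C15ull; }

    // Hash the key and ask the CPU to pull its home slot's cache line into
    // cache, without touching the data yet.
    uint64_t index_of(uint64_t key) const {
        uint64_t i = hash(key) & mask;
        __builtin_prefetch(&slots[i]); // GCC/Clang; use _mm_prefetch on MSVC
        return i;
    }

    // Later, probe starting from the precomputed (and hopefully cached) slot.
    uint64_t find_index(uint64_t i, uint64_t key) const {
        while (slots[i].key != key) i = (i + 1) & mask; // assumes key exists
        return slots[i].val;
    }
};
```

Batching then looks just like the unrolled benchmark: one pass calls index_of over ten keys, and a second pass calls find_index, by which point most of the prefetched lines have arrived.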
SIMD

Modern CPUs provide SIMD (single instruction, multiple data) instructions that process multiple values at once. These operations are useful for probing: for example, we can load a batch of \(N\) keys and compare them with our query in parallel. In the best case, a SIMD probe sequence could run \(N\) times faster than checking each key individually. Let's implement SIMD-accelerated lookups for linear probing and two-way chaining, with \(N=4\):

| Lookups (ns/find) | Naive | Unrolled | Prefetched | Missing |
|---|---|---|---|---|
| Linear (75%) | 36 | 35 | 23 | 88 |
| SIMD Linear (75%) | 39 (+10.9%) | 42 (+18.0%) | 27 (+19.1%) | 98 (-21.2%) |
| Two-Way Chaining (x4) | 30 | 32 | 19 | 30 |
| SIMD Two-Way Chaining (x4) | 28 (-6.6%) | 30 (-4.6%) | 17 (-9.9%) | 19 (-35.5%) |

Unfortunately, SIMD probes are slower for the linearly probed table—an average probe length of \(1.5\) meant the overhead of setting up an (unaligned) SIMD comparison wasn't worth it. However, missing keys are found faster, since their average probe length is much higher (\(29.5\)).

On the other hand, two-way chaining consistently benefits from SIMD probing, as we can find any key using at most two (aligned) comparisons. However, given a bucket capacity of four, lookups only need to query 1.7 keys on average, so we only see a slight improvement—SIMD would be a much more impactful optimization at higher capacities. Still, missing-key lookups get significantly faster, as they previously had to iterate through all keys in the bucket.
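As a sketch of the probing step itself, here's a 4-wide AVX2 comparison against a single capacity-4 bucket, in the spirit of the two-way SIMD table; the aligned bucket layout and the helper's name are assumptions rather than the benchmarked code.

```cpp
#include <bit>          // std::countr_zero (C++20)
#include <cstdint>
#include <immintrin.h>  // AVX2 intrinsics

// One bucket of the two-way table: four keys stored contiguously so they can
// be loaded with a single aligned 256-bit read.
struct alignas(32) Bucket4 {
    uint64_t keys[4];
    uint64_t vals[4];
};

// Returns the matching slot (0-3) within the bucket, or -1 if the key is absent.
inline int simd_probe(const Bucket4& b, uint64_t key) {
    __m256i lanes = _mm256_load_si256(reinterpret_cast<const __m256i*>(b.keys));
    __m256i query = _mm256_set1_epi64x(static_cast<long long>(key));
    __m256i eq    = _mm256_cmpeq_epi64(lanes, query);        // per-lane 0 or ~0
    int mask = _mm256_movemask_pd(_mm256_castsi256_pd(eq));  // one bit per lane
    return mask ? std::countr_zero(static_cast<unsigned>(mask)) : -1;
}
```

A two-way lookup issues this comparison against each of the key's two (aligned) candidate buckets. A SIMD linear probe instead sweeps successive, possibly unaligned, groups of four slots, which is where the setup overhead mentioned above comes from.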
Benchmark Data

Updated 2023/04/05: additional optimizations for quadratic, double-hashed, and two-way tables.

The linear and Robin Hood tables use backshift deletion. The two-way SIMD table has capacity-4 buckets and uses AVX2 256-bit SIMD operations.

- Insert, erase, find, find unrolled, find prefetched, and find missing report nanoseconds per operation.
- Average probe, max probe, average missing probe, and max missing probe report number of iterations.
- Clear and iterate report milliseconds for the entire operation.
- Memory reports the ratio of allocated memory to the size of the stored data.

| Table | Insert | Erase | Find | Unrolled | Prefetched | Avg Probe | Max Probe | Find (Missing) | Avg (Missing) | Max (Missing) | Clear (ms) | Iterate (ms) | Memory |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Chaining 50% | 115 | 110 | 28 | 26 | 21 | 1.2 | 7 | 28 | 0.5 | 7 | 113.1 | 16.2 | 2.5 |
| Chaining 100% | 113 | 128 | 35 | 34 | 26 | 1.5 | 10 | 40 | 1.0 | 9 | 118.1 | 13.9 | 2.0 |
| Chaining 200% | 120 | 171 | 50 | 50 | 37 | 2.0 | 12 | 68 | 2.0 | 13 | 122.6 | 14.3 | 1.8 |
| Chaining 500% | 152 | 306 | 102 | 109 | 77 | 3.5 | 21 | 185 | 5.0 | 21 | 153.7 | 23.0 | 1.6 |
| Linear 50% | 46 | 47 | 20 | 20 | 15 | 0.5 | 43 | 37 | 1.5 | 50 | 1.1 | 4.8 | 2.0 |
| Linear 75% | 58 | 82 | 36 | 35 | 23 | 1.5 | 181 | 88 | 7.5 | 243 | 0.7 | 2.1 | 1.3 |
| Linear 90% | 70 | 147 | 46 | 45 | 29 | 4.5 | 1604 | 135 | 49.7 | 1403 | 0.6 | 1.2 | 1.1 |
| Quadratic 50% | 47 | 23 | 20 | 20 | 16 | 0.4 | 21 | 82 | 4.8 | 49 | 1.1 | 5.1 | 2.0 |
| Quadratic 75% | 54 | 28 | 31 | 31 | 22 | 1.0 | 46 | 93 | 5.3 | 76 | 0.7 | 2.1 | 1.3 |
| Quadratic 90% | 65 | 29 | 42 | 42 | 30 | 1.9 | 108 | 163 | 18.2 | 263 | 0.6 | 1.0 | 1.1 |
| Double 50% | 70 | 31 | 29 | 28 | 23 | 0.4 | 17 | 84 | 4.4 | 61 | 1.1 | 5.6 | 2.0 |
| Double 75% | 80 | 43 | 38 | 37 | 33 | 0.8 | 44 | 93 | 4.7 | 83 | 0.8 | 2.2 | 1.3 |
| Double 90% | 88 | 47 | 49 | 49 | 42 | 1.6 | 114 | 174 | 16.2 | 250 | 0.6 | 1.0 | 1.1 |
| Robin Hood 50% | 52 | 31 | 19 | 19 | 15 | 0.5 | 12 | 33 | 0.7 | 12 | 1.1 | 4.8 | 2.0 |
| Robin Hood 75% | 78 | 47 | 31 | 31 | 22 | 1.5 | 24 | 74 | 1.9 | 25 | 0.7 | 2.1 | 1.3 |
| Robin Hood 90% | 118 | 59 | 80 | 79 | 41 | 4.5 | 58 | 108 | 5.0 | 67 | 0.6 | 1.2 | 1.1 |
| Two-way x2 | 265 | 24 | 23 | 25 | 17 | 0.0 | 3 | 23 | 0.2 | 4 | 17.9 | 19.6 | 32.0 |
| Two-way x4 | 108 | 34 | 30 | 32 | 19 | 0.7 | 6 | 30 | 2.7 | 8 | 2.2 | 3.7 | 4.0 |
| Two-way x8 | 111 | 45 | 39 | 39 | 28 | 3.6 | 13 | 42 | 9.0 | 14 | 1.1 | 1.5 | 2.0 |
| Two-way SIMD | 124 | 46 | 28 | 30 | 17 | 0.3 | 1 | 19 | 1.0 | 1 | 2.2 | 3.7 | 4.0 |

Footnotes

1. Actually, this prior shouldn't necessarily apply to hash tables, which by definition have unpredictable access patterns. However, we will see that flat storage still provides performance benefits, especially for linear probing, whole-table iteration, and serial accesses.
2. To account for various deletion strategies, the find-missing benchmark was run after the insertion & erasure tests, and after inserting a second set of elements. This sequence allows separate chaining's reported maximum probe length to be larger for missing keys than for existing keys. Looking up the latter set of elements was also benchmarked, but the results are omitted here, as the tombstone-based tables were simply marginally slower across the board.
3. AMD 5950X @ 4.4 GHz + DDR4 3600/CL16, Windows 11 Pro 22H2, MSVC 19.34.31937. No particular effort was made to isolate the benchmarks, but they were repeatable on an otherwise idle system. Each was run 10 times and average results are reported.
4. Virtual memory allows separate regions of our address space to refer to the same underlying pages. If we map our table twice, one after the other, probe traversal never has to check whether it needs to wrap around to the start of the table. Instead, it can simply proceed into the second mapping—since writes to one range are automatically reflected in the other, this just works.
5. You might expect indexing a prime-sized table to require an expensive integer mod/div, but it actually doesn't. Given a fixed set of possible primes, a prime-specific indexing operation can be selected at runtime based on the current size. Since each such operation is a modulo by a constant, the compiler can translate it to a fast integer multiplication.
6. A Sattolo traversal causes each lookup to directly depend on the result of the previous one, serializing execution and preventing the CPU from hiding memory latency. Hence, finding each element took ~100ns in every type of open table: roughly equal to the latency of fetching data from RAM. Similarly, each lookup in the low-load separately chained table took ~200ns, i.e. \((1 + \text{Chain Length}) * \text{Memory Latency}\).
7. First, any linearly probed table requires the same total number of probes to look up each element once—that's why the average probe length never improved. Every hash collision requires moving some key one slot further from its ideal index, i.e. adds one to the total probe count. Second, the cost per probe of a lookup can decrease with the number of probes required: when traversing a long chain, the CPU can avoid cache misses by prefetching future accesses. Therefore, allocating a larger portion of the same total probe count to long chains can increase performance. Hence, linear probing tops this particular benchmark—but it's not the most useful metric in practice. Robin Hood probing greatly decreases the maximum probe length, making query performance far more consistent even if technically slower on average. Anyway, you really don't need a 90% load factor—just use 75% for consistently high performance.
8. Is the CPU smart enough to automatically prefetch addresses contained within the explicitly prefetched cache line? I'm pretty sure it's not, but that would be cool.