Solstat: A statistical approximation library

Waylon Jepsen
Colin Roberts

Numerical Approximations

There are many useful mathematical functions that engineers rely on when designing applications. This body of knowledge is broadly described as approximation theory, for which there are many great resources. Examples of functions that need approximation and are particularly useful to us at Primitive are those relating to the Gaussian (or normal) distribution. Gaussians are fundamental to statistics, probability theory, and engineering (e.g., the Central Limit Theorem).

At Primitive, our RMM-01 trading curve relies on the (standard) Gaussian probability density function (PDF) $\phi(x)=\frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}}$, the cumulative distribution function (CDF) $\Phi$, and its inverse $\Phi^{-1}$. These specific functions show up because of the role Brownian motion plays in the pricing of Black-Scholes European options. The Black-Scholes model assumes geometric Brownian motion to get a concrete valuation of an option over its maturity.

solstat is a Solidity library that approximates these Gaussian functions. It was built to achieve a high degree of accuracy when computing Gaussian approximations within the compute-constrained environment of the blockchain. Solstat is open source and available for anyone to use. An interesting use case, showcased by the team at asphodel, is designing drop rates, spawn rates, and statistical distributions with structured randomness in onchain games.

In the rest of this article, we'll dive deep into function approximations, their applications, and the methodology behind solstat.

Approximating Functions on Computers

The first step in evaluating complicated functions with a computer involves determining whether or not the function can be evaluated "directly", i.e., with instructions native to the processing unit. All modern processing units provide basic binary operations of addition (possibly subtraction) and multiplication. In the case of simple functions like $f(x)=mx+b$ where $m$, $x$, and $b$ are integers, computing an output can be done efficiently and accurately.

Complex functions like the Gaussian PDF $\phi(x)$ come with their own unique set of challenges. These functions cannot be evaluated directly because computers only have native opcodes or logical circuits/gates that handle simple binary operations such as addition and subtraction. Furthermore, integer types are native to computers since their mapping from bits is canonical, but decimal types are not ubiquitous. There can be no exponential or logarithmic opcodes for classical bit CPUs as they would require infinitely many gates. There is no way to represent arbitrary real numbers without information loss in computer memory.

This raises the question: How can we compute $\phi(x)$ with this restrictive set of tools? Fortunately, this problem is extremely old, dating back to the human desire to compute complicated expressions by hand. After all, the first "computers" were people! Of course, our methodologies have improved drastically over time.

What is the optimal way of evaluating arbitrary functions in this specific environment? Generally, engineers try to strike the best possible balance between computational economy and desired accuracy. If constrained to a fixed amount of numerical precision (e.g., a max error of $10^{-18}$), what is the least amount of resources required?

  • (Storage) How much storage is needed (e.g., to store coefficients)?
  • (Computation) How many total clock cycles must the processor perform?

And conversely, what is the best achievable approximation for a fixed budget of storage and computation (e.g., CPU cycles or bits)?

  • (Absolute accuracy) Over a given input domain, what is the worst-case error of the approximation compared to the actual function?
  • (Relative accuracy) Does the approximation perform well over a given input domain relative to the magnitude of the function's range?

The above questions are essential to consider when working with the Ethereum blockchain. Every computational step that is involved in mutating the machine's state will have an associated gas cost. Furthermore, DeFi protocols expect to be accurate down to the wei, which means practical absolute accuracy down to $10^{-18}$ ETH is of utmost importance. Precision to $10^{-18}$ is near the accuracy of an "atom's atom", so reaching these goals is a unique challenge.

Our Computer's Toolbox

Classical processing units deal with binary information at their core and have basic circuits implemented as logical opcodes. For instance, an add_bits opcode is just a realization of the following digital circuit:

These gates are adders because they define an addition operation over binary numbers. Full adders can be chained together through a carry-out pin to build wider adders. For example, a ripple-carry adder can be implemented this way and extended to an arbitrary size, such as the 256-bit width required by Ethereum.
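As a rough sketch of the idea in Python (software standing in for hardware), a one-bit full adder built from XOR, AND, and OR gates can be chained into a ripple-carry adder of arbitrary width; the 256-bit width and wrap-around behavior below are chosen to mirror the EVM, not taken from any particular implementation.

```python
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """One-bit full adder built from XOR/AND/OR gates: returns (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def ripple_carry_add(x: int, y: int, width: int = 256) -> int:
    """Chain full adders through the carry pin; the result wraps on overflow like EVM math."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

assert ripple_carry_add(2, 3) == 5
assert ripple_carry_add(1, 2**256 - 1) == 0  # overflow wraps around to zero
```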

Note that adders introduce an error called overflow. Using add_4bits to add 0001 to 1111, the storage space necessary to hold the larger number is exhausted. This case must be handled within the program. Fortunately, for Ethereum's 256-bit numbers, overflow is far less of an issue due to the magnitude of the numbers that can be expressed ($2^{256}\approx 10^{77}$). For perspective, to overflow 256-bit addition one would need to add numbers on the order of the estimated number of atoms in the universe ($\approx 10^{79}$). Furthermore, the community best practices for handling overflows in the EVM are well understood.

At any rate, repeated addition can be used to build multiplication and repeated multiplication to get integer powers. In math/programming terms:

$$3\cdot x = \operatorname{multiply}(x,3) = \underbrace{\operatorname{add}(x,\operatorname{add}(x,x))}_{2\text{ additions}}$$

and for powers:

$$x^3 = \operatorname{pow}(x,3) = \underbrace{\operatorname{multiply}(x,\operatorname{multiply}(x,x))}_{2\text{ multiplications}}.$$
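The same construction can be written as a short, purely illustrative Python sketch; no real library computes multiplication or powers this way, but it shows how each operation reduces to the one below it.

```python
def multiply(x: int, n: int) -> int:
    """Compute n * x using only repeated addition (n >= 1)."""
    acc = x
    for _ in range(n - 1):
        acc = acc + x
    return acc

def power(x: int, n: int) -> int:
    """Compute x ** n using only repeated multiplication (n >= 1)."""
    acc = x
    for _ in range(n - 1):
        acc = acc * x
    return acc

assert multiply(7, 3) == 21
assert power(2, 10) == 1024
```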

Subtraction and division can also be defined for gate/opcode-level integers. Subtraction has overflow issues similar to addition's. Integer division returns a quotient and a remainder. This can be extended to decimal representations of rational numbers (e.g., fractions) using floating-point or fixed-point libraries like ABDK and the fixed-point library in Solmate. Depending on the implementation, division can be more computationally intensive than multiplication.
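As a minimal sketch of how fixed-point libraries layer fractions on top of integer division, here is 18-decimal ("wad") multiplication and division in Python; real libraries such as ABDK or Solmate differ in rounding and overflow handling.

```python
WAD = 10**18  # 18 decimal places of fixed-point precision

def wad_mul(a: int, b: int) -> int:
    """Multiply two wad-scaled integers, truncating extra precision."""
    return (a * b) // WAD

def wad_div(a: int, b: int) -> int:
    """Divide two wad-scaled integers, scaling up first to keep 18 decimals."""
    return (a * WAD) // b

one_third = wad_div(1 * WAD, 3 * WAD)                     # 333333333333333333, i.e. ~0.333...
assert wad_mul(one_third, 3 * WAD) == 999999999999999999  # truncation error of 1 wei
```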

More Functionality

With extensions of division and multiplication, negative powers can be constructed such that:

$$x^{-1}=\frac{1}{x}=\operatorname{divide}(1,x).$$

None of these abstractions allow computers to express infinite precision with arbitrarily large accuracy. There can never be an exact binary representation of irrational numbers like $\pi$, $e$, or $\sqrt{2}$. Numbers like $\sqrt{2}$ can be represented precisely in computer algebra systems (CAS), but this is unattainable in the EVM at the moment.

Without computer algebra systems, quick and accurate algorithms for computing approximations of functions like $\sqrt{x}$ must be developed. Interestingly, $\sqrt{x}$ arises in the infamous fast inverse square root approximation from Quake III, which is an excellent example of an approximation optimization yielding a significant performance improvement.

Rational Approximations

The EVM provides access to addition, multiplication, subtraction, and division operations. With no other special-case assumptions as in the Quake square root algorithm, the best that programs on the EVM can do is to work directly with rational functions of the form:

$$P_{m,n}(x)=\frac{\alpha_0 +\alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0 + \beta_1 x + \beta_2 x^2 + \cdots + \beta_n x^n}.$$

The problem is that most functions are not rational functions! EVM programs need a way to determine the coefficients $\alpha$ and $\beta$ for a rational approximation. A good analogy can be made to polynomial approximations and power series.

Using our Small Toolbox

When dealing with approximations, an excellent place to start is to ask the following questions: Why is an approximation needed? What solutions already exist, and what methodology do they employ? How many digits of accuracy are needed? The answers to these questions provide a solid baseline for formulating approximation specifications.

Transcendental or special functions are analytic functions that cannot be expressed as a rational function with finite powers. Some examples of transcendental functions are the exponential function $\exp(x)$, its inverse the natural logarithm $\ln(x)$, and general exponentiation. However, if the target function being approximated has some nice properties (e.g., it is differentiable), it can be locally approximated with a polynomial. This is seen in the context of Taylor's theorem and more broadly in the Stone-Weierstrass theorem.

Power Series

Polynomials (like $P_N(x)$ below) are a useful theoretical tool that also allow for function approximations.

$$P_N(x)=\sum_{n=0}^N a_n x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_N x^N$$

Only addition, subtraction, and multiplication are needed. There is no need for division implementations on the processor. More generally, an infinite polynomial called a power series can be written by specifying an infinite set of coefficients $\{a_0,a_1,a_2,\dots\}$ and combining them as:

$$\sum_{n=0}^\infty a_n x^n.$$

A specific way to get a power series approximation for a function $f$ around some point $x_0$ is by using Taylor's theorem to define the series by:

$$\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n = f(x_0) + f'(x_0)(x-x_0) + \frac{f''(x_0)}{2!}(x-x_0)^2 + \frac{f'''(x_0)}{3!}(x-x_0)^3 + \cdots$$

Intuitively, Taylor series approximations are built by constructing the best "tangent polynomial." For example, the first-order Taylor approximation of $f$ is the tangent line approximation to $f$ at $x_0$:

$$f(x)\approx f(x_0)+f'(x_0)(x-x_0)=\underbrace{f'(x_0)}_{\text{slope}}x+\underbrace{f(x_0)-f'(x_0)x_0}_{y\text{-intercept}}.$$

For $\exp(x)$, there is the resulting series

$$\exp(x)=\sum_{n=0}^\infty \frac{x^n}{n!}$$

when approximating around $x_0=0$.

Since polynomials can locally approximate transcendental functions, the question remains where to center the approximations.

The infinite series is precisely equal to the function $\exp(x)$, and by truncating the series at some finite value, say $N$, there is a resulting polynomial approximation:

$$\exp(x)\approx\sum_{n=0}^N \frac{x^n}{n!} = 1+x+\frac{x^2}{2}+\cdots + \frac{x^N}{N!}.$$

The function $\phi(x)$ can be written by scaling the input and output of $\exp$:

$$\sqrt{2\pi}\,\phi(\sqrt{2}\,x)=\exp(-x^2)=\sum_{n=0}^\infty \frac{(-x^2)^n}{n!}.$$

Plotting truncations of various orders against the function itself demonstrates how the polynomial approximation improves as more terms are kept.
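A minimal numerical sketch of that comparison, assuming nothing beyond the truncated series above (this only illustrates truncation behavior, not solstat's method):

```python
import math

def exp_neg_x2_taylor(x: float, order: int) -> float:
    """Truncated Taylor series for exp(-x^2) about 0: sum of (-x^2)^n / n! up to `order`."""
    return sum((-x * x) ** n / math.factorial(n) for n in range(order + 1))

for x in (0.5, 1.0, 2.0):
    exact = math.exp(-x * x)
    errors = [round(exp_neg_x2_taylor(x, n) - exact, 6) for n in (2, 4, 8)]
    print(x, errors)
# Close to 0 a few terms suffice, but farther from the center the low-order
# polynomials are wildly off, matching the "local" nature of Taylor approximations.
```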

Truncation solves the infinity problem, and these polynomials can be obtained procedurally, at least for functions that are $N$ times differentiable. In theory, the Taylor polynomial can be made as accurate as needed by increasing $N$. However, there are some restrictions to keep in mind. For instance, since factorials grow exceptionally fast, there may not be enough precision to represent numbers like $\frac{1}{N!}$. Because $20! > 10^{18}$, for tokens with 18 decimals the highest-order polynomial approximation for $\exp(x)$ on Ethereum can only have degree 19. Furthermore, polynomials have some unique properties:

  1. (No poles) Polynomials never have vertical asymptotes.
  2. (Unboundedness) Non-constant polynomials always approach either infinity or negative infinity as $x\to \pm\infty$.

An excellent example of this failure is the function $\phi(x)$, which asymptotically approaches 0 as $x\to \pm\infty$. Polynomials don't do well approximating this! In the case of an even simpler function, $f(x)=\frac{1}{x}$, polynomials can approximate the function away from $x=0$, but doing so is a bit tedious. Why do this when division can be used to compute $f(x)$ directly? The polynomial route is more expensive, and decentralized application developers must be frugal when using the EVM.

Laurent Series

Polynomial approximations are a good start, but they have some problems. In particular, there are ways to more accurately approximate functions with poles or functions that are bounded.

This form of approximation is rooted in complex analysis. In most cases, a real-valued function $f(x)=y$ can be extended to allow complex inputs $z$ and complex outputs, $f(z)=w$. This small change enables the Laurent series expression for functions $f(z)$. A Laurent series includes negative powers and, in general, looks like:

$$f(z) = \sum_{n=-\infty}^{\infty}a_n z^n = \cdots + a_{-2}\frac{1}{z^2} + a_{-1}\frac{1}{z} + a_0 + a_1 z + a_2 z^2 + \cdots$$

For a function like $f(x)=\frac{1}{x}$, the Laurent series is specified by the coefficients $a_{-1}=1$ and $a_n=0$ for $n\neq -1$. If $f(z)$ is implemented as a Laurent series, the precision achieved is exactly the precision of the division algorithm!

The idea of the Laurent series is immensely powerful, but it can be economized further by writing down an approximate form of the function slightly differently.

Rational Approximations

"If you sat down long enough and thought about ways to rearrange addition, subtraction, multiplication, and division in the context of approximations, you would probably write down an expression close to this":

$$P_{m,n}(x)=\frac{\alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_m x^m}{\beta_0+\beta_1 x + \beta_2 x^2 + \cdots +\beta_n x^n}.$$

Specific ways to arrange the fundamental operations can benefit particular applications. For example, there are ways to determine coefficients $\alpha$ and $\beta$ that do not run into the issue of being smaller than the level of precision offered by the machine. At the same time, fewer total operations are needed, which also means less total storage is used for the coefficients.

Aside from computational efficiency, another benefit of using rational functions is the ability to express degenerate function behavior such as singularities (poles/infinities), boundedness, or asymptotic behavior. Qualitatively, the functions $\exp(-x^2)=\sqrt{2\pi}\,\phi(\sqrt{2}\,x)$ and $\frac{1}{1+x^2}$ look very similar on all of $\mathbb{R}$, and the approximation fares far better than $1-x^2$ outside of a narrow range. See the labeled curves in the following figure.
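A quick numerical check of that qualitative claim, with sample points chosen purely for illustration:

```python
import math

for x in (0.5, 1.5, 3.0):
    target = math.exp(-x * x)          # sqrt(2*pi) * phi(sqrt(2) * x)
    rational = 1 / (1 + x * x)         # simple rational stand-in
    polynomial = 1 - x * x             # degree-2 Taylor polynomial
    print(x, target, rational, polynomial)
# The rational curve stays bounded and positive like the target, while the
# polynomial goes negative and blows up outside a narrow range around 0.
```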

Continued fraction approximations

The degree of accuracy for a given approximation should be selected based on need and with respect to environmental constraints. The approximations in solstat use an economized continued fraction expansion:

$$a_0+\cfrac{x}{a_1+\cfrac{x}{a_2+\cfrac{x}{a_3+\cfrac{x}{\ddots}}}}$$

This is typically a faster way to compute the value of a function. It's also a way of defining the golden ratio (the "most irrational" number):

$$\varphi = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ddots}}}}$$
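As a sketch of how such an expansion is evaluated in practice, the nested fraction is collapsed from the bottom up after truncating at a fixed depth; here the golden ratio expansion above is truncated at 20 levels.

```python
import math

def golden_ratio_cf(depth: int) -> float:
    """Evaluate 1 + 1/(1 + 1/(1 + ...)) truncated at `depth` levels, from the bottom up."""
    value = 1.0
    for _ in range(depth):
        value = 1.0 + 1.0 / value
    return value

exact = (1 + math.sqrt(5)) / 2
print(golden_ratio_cf(20), exact)  # agrees to roughly 8 decimal places
```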

Finding continued fraction approximations can be done analytically or from Taylor coefficients. There are some specific use cases for functions that have nice recurrence relations (e.g., factorials) since they work well algebraically with continued fractions. The implementation for solstat is based on these types of approximations due to some special relationships defined later.

Finding and Transforming Between Approximations

Thus far, this article has not discussed how to obtain these approximations, aside from the case of the Taylor series. In each case, an approximation consists of a list of coefficients (e.g., Taylor coefficients $\{a_0,a_1,\dots\}$) and a map to some expression with finitely many primitive function calls (e.g., a polynomial $a_0+a_1x+\cdots$).

The cleanest analytical example is the Taylor series since the coefficients for well-behaved functions can be found by computing derivatives by hand. When this isn't possible, results can be computed numerically using finite difference methods, e.g., the first-order central difference, in order to extract these coefficients:

$$f'(x)\approx \frac{f(x+h/2)-f(x-h/2)}{h}$$

However, this can be impractical when the coefficients approach the machine precision level. Laurent series coefficients can be determined in the same way.
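A short sketch of the central difference in Python, which also shows the precision issue mentioned above: shrinking the step size helps only until floating-point cancellation takes over.

```python
import math

def central_diff(f, x: float, h: float) -> float:
    """First-order central difference approximation of f'(x)."""
    return (f(x + h / 2) - f(x - h / 2)) / h

# exp'(0) is exactly 1, so the printed values show the approximation error directly.
for h in (1e-1, 1e-5, 1e-12):
    print(h, central_diff(math.exp, 0.0, h))
# Moderate h leaves a small truncation error; an extremely small h makes the
# estimate worse because the numerator cancels to nearly zero in floating point.
```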

Similarly, the coefficients of a rational function (or Padé) approximation can be determined using an iterative algorithm (akin to Newton's method). Choose the order $m$ and $n$ of the numerator and denominator polynomials, then solve for the coefficients. Many software packages have built-in implementations to find these coefficients efficiently, or a solver can be implemented to do something like Wynn's epsilon algorithm or the minimax approximation algorithm.

All of the aforementioned approximations can be transformed into one another depending on the use case. Most of these transformations (e.g., turning a polynomial approximation into a continued fraction approximation) amount to solving a linear problem or determining coefficients through numerical differentiation. Try different solutions and see which is best for a given application. This can take some trial and error. Theoretically, these algorithms seek to determine the approximation with a minimized maximal error (i.e., minimax problems).

Breaking up the approximations

Functions $f\colon X \to Y$ also come with a domain of definition $X$. Intuitively, for functions with bounded derivatives, the absolute error of an approximation is proportional to the size of the domain. When trying to approximate $f$ over all of $X$, the smaller the set $X$, the better. It only takes $n+1$ points to define a polynomial of degree $n$, so a function on a domain $X$ with only $n+1$ points can be computed perfectly with a polynomial.

For domains with infinitely many points, reducing the measure of the region approximated over is still beneficial, especially when trying to minimize absolute error. For more complicated functions (especially those with large derivative(s)), breaking up the domain $X$ into $r$ different subdomains can be helpful.

For example, suppose a 5th-degree polynomial approximation for $f\colon [0,1]\to \mathbb{R}$ has a max absolute error of $10^{-4}$. After splitting the domain into $r=2$ even-sized pieces, $f_1\colon [0,1/2]\to \mathbb{R}$ and $f_2\colon [1/2,1] \to \mathbb{R}$, the original algorithm can be used to determine two distinct sets of coefficients, one for each piece. On their reduced domains, the approximations of $f_1$ and $f_2$ might only have $10^{-6}$ in error, yet if the approximation of $f_1$ is extended outside of $[0,1/2]$, its error can grow to $10^{-2}$. Each piece of the function is optimized purely for its reduced domain.

Breaking domains into smaller pieces allows for piecewise approximations that can be better than any non-piecewise implementation. At some point, a piecewise approximation requires so many conditional checks that it becomes a headache, but used judiciously it can be incredibly efficient. Classic examples of piecewise approximations are piecewise linear approximations and (cubic) splines.
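A toy piecewise example, with the function $f(x) = x^2$ and the even split chosen purely for illustration: a single chord over $[0,1]$ versus one chord per half.

```python
def chord(f, a: float, b: float):
    """Linear interpolant of f through the endpoints of [a, b]."""
    slope = (f(b) - f(a)) / (b - a)
    return lambda x: f(a) + slope * (x - a)

f = lambda x: x * x
whole = chord(f, 0.0, 1.0)
left, right = chord(f, 0.0, 0.5), chord(f, 0.5, 1.0)
piecewise = lambda x: left(x) if x < 0.5 else right(x)

xs = [i / 1000 for i in range(1001)]
print(max(abs(whole(x) - f(x)) for x in xs))      # ~0.25 worst-case error
print(max(abs(piecewise(x) - f(x)) for x in xs))  # ~0.0625, better on each half
```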

Ethereum Environment

In the Ethereum blockchain, every transaction that updates the world state costs gas based on how many computational steps are required to compute the state transition. This constraint puts pressure on smart contract developers to write efficient code. Onchain storage itself also has an associated cost!

Furthermore, most tokens on the blockchain occupy 256 bits of total storage for each account balance, so balances can be thought of as uint256 values. Fixed-point math libraries are required for accurate pricing on smart-contract-based exchanges. These libraries take the uint256 and recast it as a "wad" value, which assumes there are 18 decimal places in the integer expansion of the uint256. As a result, even the most accurate (or "perfect") approximations onchain are limited to 18 decimal places of precision.

Consequently, it is of great importance to be considerate of the EVM when making approximations onchain. All of the techniques above can be used to make approximations that are simultaneously economical and accurate to nearly $10^{-18}$. To reach full $10^{-18}$ precision, rational approximations would need coefficients stored at higher than 256-bit precision, along with the associated higher-precision operations.

Solstat Implementation

A continued fraction approximation of the Gaussian distribution is performed in Gaussian.sol. Solmate is used for fixed-point operations, alongside a custom library for units called Units.sol. The majority of the logic is located in Gaussian.sol.

First, a collection of constants used for the approximation is defined alongside custom errors. These constants were found using a technique for obtaining a continued fraction approximation of a related function, the gamma function (more specifically, the incomplete gamma function). By changing specific inputs/parameters to the incomplete gamma function, the error function can be obtained, and the error function is only a shift and scaling away from the Gaussian CDF $\Phi(x)$.

Gaussian

The Gaussian contract implements a number of functions important to the Gaussian distribution. Importantly, all of these implementations are for a mean $\mu = 0$ and variance $\sigma^2 = 1$.

These implementations are based on the Numerical Recipes textbook and its C implementation. Numerical Recipes cites the original text by Abramowitz and Stegun, Handbook of Mathematical Functions, which can be read to understand these functions and the implications of their numerical approximations more thoroughly. This implementation is also differentially tested against the JavaScript Gaussian library, which implements the same algorithm.

Cumulative Distribution Function

The implementation of the CDF approximation algorithm takes in a random variable $x$ as a single parameter. The function depends on a helper function, the complementary error function $\operatorname{erfc}$, which has a special symmetry allowing the function to be approximated on half of the domain $\mathbb{R}$:

$$\operatorname{erfc}(-x) = 2 - \operatorname{erfc}(x)$$

It is important to use symmetry when possible!

Furthermore, it has the following additional properties:

$$\operatorname{erfc}(-\infty) = 2, \qquad \operatorname{erfc}(0) = 1, \qquad \operatorname{erfc}(\infty) = 0$$

The reference implementation for the error function can be found on p. 221 of Numerical Recipes in C (2nd edition). This page is a helpful resource.
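For intuition (using Python's floating-point math.erfc rather than the library's fixed-point code), the symmetry can be checked directly, and the CDF recovered through the standard identity $\Phi(x) = \tfrac{1}{2}\operatorname{erfc}(-x/\sqrt{2})$:

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF expressed through the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

x = 1.25
# The symmetry means only half of the real line needs a dedicated approximation.
assert abs(math.erfc(-x) - (2 - math.erfc(x))) < 1e-12
print(norm_cdf(0.0), norm_cdf(1.96))  # ~0.5 and ~0.975
```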

Probability Density Function

The library also supports an approximation of the probability density function (PDF), which is mathematically defined as $Z(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{\frac{-(x - \mu)^2}{2\sigma^2}}$. This implementation has a maximum error bound of $1.2\cdot 10^{-7}$ and can be referenced here. The Gaussian PDF is even, i.e., symmetric about the $y$-axis.

Percent Point Function / Quantile Function

Approximation algorithms for the percent point function (PPF), sometimes known as the inverse CDF or the quantile function, are also implemented. The function is mathematically defined as $D(x) = \mu - \sigma\sqrt{2}\operatorname{ierfc}(2x)$, has a maximum error of $1.2\cdot 10^{-7}$, and depends on the inverse complementary error function $\operatorname{ierfc}$, which is defined by

$$\operatorname{ierfc}(\operatorname{erfc}(x)) = \operatorname{erfc}(\operatorname{ierfc}(x)) = x$$

and has a domain in the interval $0 < x < 2$ along with some unique properties:

$$\operatorname{ierfc}(0) = \infty, \qquad \operatorname{ierfc}(1) = 0, \qquad \operatorname{ierfc}(2) = -\infty$$
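A reference-style sketch of the same formula off-chain, assuming SciPy is available for the inverse complementary error function:

```python
import math
from scipy.special import erfcinv  # assumed dependency for the inverse

def norm_ppf(p: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Quantile function D(p) = mu - sigma * sqrt(2) * ierfc(2p) for 0 < p < 1."""
    return mu - sigma * math.sqrt(2) * erfcinv(2 * p)

print(norm_ppf(0.5))    # 0.0
print(norm_ppf(0.975))  # ~1.96
```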

Invariant

Invariant.sol is a contract used to compute the invariant of the RMM-01 trading function, in which $y$ is computed as:

$$y = K\,\Phi\!\left(\Phi^{-1}(1-x) - \sigma\sqrt{\tau}\right) + k$$

This can be interpreted graphically with the following image:

Notice the need to compute the normal CDF and its inverse. For a more detailed perspective on the trading function, take a look at the RMM-01 whitepaper.
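For a concrete, off-chain sense of the curve, here is the trading function evaluated with SciPy's normal CDF and quantile function; the parameter values are purely illustrative and not taken from any deployment.

```python
from scipy.stats import norm  # assumed dependency; norm.cdf is Phi, norm.ppf is Phi^{-1}

def rmm01_y(x: float, K: float, sigma: float, tau: float, k: float = 0.0) -> float:
    """y = K * Phi(Phi^{-1}(1 - x) - sigma * sqrt(tau)) + k."""
    return K * norm.cdf(norm.ppf(1 - x) - sigma * tau**0.5) + k

# Illustrative parameters only: strike 2000, 80% implied volatility, 30 days to expiry.
print(rmm01_y(x=0.4, K=2000.0, sigma=0.8, tau=30 / 365))
```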

Solstat Versions

Solstat is one of Primitive's first contributions to improving the libraries available in the Ethereum ecosystem. Future improvements and continued maintenance are planned as new techniques emerge.

Differential Testing

Differential testing with Foundry was critical to the development of Solstat. A popular technique, differential testing seeds inputs to different implementations of the same application and detects differences in their execution. Differential testing is an excellent complement to traditional software testing, as it is well suited to detect semantic errors. This library used differential testing against the JavaScript Gaussian library to detect anomalies and various bugs. Because of differential testing, we can be confident in the correctness and performance of the library.
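The concept can be sketched in a few lines of Python; the real tests are Foundry tests comparing solstat's fixed-point outputs against the JavaScript library, whereas this toy version compares two floating-point CDF implementations over random inputs.

```python
import math
import random

def cdf_reference(x: float) -> float:
    """Trusted reference: standard normal CDF via erfc."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def cdf_under_test(x: float) -> float:
    """Stand-in for the implementation being checked (a deliberately coarse logistic fit)."""
    return 1.0 / (1.0 + math.exp(-1.702 * x))

random.seed(0)
worst = 0.0
for _ in range(10_000):
    x = random.uniform(-6.0, 6.0)
    worst = max(worst, abs(cdf_reference(x) - cdf_under_test(x)))
print(f"worst disagreement over 10,000 random inputs: {worst:.2e}")  # flags semantic drift
```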