How to distribute points as evenly as possible on the surface of a sphere is an incredibly important problem in maths, science and computing, and mapping the Fibonacci lattice onto the surface of a sphere via an equal-area projection is an extremely fast and effective approximate method of achieving it. I show that with only minor modifications it can be made even better.

# Introduction

The problem of how to evenly distribute points on a sphere has a very long history and is one of the most studied problems in the mathematical literature on spherical geometry. It is of critical importance in many areas of mathematics, physics and chemistry, including numerical analysis, approximation theory, coding theory, crystallography, electrostatics, computer graphics and viral morphology, to name just a few.

Unfortunately, with the exception of a few special cases (namely the Platonic solids), it is not possible to distribute points exactly evenly on the sphere. Furthermore, the solution to this problem depends critically on the criterion used to judge uniformity. Many criteria are in use, including:

• Packing and covering
• Convex hulls, Voronoi cells and Delaunay triangles
• Riesz $s$-energy kernels
• Cubature and determinants

Repeating this point, as it is crucial: there is usually no single optimal solution to this question, because a distribution that is optimal by one criterion is often not optimal by another. For example, in this post we will find that optimising for packing does not necessarily produce an optimal convex hull, and vice versa.

For the sake of brevity, this post focuses on just two of these: the minimum packing distance, and the convex hull / Delaunay mesh measures (volume and surface area).

Section 1 shows how we can modify the canonical Fibonacci lattice to consistently produce a better packing distribution.

Section 2 shows how we can modify the canonical Fibonacci lattice to produce better convex hull measures (volume and surface area).

# Section 1. Optimizing Packing Distance

This is often referred to as “Tammes’s problem” due to the botanist Tammes, who searched for an explanation of the surface structure of pollen grains. The packing criterion asks us to maximize the smallest neighboring distance among the $N$ points. That is,

$$d_N = \min_{i \neq j} \| x_i - x_j \|$$

This value decreases at a rate $\sim 1/\sqrt{N}$, so it is useful to define the normalized distance, and its asymptotic limit, as

$$d^*_N = \sqrt{N} d_N ,\quad \quad d^* = \lim_{N \rightarrow \infty} d^*_N$$
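These quantities are straightforward to compute directly. A minimal NumPy sketch (the function names are my own; the $O(N^2)$ broadcast is fine for the modest $N$ considered in this post):

```python
import numpy as np

def min_pairwise_distance(points):
    """d_N: the smallest Euclidean distance between any two distinct points.

    points: (N, 3) array of unit vectors on the sphere.
    """
    # All pairwise difference vectors via broadcasting, then squared norms.
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff * diff, axis=-1)
    # Exclude self-distances before taking the minimum.
    np.fill_diagonal(d2, np.inf)
    return np.sqrt(d2.min())

def normalized_distance(points):
    """d*_N = sqrt(N) * d_N, which approaches a constant d* as N grows."""
    return np.sqrt(len(points)) * min_pairwise_distance(points)
```

As a sanity check, the four vertices of a regular tetrahedron inscribed in the unit sphere are all $\sqrt{8/3}$ apart.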

## The Fibonacci Lattice

One very elegant solution is modeled on arrangements appearing in nature, such as the seed distribution on the head of a sunflower or a pine cone, a phenomenon known as spiral phyllotaxis. Coxeter demonstrated that these arrangements are fundamentally related to the Fibonacci sequence, $F_k =\{1, 1, 2, 3, 5, 8, 13, \ldots\}$, and the golden ratio $\phi = (1+\sqrt{5})/2$.

There are two similar definitions of the spherical Fibonacci lattice point set in the literature. The original is strictly defined only for $N$ equal to one of the terms of the Fibonacci sequence, $F_m$, and is very well studied in number theory:

$$t_i = \left( \frac{i}{F_m}, \frac{i F_{m-1}}{F_m} \right) \quad \textrm{for }\; 0 \leq i \leq N-1$$

The second generalises this to arbitrary $N$, and is used more frequently in computing:

$$t_i = \left( \frac{i}{N}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{1}$$

where

$$\phi = \frac{1+\sqrt{5}}{2} = \lim_{n \rightarrow \infty} \left( \frac{F_{n+1}}{F_n} \right)$$

These point sets map to the unit square $[0, 1]^2$, which can then be mapped to the sphere by the cylindrical equal-area projection:

$$(x,y) \rightarrow (\theta, \phi) : \quad \left( \cos^{-1}(2x-1) - \pi/2, \; 2\pi y \right)$$

$$(\theta,\phi) \rightarrow (x,y,z) : \quad \left (\cos\theta \cos\phi, \cos \theta \sin \phi, \sin \theta \right)$$

You can find numerous versions of Python code for this at “Evenly distributing points on a sphere”.
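For reference, here is a self-contained NumPy sketch of the canonical construction: Equation 1 followed by the two projection steps above (the function name is illustrative, not from any particular library):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def fibonacci_sphere(n):
    """Map the canonical Fibonacci lattice (Eq. 1) onto the unit sphere
    via the cylindrical equal-area projection."""
    i = np.arange(n)
    x = i / n            # first lattice coordinate in [0, 1)
    y = i / PHI          # second coordinate; only its fractional part matters
    theta = np.arccos(2 * x - 1) - np.pi / 2   # latitude in [-pi/2, pi/2]
    lon = 2 * np.pi * y                        # longitude
    return np.stack([np.cos(theta) * np.cos(lon),
                     np.cos(theta) * np.sin(lon),
                     np.sin(theta)], axis=-1)
```

Every returned point lies exactly on the unit sphere, since the projection only chooses angles.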

Even though spherical Fibonacci point sets are not the globally best distribution of samples on a sphere (their solutions do not coincide with the Platonic solids for $N=4, 6, 8, 12, 20$), they yield excellent sampling properties and are extremely simple to construct, in contrast to other more sophisticated spherical sampling schemes.

As the mapping from the unit square to the surface of the sphere is done via an area-preserving projection, one can expect that if the original points are evenly distributed, they will also be quite evenly distributed on the sphere’s surface. However, this does not mean the result is provably optimal.

This mapping suffers from two distinct but inter-related problems.

The first is that this mapping is area-preserving, not distance-preserving. Given that our objective is to maximize the minimum pairwise distance between points, there is no guarantee that such distance-based constraints and relationships will hold after the projection.

The second, and from a pragmatic point of view possibly the trickiest to resolve, is that the mapping has a singularity at each pole. Consider two points very close to a pole but 180 degrees apart in longitude. On the unit square (and also on the cylindrical projection) they correspond to two points that are quite distant from each other, yet when mapped to the surface of the sphere they can be joined by a very short great-circle arc passing over the pole. It is this particular issue that makes many of the spiral mappings sub-optimal.

The spherical Fibonacci spiral produced from equation 1 results in a value of $d^*_N \simeq 2$ for all $N$, and so $d^* = 2$.

## Lattice 1

The more common version, especially in computing, which produces a better $d^*=3.09$, is:

$$t_i = \left( \frac{i+1/2}{N}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{2}$$

It places the points at the midpoints of the intervals (aka the midpoint rule in Gaussian quadrature), and so for $n=100$ the values of the first coordinate would simply be:

$$\{ \frac{0.5}{100},\frac{1.5}{100},\frac{2.5}{100},\ldots, \frac{97.5}{100},\frac{98.5}{100},\frac{99.5}{100} \}$$

## Lattice 2

A key insight for further improving on Equation 2 is to realize that $d^*_N$ always corresponds to the distance between the points $t_0$ and $t_3$, which are at the poles. Thus, to improve $d_N$, the points near the poles should be positioned farther apart.

If we define the following distribution:

$$t_i(\varepsilon) = \left( \frac{i+1/2+ \varepsilon}{N+2\varepsilon}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1$$

The $d^*_N$-curves for various values of $\varepsilon$: $\varepsilon=0$ (blue); $\varepsilon=\frac{1}{2}$ (orange); $\varepsilon=\frac{3}{2}$ (green); and $\varepsilon=\frac{5}{2}$ (red), show that $\varepsilon = \frac{5}{2}$ produces results close to the asymptotic limit. That is, for $N>20$, compared to the canonical spherical Fibonacci lattice, the following simple expression produces substantially better results, with $d^* = 3.29$:

$$t_i = \left( \frac{i+3}{N+5}, \frac{i}{\phi} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{3}$$

Thus for $n=100$, the values of the first coordinate would simply be:

$$\{ \frac{3}{105},\frac{4}{105},\frac{5}{105},\ldots, \frac{100}{105},\frac{101}{105},\frac{102}{105} \}$$
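In code, the only change from the canonical construction is the first coordinate. A sketch of Lattice 2 (Equation 3; the function name is illustrative):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def lattice2_sphere(n):
    """Offset Fibonacci lattice (Eq. 3): first coordinate (i + 3) / (n + 5),
    i.e. epsilon = 5/2, mapped to the sphere by the equal-area projection."""
    i = np.arange(n)
    x = (i + 3) / (n + 5)                      # stays strictly inside (0, 1)
    theta = np.arccos(2 * x - 1) - np.pi / 2   # latitude
    lon = 2 * np.pi * i / PHI                  # longitude
    return np.stack([np.cos(theta) * np.cos(lon),
                     np.cos(theta) * np.sin(lon),
                     np.sin(theta)], axis=-1)
```

Because the first coordinate never reaches 0 or 1, no point lands exactly on a pole.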

## Lattice 3

As stated earlier, one of the greatest challenges of distributing points evenly on the sphere is that the optimal distribution depends critically on which objective function you use. It turns out that local measures such as $d_N^*$ are at times very unforgiving, inasmuch as a single point in a suboptimal position can catastrophically reduce the measure of the entire point distribution.

In our case, regardless of how large $N$ is, $d_N^*$ is typically determined by the four points closest to each pole, especially $t_0$ and $t_3$. However, it is also known that for this lattice the largest Voronoi polygon is at the pole. Thus, trying to maximize $d_N$ by separating the initial polar points in the sequence actually makes the void at the pole even larger! We therefore present an alternative to lattice 2 which is generally preferable, as it does not exhibit such a large void near the poles.

It is almost identical to lattice 2, but with two differences. Firstly, it uses $\varepsilon = 11/2$ for $1 \leq i \leq N-2$. Secondly, in addition to these $N-2$ points, the first and final points are placed at the poles. That is,

$$t_0=(0,0); \; t_{N-1} = (1,0); \quad t_i = \left( \frac{i+6}{N+11}, \frac{i}{\phi} \right) \quad \textrm{for }\; 1 \leq i \leq N-2 \tag{4}$$

The amazing thing about this construction is that, although it was motivated by the desire to minimize the gap at the poles, it actually has the best $d_N$ and $d^*$ values of all the methods presented here, with $d^* = 3.31$!

Thus for $n=100$, the values of the first coordinate would simply be:

$$\{ 0; \; \frac{7}{111},\frac{8}{111},\frac{9}{111},\ldots, \frac{102}{111},\frac{103}{111},\frac{104}{111} ; \; 1\}$$
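A sketch of Lattice 3 (Equation 4). It uses the handy fact that under this equal-area projection the $z$-coordinate is simply $1-2x$, so the two polar points can be appended explicitly (the function name is illustrative):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def lattice3_sphere(n):
    """Lattice 3 (Eq. 4): one point at each pole, plus n - 2 points on the
    offset lattice (i + 6) / (n + 11), i.e. epsilon = 11/2."""
    i = np.arange(1, n - 1)
    x = (i + 6) / (n + 11)
    z = 1 - 2 * x                  # equal-area projection: z is linear in x
    r = np.sqrt(1 - z * z)         # radius of the latitude circle
    lon = 2 * np.pi * i / PHI
    interior = np.stack([r * np.cos(lon), r * np.sin(lon), z], axis=-1)
    north = np.array([[0.0, 0.0, 1.0]])   # t_0 = (0, 0)
    south = np.array([[0.0, 0.0, -1.0]])  # t_{n-1} = (1, 0)
    return np.concatenate([north, interior, south])
```

For $n=100$ the first interior point has $x = 7/111$, hence $z = 97/111$, matching the list above.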

## Comparison

For large $N$, this value of $d^*$ compares extremely well with other methods, such as geodesic domes, which are based on triangulated projections from the faces of Platonic solids to the surface of the sphere. Not surprisingly, the best quality geodesic domes are those based on the icosahedron or the dodecahedron.

For various $N$-point geodesic domes based on the icosahedron:

\begin{array}{|c|cccccccccc|} \hline
N & 12 & 42 & 92 & 162 & 252& 362 & 492 & 642 & 812 & 1002  \\ \hline
d^*  & 3.64 & 3.54 & 3.34 & 3.22 & 3.15 & 3.09 & 3.06 & 3.03 & 3.00 & 2.99 \\ \hline
\end{array}

And for various $N$-point geodesic domes based on the dodecahedron:

\begin{array}{|c|ccccccc|} \hline
N & 20 & 32 & 122 & 272 & 482 & 752 & 1082\\ \hline
d^* & 3.19 & 3.63 & 3.16 & 2.99 & 2.90 & 2.84 & 2.81 \\ \hline
\end{array}

Also, the truncated icosahedron, which is the shape of a $C_{60}$ Buckminsterfullerene, corresponds to only $d^* = 3.125$.

Thus for $N>100$, lattices based on Eqn 3 are better than any geodesic polyhedra.

As per the first reference, some of the state-of-the-art methods, which are typically complex and require recursive and/or dynamic programming, compare as follows.

\begin{array}{|lr|} \hline
\text{Lattice 1} & 3.09\\ \hline
\text{Max Determinant} & 3.19 \\ \hline
\text{Lattice 2} & 3.28 \\ \hline
\text{Lattice 3} & 3.31 \\ \hline
\text{Zonal Equal Area} & 3.32 \\ \hline
\text{Coulomb} & 3.37 \\ \hline
\text{Log Energy} & 3.37\\ \hline
\end{array}

## Section 1 Summary

Lattice 3, as per equation 4, is a modification of the canonical Fibonacci lattice that produces a significantly better packing distribution. That is,

$$t_0 = (0,0); \; t_{N-1} = (1,0); \; \; t_i = \left( \frac{i+6}{N+11}, \frac{i}{\phi} \right) \quad \textrm{for }\; 1 \leq i \leq N-2$$

# Section 2. Optimising the Convex hull (Delaunay mesh)

Although the previous section optimized for $d^*_N$, these modifications actually make other measures, such as the volume of the convex hull (Delaunay mesh), worse. This section shows how to evenly distribute points on a sphere in a manner that optimizes (maximizes) a more global measure: the volume of the convex hull.

Let us define $C_N$ as the convex hull of the $N$ points, and

$$\epsilon_N = N \left( \frac{4\pi }{3} - \textrm{Vol}(C_N) \right)$$

where the normalization factor $N$ is included because the absolute discrepancy decreases at a rate $\sim 1/N$.

The behavior of $\epsilon_N$ for varying $N$ can be seen in Figure 3 (blue).
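This discrepancy is easy to measure with `scipy.spatial.ConvexHull`, whose `volume` attribute gives $\textrm{Vol}(C_N)$ directly. A minimal sketch (the function name is my own):

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_discrepancy(points):
    """epsilon_N = N * (4*pi/3 - Vol(C_N)) for N points on the unit sphere.

    The hull of points on the sphere is always strictly inside the ball,
    so the raw discrepancy is positive; the factor N compensates for its
    O(1/N) decay.
    """
    hull = ConvexHull(points)
    return len(points) * (4 * np.pi / 3 - hull.volume)
```

As a quick check, the regular octahedron with vertices at $(\pm 1, 0, 0)$, $(0, \pm 1, 0)$, $(0, 0, \pm 1)$ has hull volume exactly $4/3$.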

The key to improving the volume discrepancy is to note that, although the use of the golden ratio $\phi$ intuitively makes sense as $N \rightarrow \infty$, it does not necessarily follow that it is the best value for finite $N$. In scientific terminology, we could say that we need to consider finite-term correction effects.

Thus, let us generalize equation 1 as follows:

$$t_i = \left( \frac{i+1/2}{N}, \frac{i}{g(n)} \right) \quad \textrm{for }\; 0 \leq i \leq N-1 \tag{4}$$

First, we define an auxiliary value,

$$k = \left\lfloor \log_{\phi}\left(\frac{n}{1.5}\right) \right\rfloor = \left\lfloor \frac{\ln (n/1.5)}{\ln \phi } \right\rfloor$$

where $\lfloor x \rfloor$ is the floor function.

Now define $g(n)$ as follows:

$$g(n) = \begin{cases} 3-\phi, & \text{if } k \text{ is even} \\ \phi, & \text{if } k \text{ is odd} \end{cases} \tag{5}$$

Figure 3 shows that this substantially improves the volume discrepancy for half the values of $N$.

The underlying reason why this works is based on the less well-known fact that every number $x$ obtained from $\phi$ by the following Möbius transformation is equivalent to $\phi$ in terms of irrationality:

$$x = \frac{a\phi+b}{c\phi+d}, \quad \textrm{for all integers} \; a,b,c,d \; \textrm{such that } |ad-bc|=1$$

And thus the reason why $\phi$ and $3-\phi$ work together so well is that

$$\frac{1}{\phi} = \frac{\phi+1}{2\phi+1}, \quad \quad \frac{1}{3-\phi} = \frac{2\phi+1}{3\phi+1}$$

both of which are of the above form with $|ad-bc|=1$.

For the remaining half, we first define an auxiliary sequence $A_n$ that is a variant of the Fibonacci sequence:

$$A_1 =1, \; A_2 = 4; \; A_{n+2}= A_{n+1}+A_n \quad \textrm{for } n = 1,2,3,\ldots$$

That is,

$$A_N = 1,4,5,9,14,23,37,60,97,157,254,411,…$$

The convergents of this sequence all have elegant continued fractions, and in the limit they converge to $\phi$. For example,

$$A_6/A_5 = \frac{23}{14} = 1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{4}}}}$$

We now fully generalize $g(n)$ as follows:

$$g(n) = \begin{cases} 3-\phi, & \text{if } k \text{ is even} \\ A_{j+1}/A_j, & \text{if } k \text{ is odd, where } j=(k+7)/2 \end{cases} \tag{6}$$

The following table is a summary of the value of $g(n)$ for various $n$.

\begin{array}{|c|ccccccccc|} \hline
N & 4-6 & 7-10 & 11-16& 17-26& 27-43& 44-70& 71-114 & 115-184 & 185-300\\ \hline
g(n)  &3-\phi & \frac{23}{14} & 3-\phi & \frac{37}{23} & 3-\phi & \frac{60}{37} & 3-\phi & \frac{97}{60} & 3-\phi  \\ \hline
\end{array}
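Equation 6 and the auxiliary definitions translate into a few lines of code. A sketch (the function name `g` mirrors the text, and it assumes Equations 5 and 6 exactly as written, including $j=(k+7)/2$):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def g(n):
    """Rotation constant of Eq. 6: 3 - phi when k is even, otherwise the
    convergent A_{j+1}/A_j with j = (k + 7)/2."""
    k = math.floor(math.log(n / 1.5) / math.log(PHI))
    if k % 2 == 0:
        return 3 - PHI
    j = (k + 7) // 2
    # A_1 = 1, A_2 = 4, A_{m+2} = A_{m+1} + A_m
    a, b = 1, 4
    for _ in range(j - 1):
        a, b = b, a + b
    return b / a
```

The values agree with the table above, e.g. $g(7) = 23/14$ and $g(44) = 60/37$.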

Figure 4 shows that, in relation to convex hull volume, this new distribution is better than the canonical lattice for all values of $n$.

Figure 4. The discrepancy between the volume of the convex hull of points and the volume of a unit sphere. Note that smaller is better. This shows that the newly proposed method (green) produces consistently better distribution than the canonical Fibonacci lattice (blue).

Of interest, this distribution also slightly but consistently reduces the discrepancy between the surface area of the convex hull and the surface area of the unit sphere. This is shown in figure 6.

## Section 2 Summary

The lattice as per equation 6 is a modification of the canonical Fibonacci lattice that produces a significantly better point distribution, as measured by the volume and surface area of the convex hull (Delaunay mesh).

## Key References

• “A Comparison of Popular Point Configurations on $S^2$”: https://arxiv.org/pdf/1607.04590.pdf
• http://web.archive.org/web/20120421191837/http://www.cgafaq.info/wiki/Evenly_distributed_points_on_sphere
• https://perswww.kuleuven.be/~u0017946/publications/Papers97/art97a-Saff-Kuijlaars-MI/Saff-Kuijlaars-MathIntel97.pdf
• https://maths-people.anu.edu.au/~leopardi/Leopardi-Sphere-PhD-Thesis.pdf
• https://www.irit.fr/~David.Vanderhaeghe/M2IGAI-CO/2016-g1/docs/spherical_fibonacci_mapping.pdf
• https://arxiv.org/pdf/1407.8282.pdf
• https://maths-people.anu.edu.au/~leopardi/Macquarie-sphere-talk.pdf

My name is Martin Roberts. I have a PhD in theoretical physics. I love maths and computing. I am open to new opportunities – consulting, contract or employment – as a data scientist, so let’s have a chat about how we can work together! Contact details can be found here.

# Comments

### Alan Wolfe

This is great stuff, thank you for sharing!

Something I think worth pointing out is that this “The packing criterion asks us to maximize the smallest neighboring distance among the points” is exactly what Mitchell’s best candidate algorithm does, so this seems to be blue noise distributed sampling points on a sphere, but without the need to do point generation in advance.

However, since it’s a low discrepancy sequence (golden ratio!) and not strictly blue noise, I bet it has (integration etc!) properties more like LDS, and less like blue noise. AFAIK blue noise is nice for visual patterns (perceptual error), but LDS is better for integration in general.

As a graphics person, I think it would be neat to explore the differences visually, for path tracing, calculating ambient occlusion, etc! 🙂