Simplex noise

David O'Rourke edited this page Dec 11, 2019 · 5 revisions

2D simplex noise usually starts with a skew along the xy axis. Given the intended isotropy of the noise, it should make no difference that this assumes a simplex grid with no principal axes in common with the resulting square grid.

However, for my limited simplex space it would be easier to start with a nominally square grid and treat it as a simplex grid for only some parts of the algorithm.

Therefore, instead of the standard skew transform, which adds to both x and y the quantity (sqrt(3) - 1)/2 * (x + y), we can transform from a simplex grid aligned on the square grid's x axis:

x => x + y/sqrt(3)
y => 2y/sqrt(3)

This can be computationally achieved as

y = y/sqrt(3);
x += y;
y += y;
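As a minimal sketch, the three lines above can be wrapped in a function (the function name is mine, not from any particular implementation):

```c
#include <math.h>

/* Skew a point from the simplex (equilateral triangle) grid into the
 * square grid: x -> x + y/sqrt(3), y -> 2y/sqrt(3). */
void simplex_to_square(double *x, double *y)
{
    const double inv_sqrt3 = 1.0 / sqrt(3.0); /* pre-calculate in real code */
    double t = *y * inv_sqrt3;
    *x += t;     /* x + y/sqrt(3) */
    *y = t + t;  /* 2y/sqrt(3)    */
}
```

As a check, the apex of a unit equilateral triangle at (1/2, sqrt(3)/2) should map to the square-grid corner (1, 1).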

A single line of square cells (bounded by two parallel rows of nodes) can thus be mapped from a single line of equilateral triangles of unit side length.

The actual altitude of the simplex is sqrt(3)/2, and the transform will map it to 1. But this is not particularly relevant to the application in mind, because the scale of the y dimension is not apparent in the resulting output noise.

The corresponding inverse transform would be:

y = y/2;
x -= y;
y *= sqrt(3);

Obviously sqrt(3) and its inverse should be pre-calculated.
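The inverse transform can be sketched the same way, with the constant pre-calculated (again, the function name is my own):

```c
/* Unskew a point from the square grid back to the simplex grid:
 * x -> x - y/2, y -> y * sqrt(3)/2. */
void square_to_simplex(double *x, double *y)
{
    const double sqrt3 = 1.7320508075688772; /* pre-calculated sqrt(3) */
    *y *= 0.5;
    *x -= *y;
    *y *= sqrt3;
}
```

Applying this to the square-grid corner (1, 1) should recover the triangle apex (1/2, sqrt(3)/2).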

Work with a path through the square grid. Calculate the distance vectors between the point of interest on the path and the 4 surrounding corners. These distance vectors can then be inverse-transformed to take them to the simplex grid, allowing the ease function (if relevant) and the gradient dot products to be calculated. Comparison of the non-integer parts of the untransformed point coordinates determines which simplex is relevant, and therefore which 3 of the 4 contributions to include.

The above assumes that we are using SSE, where the cost of calculating 4 distance vectors and dot products instead of 3 is negligible; the simplex dissection can therefore be done at the end. If not using SSE, do the simplex dissection at the beginning.

In fact, since the contribution of any node should drop to zero before reaching the edge of any of its adjacent simplices, we can avoid the dissection altogether. For a point in the upper simplex of the nominal square, the contribution of the furthest node of the lower simplex is zero (strictly, less than zero but clamped to zero), so it simply washes away.

Contribution tends to zero

The algorithm requires that the contribution of a kernel to the final summation drops to zero before the kernel crosses the boundary into another simplex. The classic algorithm uses a term based on this:

t0 = 0.5 - (x * x + y * y)

This is clamped so that it is never less than zero. The key here is that x * x + y * y is the square of the magnitude of the distance vector. At the closest point, the simplex boundary is sqrt(3)/2 from the opposite vertex (the center of the kernel), so the square of the magnitude is 3/4. If we want to clamp this appropriately, our modified term becomes:

t0 = 0.75 - (x * x + y * y)

We should regard its maximum value as 0.75, and since the square of the square of this (i.e. the fourth power) will be multiplied by the dot product of the displacement vector and the gradient vector, the final scaling factor will be considerably smaller than the 70 found in the original algorithm.
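The boundary claim can be checked numerically with a minimal sketch of the modified falloff term (function name is mine): at the simplex boundary the distance from the opposite vertex is sqrt(3)/2, so the term is exactly zero there, and its maximum at the vertex itself is 0.75.

```c
#include <math.h>

/* Modified falloff term: max(0.75 - |d|^2, 0).  At the simplex
 * boundary |d| = sqrt(3)/2, so |d|^2 = 3/4 and the term vanishes. */
double falloff(double dx, double dy)
{
    double t = 0.75 - (dx * dx + dy * dy);
    return t > 0.0 ? t : 0.0;
}
```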

I'm currently unsure of the appropriate adjustment; I think it is either a factor of 5 or of 3.375.