User:Soul windsurfer/sandbox

geometric plane models edit

  • hyperbolic non-Euclidean geometry [1]
    • the Klein disc (its lines are straight chords rather than circular arcs; the parallel axiom is easier to understand on the Klein disc)
    • Poincaré disc
    • the upper half-plane model

The function K(z), where z is a complex number on the Klein disc, converts a point on the Klein disc to the corresponding point on the Poincaré disc.
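A minimal sketch of such a conversion, assuming the standard relation p = k / (1 + √(1 − |k|²)) between Beltrami–Klein and Poincaré disc coordinates; the function names are illustrative:

import math

def klein_to_poincare(k: complex) -> complex:
    # Map a point of the Beltrami-Klein disc (|k| < 1) to the Poincare disc.
    return k / (1 + math.sqrt(1 - abs(k) ** 2))

def poincare_to_klein(p: complex) -> complex:
    # Inverse map: a Poincare disc point (|p| < 1) back to the Klein disc.
    return 2 * p / (1 + abs(p) ** 2)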

 

Algorithms and methods of plotting the Mandelbrot set on a computing device edit


 
Still image of a movie of increasing magnification on 0.001643721971153 − 0.822467633298876i
 
Still image of an animation of increasing magnification

There are many programs and algorithms used to plot the Mandelbrot set and other fractals, some of which are described in fractal-generating software. These programs use a variety of algorithms to determine the color of individual pixels efficiently.

Escape time algorithm edit

The simplest algorithm for generating a representation of the Mandelbrot set is known as the "escape time" algorithm. A repeating calculation is performed for each x, y point in the plot area and based on the behavior of that calculation, a color is chosen for that pixel.

Unoptimized naïve escape time algorithm edit

In both the unoptimized and optimized escape time algorithms, the x and y locations of each point are used as starting values in a repeating, or iterating, calculation (described in detail below). The result of each iteration is used as the starting values for the next. The values are checked during each iteration to see whether they have reached a critical "escape" condition, or "bailout". If that condition is reached, the calculation is stopped, the pixel is drawn, and the next x, y point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very close to but not in the set, it may take hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The programmer or user must choose how many iterations, or how much "depth", they wish to examine. The higher the maximal number of iterations, the more detail and subtlety emerge in the final image, but the longer it will take to calculate the fractal image.

Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient exceeds 2. A more computationally complex method that detects escapes sooner is to compute the distance from the origin using the Pythagorean theorem, i.e., to determine the absolute value, or modulus, of the complex number. If this value exceeds 2, or equivalently, when the sum of the squares of the real and imaginary parts exceeds 4, the point has reached escape. More computationally intensive rendering variations include the Buddhabrot method, which finds escaping points and plots their iterated coordinates.

The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors are used for points that escape. This gives a visual representation of how many cycles were required before reaching the escape condition.

To render such an image, the region of the complex plane we are considering is subdivided into a certain number of pixels. To color any such pixel, let c be the midpoint of that pixel. We now iterate the critical point 0 under P_c(z) = z² + c, checking at each step whether the orbit point has modulus larger than 2. When this is the case, we know that c does not belong to the Mandelbrot set, and we color our pixel according to the number of iterations used to find out. Otherwise, we keep iterating up to a fixed number of steps, after which we decide that our parameter is "probably" in the Mandelbrot set, or at least very close to it, and color the pixel black.

In pseudocode, this algorithm would look as follows. The algorithm does not use complex numbers and manually simulates complex-number operations using two real numbers, for those who do not have a complex data type. The program may be simplified if the programming language includes complex-data-type operations.

for each pixel (Px, Py) on the screen do
    x0 := scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.00, 0.47))
    y0 := scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1.12, 1.12))
    x := 0.0
    y := 0.0
    iteration := 0
    max_iteration := 1000
    while (x*x + y*y ≤ 2*2 AND iteration < max_iteration) do
        xtemp := x*x - y*y + x0
        y := 2*x*y + y0
        x := xtemp
        iteration := iteration + 1
 
    color := palette[iteration]
    plot(Px, Py, color)

Here, relating the pseudocode to c, z and P_c:

  • z = x + iy
  • z² = x² + i2xy − y²
  • c = x0 + iy0

and so, as can be seen in the pseudocode in the computation of x and y:

  • x = Re(z² + c) = x² − y² + x0 and y = Im(z² + c) = 2xy + y0

To get colorful images of the set, the assignment of a color to each value of the number of executed iterations can be made using one of a variety of functions (linear, exponential, etc.). One practical way, without slowing down calculations, is to use the number of executed iterations as an entry to a palette initialized at startup. If the color table has, for instance, 500 entries, then the color selection is n mod 500, where n is the number of iterations.

Optimized escape time algorithms edit

The code in the previous section uses an unoptimized inner while loop for clarity. In the unoptimized version, one must perform five multiplications per iteration. To reduce the number of multiplications the following code for the inner while loop may be used instead:

x2:= 0
y2:= 0
w:= 0

while (x2 + y2 ≤ 4 and iteration < max_iteration) do
    x:= x2 - y2 + x0
    y:= w - x2 - y2 + y0
    x2:= x * x
    y2:= y * y
    w:= (x + y) * (x + y)
    iteration:= iteration + 1

The above code works via some algebraic simplification of the complex multiplication:

(x + iy)² = x² − y² + i·2xy,  with  2xy = (x + y)² − x² − y²

Using the above identity, the number of multiplications can be reduced to three instead of five.

The above inner while loop can be further optimized by expanding w to

w = (x + y)² = x² + 2xy + y²

Substituting w into y := w - x2 - y2 + y0 yields y := 2xy + y0 and hence calculating w is no longer needed.

The further optimized pseudocode for the above is:

x2:= 0
y2:= 0

while (x2 + y2 ≤ 4 and iteration < max_iteration) do
    y:= 2 * x * y + y0
    x:= x2 - y2 + x0
    x2:= x * x
    y2:= y * y
    iteration:= iteration + 1

Note that in the above pseudocode, 2*x*y seems to increase the number of multiplications by 1, but since 2 is the multiplier the code can be optimized via (x + x) * y.

Derivative Bailout or "derbail" edit

 
An example of the fine detail possible with the usage of derbail, rendered with 1024 samples

It is common to check the magnitude of z after every iteration, but there is another method we can use that can converge faster and reveal structure within Julia sets.

Instead of checking if the magnitude of z after every iteration is larger than a given value, we can instead check if the sum of each derivative of z up to the current iteration step is larger than a given bailout value:

| Σ_{k=0}^{n} dz_k/dc |² ≥ dbail

Very large dbail values can enhance the detail of the structures revealed by this method.

It is possible to find the derivatives automatically by leveraging automatic differentiation and computing the iterations using dual numbers.
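A minimal sketch of that idea, assuming a hand-rolled dual-number type; the names Dual and derivative_orbit are illustrative and not taken from any particular library:

from dataclasses import dataclass

@dataclass
class Dual:
    # Dual number val + der*eps with complex components, where eps**2 = 0.
    val: complex
    der: complex

    def __add__(self, other):
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        # (a + b*eps) * (c + d*eps) = a*c + (a*d + b*c)*eps
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

def derivative_orbit(c: complex, limit: int = 1024):
    cd = Dual(c, 1)   # differentiate with respect to c, so c carries derivative 1
    z = Dual(0, 0)    # the orbit starts at the critical point 0
    for _ in range(limit):
        z = z * z + cd          # dual arithmetic carries dz/dc along automatically
        yield z.val, z.der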

 
Hole caused by precision issues

Rendering fractals with the derbail technique can often require a large number of samples per pixel, as precision issues can introduce spurious fine detail and result in noisy images even with samples in the hundreds or thousands.

Python code:

 
Derbail used on a Julia set of the Burning Ship
def pixel(px: int, py: int, w: int, h: int) -> int:
    def magn(a, b):
        return a * a + b * b

    dbail = 1e6
    ratio = w / h

    # Map the pixel coordinates (px, py) to the point c = x0 + i*y0.
    x0 = (((2 * px) / w) - 1) * ratio
    y0 = ((2 * py) / h) - 1

    # Orbit z = x + i*y, starting at the critical point 0.
    x, y = 0.0, 0.0
    # Current derivative dz/dc = dx + i*dy.
    dx, dy = 0.0, 0.0
    # Running sum of the derivatives.
    dx_sum, dy_sum = 0.0, 0.0

    iters = 0
    limit = 1024

    while magn(dx_sum, dy_sum) < dbail and iters < limit:
        # Derivative recurrence dz_(n+1)/dc = 2 * z_n * dz_n/dc + 1.
        dx, dy = (dx * x - dy * y) * 2 + 1, (dy * x + dx * y) * 2

        # Mandelbrot iteration z_(n+1) = z_n^2 + c.
        xtemp = x * x - y * y + x0
        y = 2 * x * y + y0
        x = xtemp

        # Accumulate the derivative into the running sum used for bailout.
        dx_sum += dx
        dy_sum += dy

        iters += 1

    return iters

Coloring algorithms edit

In addition to plotting the set, a variety of algorithms have been developed to

  • efficiently color the set in an aesthetically pleasing way
  • show structures of the data (scientific visualisation)

Histogram coloring edit

A more complex coloring method involves using a histogram which pairs each pixel with said pixel's maximum iteration count before escape/bailout. This method will equally distribute colors to the same overall area, and, importantly, is independent of the maximum number of iterations chosen.[2]

This algorithm has four passes. The first pass involves calculating the iteration counts associated with each pixel (but without any pixels being plotted). These are stored in an array: IterationCounts[x][y], where x and y are the x and y coordinates of said pixel on the screen respectively.

 
 
 
 
 
 
The top row is a series of plots using the escape time algorithm for 10000, 1000 and 100 maximum iterations per pixel respectively. The bottom row uses the same maximum iteration values but utilizes the histogram coloring method. Notice how little the coloring changes per different maximum iteration counts for the histogram coloring method plots.

The first step of the second pass is to create an array of size n, which is the maximum iteration count: NumIterationsPerPixel. Next, one must iterate over the array of pixel-iteration count pairs, IterationCounts[][], and retrieve each pixel's saved iteration count, i, via e.g. i = IterationCounts[x][y]. After each pixel's iteration count i is retrieved, it is necessary to index the NumIterationsPerPixel by i and increment the indexed value (which is initially zero) -- e.g. NumIterationsPerPixel[i] = NumIterationsPerPixel[i] + 1 .

for (x = 0; x < width; x++) do
    for (y = 0; y < height; y++) do
        i:= IterationCounts[x][y]
        NumIterationsPerPixel[i]++

The third pass iterates through the NumIterationsPerPixel array and adds up all the stored values, saving them in total. Each entry of the array holds the number of pixels that reached that iteration count before bailout.

total: = 0
for (i = 0; i < max_iterations; i++) do
    total += NumIterationsPerPixel[i]

After this, the fourth pass begins. For each pixel, its iteration count i is read from the IterationCounts array, and the entries of NumIterationsPerPixel from 0 to i are accumulated into that pixel's value, each normalized by dividing by the total computed earlier.

hue[][]:= 0.0
for (x = 0; x < width; x++) do
    for (y = 0; y < height; y++) do
        iteration:= IterationCounts[x][y]
        for (i = 0; i <= iteration; i++) do
            hue[x][y] += NumIterationsPerPixel[i] / total /* Must be floating-point division. */

...

color = palette[hue[m, n]]

...

Finally, the computed value is used, e.g. as an index to a color palette.

This method may be combined with the smooth coloring method below for more aesthetically pleasing images.
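For reference, a compact end-to-end sketch of the four passes in Python; escape_time is an assumed per-pixel helper (any of the escape time routines above would do), and all names are illustrative:

def histogram_hue(width, height, max_iterations, escape_time):
    # Pass 1: iteration count for every pixel.
    IterationCounts = [[escape_time(x, y) for y in range(height)]
                       for x in range(width)]

    # Pass 2: how many pixels reached each iteration count.
    NumIterationsPerPixel = [0] * (max_iterations + 1)
    for x in range(width):
        for y in range(height):
            NumIterationsPerPixel[IterationCounts[x][y]] += 1

    # Pass 3: total of all stored counts.
    total = sum(NumIterationsPerPixel)

    # Pass 4: normalized hue in [0, 1] for each pixel.
    hue = [[0.0] * height for _ in range(width)]
    for x in range(width):
        for y in range(height):
            for i in range(IterationCounts[x][y] + 1):
                hue[x][y] += NumIterationsPerPixel[i] / total   # float division
    return hue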

Continuous (smooth) coloring edit

This image was rendered with the escape time algorithm. There are very obvious "bands" of color
This image was rendered with the normalized iteration count algorithm. The bands of color have been replaced by a smooth gradient. Also, the colors take on the same pattern that would be observed if the escape time algorithm were used.

The escape time algorithm is popular for its simplicity. However, it creates bands of color, which, as a type of aliasing, can detract from an image's aesthetic value. This can be improved using an algorithm known as "normalized iteration count",[3][4] which provides a smooth transition of colors between iterations. The algorithm associates a real number ν with each value of z by using the connection of the iteration number with the potential function. This function is given by

ν(z) = n − log_P( log|z_n| / log N )

where z_n is the value after n iterations and P is the power to which z is raised in the Mandelbrot set equation (z_{n+1} = z_n^P + c; P is generally 2).

If we choose a large bailout radius N (e.g., 10^100), we have that

log|z_n| / log N = P^δ

for some real number δ, and this is

ν(z) = n − δ,

and as n is the first iteration number such that |zn| > N, the number we subtract from n is in the interval [0, 1).

For the coloring we must have a cyclic scale of colors (constructed mathematically, for instance) and containing H colors numbered from 0 to H − 1 (H = 500, for instance). We multiply the real number ν by a fixed real number determining the density of the colors in the picture, take the integral part of this number modulo H, and use it to look up the corresponding color in the color table.
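A minimal sketch of that lookup, assuming a palette list built at startup and a user-chosen density factor; the names palette and density are illustrative:

def cyclic_color(nu: float, palette, density: float = 20.0):
    # Scale the real iteration value by the density factor and wrap the
    # integral part around the palette length H.
    H = len(palette)
    return palette[int(nu * density) % H]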

For example, modifying the above pseudocode and also using the concept of linear interpolation would yield

for each pixel (Px, Py) on the screen do
    x0:= scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
    y0:= scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))
    x:= 0.0
    y:= 0.0
    iteration:= 0
    max_iteration:= 1000
    // Here N = 2^8 is chosen as a reasonable bailout radius.

    while x*x + y*y ≤ (1 << 16) and iteration < max_iteration do
        xtemp:= x*x - y*y + x0
        y:= 2*x*y + y0
        x:= xtemp
        iteration:= iteration + 1

    // Used to avoid floating point issues with points inside the set.
    if iteration < max_iteration then
        // sqrt of inner term removed using log simplification rules.
        log_zn:= log(x*x + y*y) / 2
        nu:= log(log_zn / log(2)) / log(2)
        // Rearranging the potential function.
        // Dividing log_zn by log(2) instead of log(N = 1<<8)
        // because we want the entire palette to range from the
        // center to radius 2, NOT our bailout radius.
        iteration:= iteration + 1 - nu

    color1:= palette[floor(iteration)]
    color2:= palette[floor(iteration) + 1]
    // iteration % 1 = fractional part of iteration.
    color:= linear_interpolate(color1, color2, iteration % 1)
    plot(Px, Py, color)

Exponentially mapped and cyclic iterations edit

 
Exponential Cyclic Coloring in LCH color space with shading

Typically when we render a fractal, the range of where colors from a given palette appear along the fractal is static. If we desire to offset the location from the border of the fractal, or adjust their palette to cycle in a specific way, there are a few simple changes we can make when taking the final iteration count before passing it along to choose an item from our palette.


When we have obtained the iteration count, we can make the range of colors non-linear. Raising a value normalized to the range [0,1] to a power S maps a linear range to an exponential range, which in our case can nudge the appearance of colors along the outside of the fractal, and allow us to bring out other colors, or push the entire palette in closer to the border.

v = ( i / max_i )^S · N

where i is our iteration count after bailout, max_i is our iteration limit, S is the exponent we are raising iters to, and N is the number of items in our palette. This scales the iter count non-linearly and scales the palette to cycle approximately proportionally to the zoom.

We can then plug v into whatever algorithm we desire for generating a color.
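A short sketch of this mapping as a palette index; palette_size and the other names are illustrative:

def exp_cyclic_index(i: int, max_i: int, S: float, palette_size: int) -> int:
    # Raise the normalized iteration count to the power S, scale by the
    # number of palette entries, and wrap so the palette cycles.
    v = ((i / max_i) ** S) * palette_size
    return int(v) % palette_size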

Passing iterations into a color directly edit

 
Example of exponentially mapped cyclic LCH coloring without shading

One thing we may want to consider is avoiding having to deal with a palette or color blending at all. There are actually a handful of methods we can leverage to generate smooth, consistent coloring by constructing the color on the spot.

v refers to a normalized exponentially mapped cyclic iter count edit

f(v) refers to the sRGB transfer function edit

A naive method for generating a color in this way is by directly scaling v to 255 and passing it into RGB as such

rgb = [v * 255, v * 255, v * 255]

One flaw with this is that RGB is non-linear due to gamma; consider linear sRGB instead. Going from RGB to sRGB uses an inverse companding function on the channels. This makes the gamma linear, and allows us to properly sum the colors for sampling.

srgb = [f(v) * 255, f(v) * 255, f(v) * 255]
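A sketch of one common choice for the transfer function f: the standard piecewise sRGB encoding (IEC 61966-2-1). This particular definition is a general fact about sRGB, not something specified in the text above:

def srgb_transfer(u: float) -> float:
    # Encode a linear-light value u in [0, 1] to the sRGB curve.
    if u <= 0.0031308:
        return 12.92 * u
    return 1.055 * (u ** (1 / 2.4)) - 0.055

srgb = [srgb_transfer(v) * 255] * 3   # equal channels give a gray ramp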
 
HSV Gradient

HSV coloring edit

HSV coloring can be accomplished by mapping the iteration count from [0, max_iter) to [0, 360), taking it to the power of 1.5, and then taking the result modulo 360. We can then pass the exponentially mapped iteration count into the value channel and return

hsv = [powf((i / max) * 360, 1.5) % 360, 100, (i / max) * 100]

This method applies to HSL as well, except we pass a saturation of 50% instead.

hsl = [powf((i / max) * 360, 1.5) % 360, 50, (i / max) * 100]
 
LCH Gradient

LCH coloring edit

One of the most perceptually uniform coloring methods involves passing in the processed iter count into LCH. If we utilize the exponentially mapped and cyclic method above, we can take the result of that into the Luma and Chroma channels. We can also exponentially map the iter count and scale it to 360, and pass this modulo 360 into the hue.

 

One issue we wish to avoid here is out-of-gamut colors. This can be achieved with a little trick based on the change in in-gamut colors relative to luma and chroma. As we increase luma, we need to decrease chroma to stay within gamut.

s = iters/max_i;
v = 1.0 - powf(cos(pi * s), 2.0);
LCH = [75 - (75 * v), 28 + (75 - (75 * v)), powf(360 * s, 1.5) % 360];

Advanced plotting algorithms edit

In addition to the simple and slow escape time algorithms already discussed, there are many other more advanced algorithms that can be used to speed up the plotting process.

Distance estimates edit

One can compute the distance from a point c (in the exterior or interior) to the nearest point on the boundary of the Mandelbrot set.[5][6]

Exterior distance estimation edit

The proof of the connectedness of the Mandelbrot set in fact gives a formula for the uniformizing map of the complement of M (and the derivative of this map). By the Koebe quarter theorem, one can then estimate the distance between the midpoint of our pixel and the Mandelbrot set up to a factor of 4.

In other words, provided that the maximal number of iterations is sufficiently high, one obtains a picture of the Mandelbrot set with the following properties:

  1. Every pixel that contains a point of the Mandelbrot set is colored black.
  2. Every pixel that is colored black is close to the Mandelbrot set.
 
Exterior distance estimate may be used to color whole complement of Mandelbrot set

The upper bound b for the distance estimate of a pixel c (a complex number) from the Mandelbrot set is given by[7][8][9]

b = lim_{n→∞} 2 · |P_c^n(c)| · ln|P_c^n(c)| / |(d/dc) P_c^n(c)|

where

  • P_c(z) = z² + c stands for the complex quadratic polynomial
  • P_c^n(c) stands for n iterations of z → P_c(z), starting with z = c: P_c^0(c) = c, P_c^{n+1}(c) = (P_c^n(c))² + c;
  • (d/dc) P_c^n(c) is the derivative of P_c^n(c) with respect to c. This derivative can be found by starting with (d/dc) P_c^0(c) = 1 and then (d/dc) P_c^{n+1}(c) = 2 · P_c^n(c) · (d/dc) P_c^n(c) + 1. This can easily be verified by using the chain rule for the derivative.

The idea behind this formula is simple: when the equipotential lines for the potential function φ(c) lie close together, the number |φ′(c)| is large, and conversely; therefore the equipotential lines for the distance estimate b(c) should lie approximately regularly.

From a mathematician's point of view, this formula only works in the limit where n goes to infinity, but very reasonable estimates can be found with just a few additional iterations after the main loop exits.

Once b is found, by the Koebe 1/4-theorem, we know that there is no point of the Mandelbrot set with distance from c smaller than b/4.
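A minimal sketch of this estimate, iterating the orbit and its c-derivative together; the variable names are illustrative and the bailout choices are arbitrary:

import math

def exterior_distance(c: complex, max_iteration: int = 1000, bailout: float = 1e10) -> float:
    z = c           # P_c^0(c) = c
    dz = 1.0 + 0j   # derivative of P_c^n(c) with respect to c
    for _ in range(max_iteration):
        if abs(z) > bailout:
            # b = 2 |z| ln|z| / |dz|; no point of M lies within b/4 of c.
            return 2.0 * abs(z) * math.log(abs(z)) / abs(dz)
        dz = 2.0 * z * dz + 1.0
        z = z * z + c
    return 0.0  # treated as inside the set, or too close to decide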

The distance estimation can be used for drawing the boundary of the Mandelbrot set; see the article Julia set. In this approach, pixels that are sufficiently close to M are drawn using a different color. This creates drawings where the thin "filaments" of the Mandelbrot set can be easily seen. This technique is used to good effect in the B&W images of Mandelbrot sets in the books "The Beauty of Fractals"[10] and "The Science of Fractal Images".[11]

Here is a sample B&W image rendered using Distance Estimates:

 
This is a B&W image of a portion of the Mandelbrot set rendered using Distance Estimates (DE)

Distance Estimation can also be used to render 3D images of Mandelbrot and Julia sets

Interior distance estimation edit

 
Pixels colored according to the estimated interior distance

It is also possible to estimate the distance of a limitly periodic (i.e., hyperbolic) point to the boundary of the Mandelbrot set. The upper bound b for the distance estimate is given by[5]

b = (1 − |∂_z f_c^p(z₀)|²) / | ∂_c ∂_z f_c^p(z₀) + ∂_z ∂_z f_c^p(z₀) · ( ∂_c f_c^p(z₀) / (1 − ∂_z f_c^p(z₀)) ) |

where

  • p is the period,
  • c is the point to be estimated,
  • f_c(z) is the complex quadratic polynomial f_c(z) = z² + c,
  • f_c^p(z₀) is the p-fold iteration of f_c, starting with z = z₀,
  • z₀ is any of the p points that make up the attractor of the iterations of f_c starting with z = 0; z₀ satisfies f_c^p(z₀) = z₀,
  • ∂_z f_c^p(z₀), ∂_c f_c^p(z₀), ∂_z ∂_z f_c^p(z₀) and ∂_c ∂_z f_c^p(z₀) are the derivatives of f_c^p(z) appearing in the formula, evaluated at z₀.

Analogous to the exterior case, once b is found, we know that all points within the distance of b/4 from c are inside the Mandelbrot set.

There are two practical problems with the interior distance estimate: first, we need to find z₀ precisely, and second, we need to find p precisely. The problem with z₀ is that the convergence to z₀ by iterating f_c requires, theoretically, an infinite number of operations. The problem with any given p is that, sometimes, due to rounding errors, a period is falsely identified to be an integer multiple of the real period (e.g., a period of 86 is detected, while the real period is only 43 = 86/2). In such a case, the distance is overestimated, i.e., the reported radius could contain points outside the Mandelbrot set.

 
3D view: smallest absolute value of the orbit of the interior points of the Mandelbrot set

Cardioid / bulb checking edit

One way to improve calculations is to find out beforehand whether the given point lies within the cardioid or in the period-2 bulb. Before passing the complex value through the escape time algorithm, first check that:

p = √( (x − 1/4)² + y² ),
x ≤ p − 2p² + 1/4,
(x + 1)² + y² ≤ 1/16,

where x represents the real value of the point and y the imaginary value. The first two equations determine that the point is within the cardioid, the last the period-2 bulb.

The cardioid test can equivalently be performed without the square root:

q = (x − 1/4)² + y²,
q · (q + (x − 1/4)) ≤ y²/4.

3rd- and higher-order buds do not have equivalent tests, because they are not perfectly circular.[12] However, it is possible to find whether the points are within circles inscribed within these higher-order bulbs, preventing many, though not all, of the points in the bulb from being iterated.
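A short sketch of this early-out test; the function name in_main_bulbs is illustrative:

def in_main_bulbs(x: float, y: float) -> bool:
    # Main cardioid test, square-root-free form.
    q = (x - 0.25) ** 2 + y * y
    if q * (q + (x - 0.25)) <= 0.25 * y * y:
        return True
    # Period-2 bulb: a disc of radius 1/4 centred at (-1, 0).
    return (x + 1.0) ** 2 + y * y <= 0.0625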

Periodicity checking edit

To prevent having to do huge numbers of iterations for points inside the set, one can perform periodicity checking, which checks whether a point reached in iterating a pixel has been reached before. If so, the pixel cannot diverge and must be in the set. Periodicity checking is a trade-off, as the need to remember points costs data management instructions and memory, but saves computational instructions. However, checking against only one previous iteration can detect many periods with little performance overhead. For example, within the while loop of the pseudocode above, make the following modifications:

xold:= 0
yold:= 0
period:= 0
while (x*x + y*y ≤ 2*2 and iteration < max_iteration) do
    xtemp:= x*x - y*y + x0
    y:= 2*x*y + y0
    x:= xtemp
    iteration:= iteration + 1 
 
    if x ≈ xold and y ≈ yold then
        iteration:= max_iteration  /* Set to max for the color plotting */
        break        /* We are inside the Mandelbrot set, leave the while loop */
 
    period:= period + 1
    if period > 20 then
        period:= 0
        xold:= x
        yold:= y

The above code stores away a new x and y value on every 20th iteration, thus it can detect periods that are up to 20 points long.

Border tracing / edge checking edit

 
Edge detection using Sobel filter of hyperbolic components of Mandelbrot set

Because the Mandelbrot set is a simply connected set,[13] any point enclosed by a closed shape whose borders lie entirely within the Mandelbrot set must itself be in the Mandelbrot set. Border tracing works by following the lemniscates of the various iteration levels (colored bands) all around the set, and then filling the entire band at once. This can be a good speed increase, because it means that large numbers of points can be skipped.[14] Note that border tracing can't be used to identify bands of pixels outside the set if the plot computes DE (Distance Estimate) or potential (fractional iteration) values.

Border tracing is especially beneficial for skipping large areas of a plot that are parts of the Mandelbrot set (in M), since determining that a pixel is in M requires computing the maximum number of iterations.

Below is an example of a Mandelbrot set rendered using border tracing:

This is a 400x400 pixel plot using simple escape-time rendering with a maximum iteration count of 1000 iterations. It only had to compute 6.84% of the total iteration count that would have been required without border tracing. It was rendered using a slowed-down rendering engine to make the rendering process slow enough to watch, and took 6.05 seconds to render. The same plot took 117.0 seconds to render with border tracing turned off with the same slowed-down rendering engine.

Note that even when the settings are changed to calculate fractional iteration values (which prevents border tracing from tracing non-Mandelbrot points) the border tracing algorithm still renders this plot in 7.10 seconds because identifying Mandelbrot points always requires the maximum number of iterations. The higher the maximum iteration count, the more costly it is to identify Mandelbrot points, and thus the more benefit border tracing provides.

That is, even if the outer area uses smooth/continuous coloring, border tracing will still speed up the costly inner area of the Mandelbrot set, unless the inner area also uses some smooth coloring method, for instance interior distance estimation.

Rectangle checking edit

An older and simpler to implement method than border tracing is to use rectangles. There are several variations of the rectangle method. All of them are slower than border tracing because they end up calculating more pixels.

The basic method is to calculate the border pixels of a box of say 8x8 pixels. If the entire box border has the same color, then just fill in the 36 pixels (6x6) inside the box with the same color, instead of calculating them. (Mariani's algorithm.)[15]

A faster and slightly more advanced variant is to first calculate a bigger box, say 25x25 pixels. If the entire box border has the same color, then just fill the box with the same color. If not, then split the box into four boxes of 13x13 pixels, reusing the already calculated pixels as the outer border, and sharing the inner "cross" pixels between the inner boxes. Again, fill in those boxes that have only one border color, split those that do not into four 7x7-pixel boxes, and then split those that "fail" again into 4x4 boxes. (Mariani-Silver algorithm.) A sketch of this recursion is given below.
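A sketch of the recursive (Mariani-Silver style) idea, assuming a per-pixel color(px, py) routine (e.g. an escape time count) and a plot(px, py, c) output routine; both names are illustrative, and for simplicity the shared borders are recomputed rather than reused:

def mariani_silver(x0, y0, x1, y1, color, plot, min_size=6):
    # Colour the border pixels of the rectangle [x0, x1) x [y0, y1).
    border = ([(x, y0) for x in range(x0, x1)] +
              [(x, y1 - 1) for x in range(x0, x1)] +
              [(x0, y) for y in range(y0, y1)] +
              [(x1 - 1, y) for y in range(y0, y1)])
    border_colors = {p: color(*p) for p in border}
    for (px, py), c in border_colors.items():
        plot(px, py, c)

    first = next(iter(border_colors.values()))
    if all(c == first for c in border_colors.values()):
        # Uniform border: fill the interior without computing it.
        for px in range(x0 + 1, x1 - 1):
            for py in range(y0 + 1, y1 - 1):
                plot(px, py, first)
        return

    if x1 - x0 <= min_size or y1 - y0 <= min_size:
        # Small box: just compute every interior pixel.
        for px in range(x0 + 1, x1 - 1):
            for py in range(y0 + 1, y1 - 1):
                plot(px, py, color(px, py))
        return

    # Otherwise split into four sub-boxes (sharing a one-pixel border) and recurse.
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    mariani_silver(x0, y0, mx + 1, my + 1, color, plot, min_size)
    mariani_silver(mx, y0, x1, my + 1, color, plot, min_size)
    mariani_silver(x0, my, mx + 1, y1, color, plot, min_size)
    mariani_silver(mx, my, x1, y1, color, plot, min_size)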

Even faster is to split the boxes in half instead of into four boxes. Then it might be optimal to use boxes with a 1.4:1 aspect ratio, so they can be split like how A3 papers are folded into A4 and A5 papers. (The DIN approach.)

One variant just calculates the corner pixels of each box. However this causes damaged pictures more often than calculating all box border pixels. Thus it only works reasonably well if only small boxes of say 6x6 pixels are used, and no recursing in from bigger boxes. (Fractint method.)

As with border tracing, rectangle checking only works on areas with one discrete color. But even if the outer area uses smooth/continuous coloring, rectangle checking will still speed up the costly inner area of the Mandelbrot set, unless the inner area also uses some smooth coloring method, for instance interior distance estimation.

Symmetry utilization edit

The symmetry of the Mandelbrot set about the real axis allows portions of the rendering process to be skipped whenever the real axis is present in the final image. However, regardless of the portion that gets mirrored, the same number of points will be rendered.

Julia sets have symmetry around the origin. This means that quadrants 1 and 3 are symmetric, and quadrants 2 and 4 are symmetric. Supporting symmetry for both Mandelbrot and Julia sets requires handling symmetry differently for the two different types of graphs.
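A small sketch of exploiting the real-axis symmetry when the imaged region is itself symmetric about the real axis; compute_row and the framing are illustrative:

def render_symmetric(height, compute_row):
    # Assume row 0 maps to +y_max and the last row to -y_max, so row r and
    # row (height - 1 - r) are mirror images across the real axis.
    rows = [None] * height
    for r in range((height + 1) // 2):
        rows[r] = compute_row(r)          # actually computed
        rows[height - 1 - r] = rows[r]    # mirrored copy
    return rows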

Multithreading edit

Escape-time rendering of Mandelbrot and Julia sets lends itself extremely well to parallel processing. On multi-core machines the area to be plotted can be divided into a series of rectangular areas which can then be provided as a set of tasks to be rendered by a pool of rendering threads. This is an embarrassingly parallel[16] computing problem. (Note that one gets the best speed-up by first excluding symmetric areas of the plot, and then dividing the remaining unique regions into rectangular areas.)[17]
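A minimal multiprocessing sketch of this tiling idea in Python; the tile size and the simple escape_time helper are illustrative, and each rectangular tile is an independent task handed to a worker pool:

from multiprocessing import Pool

def escape_time(px, py, width, height, max_iteration=1000):
    # Standard escape-time count for one pixel (same mapping as the pseudocode above).
    x0 = px / width * 3.0 - 2.0       # real part in [-2.0, 1.0]
    y0 = py / height * 2.24 - 1.12    # imaginary part in [-1.12, 1.12]
    x = y = 0.0
    iteration = 0
    while x * x + y * y <= 4.0 and iteration < max_iteration:
        x, y = x * x - y * y + x0, 2.0 * x * y + y0
        iteration += 1
    return iteration

def render_tile(args):
    x0, y0, x1, y1, width, height = args
    return [[escape_time(px, py, width, height) for px in range(x0, x1)]
            for py in range(y0, y1)]

def render(width, height, tile=64):
    tasks = [(x, y, min(x + tile, width), min(y + tile, height), width, height)
             for y in range(0, height, tile)
             for x in range(0, width, tile)]
    with Pool() as pool:              # one worker per CPU core by default
        return pool.map(render_tile, tasks)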

Here is a short video showing the Mandelbrot set being rendered using multithreading and symmetry, but without boundary following:

This is a short video showing rendering of a Mandelbrot set using multi-threading and symmetry, but with boundary following turned off.

Finally, here is a video showing the same Mandelbrot set image being rendered using multithreading, symmetry, and boundary following:

This is a short video showing rendering of a Mandelbrot set using boundary following, multi-threading, and symmetry


Perturbation theory and series approximation edit

Very highly magnified images require more than the standard 64–128 or so bits of precision that most hardware floating-point units provide, requiring renderers to use slow "BigNum" or "arbitrary-precision" math libraries to calculate. However, this can be sped up by the exploitation of perturbation theory. Given

z_{n+1} = z_n² + c

as the iteration, and a small epsilon and delta, it is the case that

(z_n + ε_n)² + (c + δ) = z_n² + 2 z_n ε_n + ε_n² + c + δ

or

z_{n+1} + ε_{n+1} = z_n² + c + (2 z_n ε_n + ε_n² + δ)

so if one defines

ε_{n+1} = 2 z_n ε_n + ε_n² + δ

one can calculate a single point (e.g. the center of an image) using high-precision arithmetic (z), giving a reference orbit, and then compute many points around it in terms of various initial offsets delta plus the above iteration for epsilon, where epsilon-zero is set to 0. For most iterations, epsilon does not need more than 16 significant figures, and consequently hardware floating-point may be used to get a mostly accurate image.[18] There will often be some areas where the orbits of points diverge enough from the reference orbit that extra precision is needed on those points, or else additional local high-precision-calculated reference orbits are needed. By measuring the orbit distance between the reference point and the point calculated with low precision, it can be detected that it is not possible to calculate the point correctly, and the calculation can be stopped. These incorrect points can later be re-calculated e.g. from another closer reference point.
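A sketch of the idea, assuming a reference orbit computed once at high precision (mpmath is used here purely as an example of an arbitrary-precision library) and the ε iteration carried out in ordinary hardware floating point:

from mpmath import mp, mpc

def reference_orbit(c_ref, n):
    # High-precision reference orbit z_0 ... z_{n-1} for the centre point.
    mp.prec = 200                     # bits of precision (illustrative)
    orbit, z = [], mpc(0)
    for _ in range(n):
        orbit.append(complex(z))      # keep a hardware-float copy for reuse
        z = z * z + c_ref
    return orbit

def perturbed_escape_time(delta, orbit, bailout=4.0):
    # delta = c - c_ref; epsilon_0 = 0; everything below is hardware floating point.
    eps = 0j
    for n in range(len(orbit) - 1):
        eps = 2 * orbit[n] * eps + eps * eps + delta   # eps_{n+1}
        z = orbit[n + 1] + eps                         # perturbed orbit value
        if abs(z) ** 2 > bailout:
            return n + 1
    return len(orbit)

In a real renderer the reference orbit would be reused for every pixel in the image, with delta taken as each pixel's offset from the reference point.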

Further, it is possible to approximate the starting values for the low-precision points with a truncated Taylor series, which often enables a significant number of iterations to be skipped.[19] Renderers implementing these techniques are publicly available and offer speedups for highly magnified images by around two orders of magnitude.[20]

An alternate explanation of the above:

For the central point in the disc c₀ and its iterations z_n, and an arbitrary point in the disc c₀ + δ and its iterations z′_n = z_n + ε_n, it is possible to define the following iterative relationship:

ε_{n+1} = 2 z_n ε_n + ε_n² + δ

With ε₀ = 0. Successive iterations of ε_n can be found using the following:

ε₁ = δ
ε₂ = 2 z₁ ε₁ + ε₁² + δ
ε₃ = 2 z₂ ε₂ + ε₂² + δ
ε₄ = 2 z₃ ε₃ + ε₃² + δ

Now from the original definition:

z′_n = z_n + ε_n,

It follows that:

z′_{n+1} = z′_n² + (c₀ + δ) = z_n² + 2 z_n ε_n + ε_n² + c₀ + δ = z_{n+1} + ε_{n+1}

As the iterative relationship relates an arbitrary point to the central point by a very small change δ, then most of the iterations of ε_n are also small and can be calculated using floating point hardware.

However, for every arbitrary point in the disc it is possible to calculate a value for a given ε_n without having to iterate through the sequence from ε₁, by expressing ε_n as a power series of δ.

ε_n = A_n δ + B_n δ² + C_n δ³ + ⋯

With A₁ = 1, B₁ = 0, C₁ = 0.

Now given the iteration equation ε_{n+1} = 2 z_n ε_n + ε_n² + δ, it is possible to calculate the coefficients of the power series for each ε_n:

A_{n+1} = 2 z_n A_n + 1
B_{n+1} = 2 z_n B_n + A_n²
C_{n+1} = 2 z_n C_n + 2 A_n B_n

Therefore, it follows that:

ε₂ = A₂ δ + B₂ δ² + C₂ δ³ + ⋯
ε₃ = A₃ δ + B₃ δ² + C₃ δ³ + ⋯
ε₄ = A₄ δ + B₄ δ² + C₄ δ³ + ⋯
⋮

The coefficients in the power series can be calculated as iterative series using only values from the central point's iterations z_n, and do not change for any arbitrary point in the disc. If δ is very small, ε_n should be calculable to sufficient accuracy using only a few terms of the power series. As the Mandelbrot escape contours are 'continuous' over the complex plane, if a point's escape time has been calculated, then the escape time of that point's neighbours should be similar. Interpolation of the neighbouring points should provide a good estimation of where to start in the ε_n series. A sketch of this bookkeeping is given below.
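A sketch of the series-approximation bookkeeping under the notation above: the coefficients A_n, B_n, C_n are computed once along (a hardware-float copy of) the reference orbit, after which ε_n can be evaluated directly for any small offset δ. The function names are illustrative:

def series_coefficients(ref_orbit, terms):
    # ref_orbit[n] holds z_n of the reference orbit (z_0 = 0, z_1 = c_ref, ...).
    # Returns (A_n, B_n, C_n) for n = 1 .. terms, with A_1 = 1, B_1 = C_1 = 0.
    coeffs = [(1 + 0j, 0j, 0j)]
    for n in range(1, terms):
        z = ref_orbit[n]
        A, B, C = coeffs[-1]
        coeffs.append((2 * z * A + 1,           # A_{n+1}
                       2 * z * B + A * A,       # B_{n+1}
                       2 * z * C + 2 * A * B))  # C_{n+1}
    return coeffs

def epsilon(coeffs, n, delta):
    # eps_n = A_n d + B_n d^2 + C_n d^3, skipping the iteration from eps_1.
    A, B, C = coeffs[n - 1]
    return A * delta + B * delta ** 2 + C * delta ** 3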

Further, separate interpolation of both real-axis points and imaginary-axis points should provide both an upper and lower bound for the point being calculated. If both results are the same (i.e. both escape or do not escape) then the difference can be used to recurse until both an upper and lower bound can be established. If floating point hardware can be used to iterate the ε_n series, then there exists a relation between how many iterations can be achieved in the time it takes to use BigNum software to compute a given ε_n. If the difference between the bounds is greater than the number of iterations, it is possible to perform a binary search using BigNum software, successively halving the gap until it becomes more time efficient to find the escape value using floating point hardware.

external ray edit

https://math.stackexchange.com/questions/4057830/is-there-a-more-efficient-equation-for-the-mandelbrot-lemniscates It's much more natural to use complex numbers. Regarding efficiency, complex iteration of $z \to z^2 + c$ gives a degree $2^{n-1}$ polynomial in $c$ in $O(n)$ work. I don't know if it's possible to do better than that: starting from a pre-expanded polynomial will be $O(2^n)$ work which is exponentially worse.

Suppose $c = x + i y$, $f_c(z) = z^2 + c$, $f_c^{\circ (n+1)}(z) = f_c^{\circ n}(f_c(z))$. Then $\frac{d}{dc} f_c^{\circ(n+1)}(z) = 2 f_c^{\circ n}(z) \frac{d}{dc} f_c^{\circ n}(z) + 1$. These can be calculated together in one inner loop (being careful not to overwrite old values that are still needed).

Now you can use Newton's root finding method to solve the implicit form $f_c^{\circ n}(0) = r e^{2 \pi i \theta} = t$ by $c_{m+1} = c_{m} - \frac{f_{c_m}^{\circ n}(0) - t}{\frac{d}{dc}f_{c_m}^{\circ n}(0)}$. Use the previous $c$ as initial guess for next $\theta$. The increment in $\theta$ needs to be smaller than about $\frac{1}{4}$ for the algorithm to be stable. Note that $\theta$ (measured in turns) wraps around the unit circle $2^{n-1}$ times before $c$ gets back to its starting point.
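A rough sketch of this Newton-tracing step in Python (the radius, angle step, and iteration depth are illustrative choices, and the function name ray_point is not from the quoted answer):

import cmath

def ray_point(n, theta, r=25.0, c_guess=2.0 + 0j, steps=60):
    # Solve f_c^n(0) = r * exp(2*pi*i*theta) for c with Newton's method,
    # where f_c(z) = z^2 + c and d/dc is carried along the iteration.
    target = r * cmath.exp(2j * cmath.pi * theta)
    c = c_guess
    for _ in range(steps):
        z, dz = 0j, 0j
        for _ in range(n):
            dz = 2 * z * dz + 1    # d/dc f_c^(k+1)(0) = 2 f_c^k(0) * d/dc f_c^k(0) + 1
            z = z * z + c
        step = (z - target) / dz
        c -= step
        if abs(step) < 1e-12:
            break
    return c

As described above, the c found for one value of θ would be reused as the initial guess for the next θ, with increments of no more than about a quarter turn.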

This approach is inspired by [An algorithm to draw external rays of the Mandelbrot set by Tomoki Kawahira][1].

Compare with binary decomposition colouring (here with $r = 25$) (this particular image is rendered implicitly as a function from pixel coordinates to pixel colour, with edge detection of iteration bands $n$ as well as regions where $\Im f_c^{\circ n}(0) > 0$, where $n$ is the first iteration where $|f_c^{\circ n}(0)| > r$): [![exterior binary decomposition grid colouring of the Mandelbrot set][2]][2]


 [1]: http://www.math.titech.ac.jp/~kawahira/programs/mandel-exray.pdf
 [2]: https://i.stack.imgur.com/wxkwz.png


algorithm edit

To compute an external ray for the Mandelbrot set, you need to follow a specific process that involves iterating complex numbers and checking their divergence from the set. Here's a step-by-step guide:

  • Choose a point on the boundary of the Mandelbrot set. The boundary is the region where the points may or may not diverge.
  • Convert the chosen point to its corresponding complex number. The x-coordinate represents the real part, and the y-coordinate represents the imaginary part.
  • Initialize a variable to keep track of the iteration count, starting from 0.
  • Iterate the following process until one of the termination conditions is met:
    • a. Compute the square of the current complex number.
    • b. Add the original complex number to the squared value.
    • c. Update the current complex number with the result.
    • Note: At each iteration, you are essentially applying the Mandelbrot set formula, which is Z(n+1) = Z(n)^2 + C, where Z(n) is the current complex number and C is the original complex number.
  • Check if the magnitude (absolute value) of the current complex number exceeds a certain threshold, such as 2. If it does, the point is considered to have diverged, and you exit the iteration loop.
  • If the magnitude does not exceed the threshold, increment the iteration count and go back to step 4.
  • Once the iteration loop is exited, you can use the iteration count to compute the angle of the external ray. You can calculate the angle by multiplying the iteration count by a small value, such as 0.01, to convert it to radians.
  • The resulting angle represents the direction of the external ray from the chosen point on the boundary of the Mandelbrot set.
  • Keep in mind that the Mandelbrot set is a highly complex mathematical object, and computing external rays for every point on its boundary can be computationally expensive. The above steps provide a general framework, but you may need to implement optimizations and handle various edge cases to achieve accurate and efficient results.

ChatGPT

tuning edit

Tuning the external angle in the Mandelbrot set, by Claude Heiland-Allen[21]


Given a periodic external angle pair

( .(a), .(b) )

tuning of an external angle .c proceeds by replacing in c

  • every 0 by a
  • every 1 by b

Here a, b, c are blocks of binary digits (with c possibly aperiodic and infinite in extent)

description edit

The tuning algorithm for external angles works like this:[22]

  • start with
    • a pair of periodic external angles (.(-), .(+)) whose rays land at the root of the same hyperbolic component H
    • another external angle E
  • Now replace every 0 in E with -, and every 1 in E with +, to form the external angle F.
  • Then F's ray lands near H in the same way that E's ray lands near the period-1 continent



Examples edit

 
a -> 0 -> red
b -> 1 -> blue
  • input tuple of strings (a, b) of the (sub)wake, as periodic binary fractions
    • input string a = 0.(011) = 3/7
    • input string b = 0.(100) = 4/7
  • input string c = 0.(01)
  • output string in plain form 0.(011100), where a and b are displayed in the colors given above

code edit

type ExternalAngle = ([Bool], [Bool])

tuning
  :: (ExternalAngle, ExternalAngle)
  -> ExternalAngle
  -> ExternalAngle
tuning (([], per0), ([], per1)) (pre, per)
  = (concatMap t pre, concatMap t per)
  where
    t False = per0
    t True  = per1

Examples:

  • save the code as t.hs
  • run Haskell in interactive mode
  • load the file


ghci
:l t.hs

use edit

The Haskell function named `tuning` operates on the `ExternalAngle` type, which is defined as a tuple of two lists of booleans:

`([Bool], [Bool])`.

Here's what the function does:

1. It takes two parameters:

  • The first parameter is a tuple of two `ExternalAngle` values.
  • The second parameter is an `ExternalAngle` value.

2. It deconstructs the first parameter into two separate tuples: `([], per0)` and `([], per1)`. This means that the first tuple is a pair where the first element is an empty list and the second element is `per0`, and the second tuple is similar but with `per1`.

3. It then deconstructs the second parameter into two lists: `pre` and `per`.

4. It returns a new `ExternalAngle` by applying the function `t` to each element of `pre` and `per`.

5. Inside the `where` clause, the function `t` is defined. It takes a boolean value as an argument and returns either `per0` or `per1` based on the input boolean value. If the input is `False`, it returns `per0`, and if the input is `True`, it returns `per1`.



With this representation, assuming you want periodic angles, you would need to write

tuning (([], [False, True, True]), ([], [True, False, False])) ([], [False, True])

which is neither user-friendly nor efficient.

Using mandelbrot-prelude Haskell library, this could be written:

$ ~/code/code.mathr.co.uk/mandelbrot-prelude/ghci.sh
...
> :t tune
tune :: Tuning t => (t, t) -> t -> t
> tune (parse ".(011)", parse ".(100)") (parse ".(01)") :: BinaryAngle
.(011100)

See also tune2 for operating on a pair of angles.

Using the mandelbrot-symbolics C library is also possible; let me know if you want instructions.

Böttcher function edit

jp edit


Böttcher function (English: Böttcher function; French: fonction de Böttcher; German: Böttcher-Funktion)

Consider the iterative dynamical system (recurrence formula) z_{n+1} = z_n² + c with complex numbers z and c. The set of points whose orbits remain bounded in the limit n → ∞ is the filled Julia set; its boundary is the Julia set. The sequence of complex numbers produced by the iteration is called an orbit, and if it repeats the same value after a finite number of steps, the orbit is said to have a period.

If the filled Julia set is simply connected (a set with no isolated parts), i.e. if the parameter c is a point of the Mandelbrot (closed) set, then there exists a function that conformally maps the exterior of the Julia (closed) set onto the exterior of the closed unit disk. This is called the Böttcher function of the Julia set. In general, the existence of a Böttcher function (map) satisfying the functional equation known as the Böttcher equation is guaranteed by a theorem obtained by L. E. Böttcher in 1905; the Böttcher function of Julia sets is a special example of it.

From this conformal map, the Böttcher function of the Julia set becomes almost equal to the linear function z as z approaches the point at infinity. On the other hand, its absolute value approaches 1 as z approaches the boundary of the Julia set from the outside. Therefore, the absolute value of the Böttcher function for Julia sets is always greater than one.

Specifically, the Böttcher function for Julia sets is defined by a limit expression. However, since the argument of the limit function is not the principal value but the limit value obtained while maintaining multivaluedness, it is actually very difficult to perform numerical calculations using this limit representation formula. However, the corresponding infinite series expansion has fast convergence and can be used numerically for the Böttcher function of Julia sets. Still, within some regions near the boundary of the Julia set, unnatural branch-cut lines may occur in the series terms for the argument of the Böttcher function, due to their multivaluedness. This can be eliminated by analytic continuation so that the branch-cut line always passes through the inside of the Julia set*1.

The Böttcher function of the Mandelbrot set, Φ(c) = Φ_c(c), conformally maps the exterior of the Mandelbrot (closed) set to the exterior of the closed unit disk. Its formulas can be obtained from those of the Böttcher function of the Julia set by replacing z with c, and it is good to use the aforementioned infinite series for numerical calculations. However, it is necessary to process the branch-cut lines in the same way*1.

The Mandelbrot set has an infinite number of reduced replicas of the entire set (slightly distorted, not exactly similar) around its boundary, and it looks like a disconnected set with "islands". In reality, however, the "islands" are all connected to the main body through narrow regions that extend like a tree, forming part of a simply connected set. In 1982 A. Douady and J. H. Hubbard used Φ(c) to prove that the Mandelbrot set is simply connected.

Also, the area of the Mandelbrot set can be expressed by an infinite series using the coefficients of the Laurent series expansion of the inverse function of the Böttcher function of the Mandelbrot set (for details, see http://mathworld.wolfram.com/MandelbrotSet.html, etc.; this series converges very slowly).

As the absolute value of c increases, the argument of Φ(c) approaches Arg(c). The question is then how the curve on which the argument of Φ(c) equals a specified value R extends toward the boundary of the Mandelbrot set. This curve is called the external ray of angle R. It has been proven that the tip of an external ray whose angle R is a rational multiple of 2π reaches the boundary of the Mandelbrot set. (External rays are also expected to enter and asymptotically approach the interstices of the intricate Mandelbrot set.) External rays can be defined in the same way for the Böttcher function of the Julia set.

The Böttcher function is a special case of a conformal map. Although it is an analytically defined function, it originates from complex iterative dynamical systems and fractal geometry and, as mentioned above, has applications almost exclusively in these fields; it is usually not treated as a kind of special function.

The Julia set itself originated from the research on iterative orbits of complex numbers by P. J. L. Fatou, G. M. Julia and others. It received little attention until around 1980, when the Mandelbrot set was discovered. Since then, its extremely complex and beautiful shape has attracted the attention of many people and has influenced not only mathematics but also philosophy and art. Fractal geometry has provided a realistic method of numerically and geometrically dealing with patterns of shapes and phenomena that had been excluded from consideration in conventional natural science, despite being commonplace in the natural world. In fact, when it was pointed out that fractal geometrical features are found in the branching of plants and the shapes of coastlines, they were immediately applied to computer graphics to reproduce them.

[Notes]

  • *1 A specific example of how to do this is explained in "Mathematica Related". (However, even this method cannot remove some unnatural branch-cut lines.)

Graphs of the Böttcher function (as a function of a complex variable) for several Julia sets:

  • a Julia set commonly called a "dendrite";
  • the Julia set commonly known as "Douady's rabbit"; the complex numbers in this set take orbits of period 3 around each point where the three subregions join;
  • a Julia set whose orbits have period 11;
  • a Julia set known as an example with an invariant circular orbit, a "Siegel disk", inside it.

Graphs of the Böttcher function Φ(c) of the Mandelbrot set (as a function of a complex variable):

  • the contour lines of the absolute value and of the argument can be seen as the equipotential lines and the electric lines of force that occur around a charged metal rod whose cross-section is Mandelbrot-shaped; the latter are also the external rays;
  • an enlargement of the peripheral part of the protrusion with the largest imaginary part;
  • a zoom on the peripheral portion of the "island" on the negative real axis;
  • for reference, a composite of the Böttcher function with an automorphic function: the Schwarz automorphic function is inverted to the outside of the unit circle, and the Böttcher function is used to make the domain the outside of the Mandelbrot set; in order to make the basic regions of the details easier to distinguish, only the last graph is not contoured.



A method for removing the unnatural branch cuts that occur when computing the Böttcher function as a series. Here the Böttcher function of the Mandelbrot set is used, but Julia sets can be treated in a similar way. Even with this method, some unnatural branch-cut lines cannot be removed; this is especially noticeable when zooming in on details. In addition, it takes a lot of calculation time compared to the simple series calculation. The code actually programmed in this way is in the Mathematica package file FractalRelated.m in "Files Created with Version 8", "Code for Graphs of Special Functions"*1. By the way, the approach of calculating the complex function itself first, as on this site, is not common; the approach of calculating by tracing the curves of the external rays is mostly adopted instead. (Actually, the method described below has many technically "subtle" points and is not recommended.)

Reproducing the phenomenon and confirming the cause: when the processing to remove unnatural branch-cut lines is applied (Fig. 1), and when the series expansion (see Böttcher function) is used for numerical calculation without that processing (Fig. 2), we get the following: Böttcher function of the Mandelbrot set (Fig. 1); Böttcher function of the Mandelbrot set (Fig. 2).

In order to confirm the cause of (Fig. 2), the series terms T_n(c) of the argument are drawn separately (n = 2 to 5), as in (Fig. 3). The unnatural branch-cut lines of (Fig. 2) then turn out to have exactly the same shape as the branch-cut lines of (Fig. 3). Böttcher function of the Mandelbrot set (Fig. 3).

Therefore, we only need to replace the series term T_n(c) with T_n(c) + 2π (or, where Im(c) < 0, with T_n(c) − 2π) when c lies within the region bounded by this branch cut and the boundary of the Mandelbrot set.

How to remove the unnatural branch-cut lines:

(1) With the argument c = c₀ as the initial value, find the "shortest distance*2" d(c₀) = d₀ from c₀ to the Mandelbrot set (Fig. 4).
(2) Place an arbitrary number of equidistant points on the circle whose centre is c₀ and whose radius is d₀ (Fig. 5).
(3) Among the equally spaced points on the circumference, select the one that minimises the value of the "continuous Mandelbrot set iteration count function*3" S(c), and call this point c₁ (Fig. 6).
(4) Find the shortest distance d(c₁) = d₁ from c₁ to the Mandelbrot set.

Repeating from step (2) in the same way, at some point c_k the branch-cut line of the series term T_n(c) may be crossed*4. If it is crossed, the original c turns out to lie in the intended domain, so T_n(c) is adjusted by 2π accordingly (Fig. 7).

Böttcher function of the Mandelbrot set (Fig. 4), (Fig. 5), (Fig. 6), (Fig. 7)

[Notes] *1: In the package file FractalRelated.m, the Mathematica command Compile is used in order to increase the calculation speed even a little. However, I get a Compile error message (due to my lack of programming skills). Removing Compile on a trial basis obviously increases the calculation time, so Compile does seem to be effective. In Mathematica Ver. 10, released in July 2014, a new Böttcher function was implemented. I suspect (but have not verified) that the problem dealt with here has also been resolved.

*2: The shortest distance function to the Mandelbrot set is given by a formula stated for Julia sets in terms of f_n(z, c); for the Mandelbrot set one sets z = c (see the Böttcher function page here). The formula is described in Reiji Yoshida, Masayo Fujimura, and Yasuhiro Goto, "Visualization of Julia sets by constructing Böttcher functions" (National Defense Academy of Japan, RIMS Kokyuroku No. 1759 (2011), pp. 85-98). For reference, a visualization of the shortest distance function is shown in the accompanying graph. In addition, since the distance mentioned here is the "curve distance", there is an inherent problem in using it as the radius in step (2).

*3: The "continuous Mandelbrot set iteration count function (tentative name)" is a continuous version of the integer-valued function representing the number of iterations of the complex dynamical system of the Mandelbrot set, normalized over the entire range so that the maximum value is 1. The original formula for this can be found on Paul Nylander's site http://www.bugman123.com/index.html. For reference, a visualization of the continuous Mandelbrot set iteration count function is shown in the accompanying graphs. Note that it is also possible to use the aforementioned shortest distance function instead of this function (by selecting the point where the function value is maximum), but the calculation will be a little slower.

*4: The upper limit of repetitions, i.e. the allowed range of values of the series term T_n(c), can be specified arbitrarily, but if it is chosen too narrowly the number of remaining branch-cut lines will increase. In fact, experimental calculations have already confirmed that there are parts that cannot be removed even when a fairly wide range of values is specified (one of the reasons why this processing method is imperfect).

Wolf Jung edit

https://web.archive.org/web/20190627044357/http://www.mndynamics.com/indexp.html#XR 11 May 2008 - 10 Jun 2023

https://arxiv.org/abs/math/9711213 https://arxiv.org/abs/math/9905169


How to draw external rays:

Background:

In the iteration of quadratic polynomials  , the parameter c is fixed and the dynamic variable z is evolving when the mapping is applied again and again. Both c and z are complex numbers. The filled Julia set   contains those values z that are not going to the point at infinity under iteration. The dynamics is chaotic on its boundary  , which is the Julia set. The mapping   is conjugate to   in a neighborhood of  . The conjugacy is provided by the Boettcher mapping  , which satisfies

 


We have

 

for a suitable choice of the  -th root. This is made precise by the following formula, which is valid for large |z| with the principal value of the roots:


 


In the dynamics of quadratic polynomials, there is a basic dichotomy:

  • When the critical orbit stays bounded, then the Julia set is connected, and Φc(z) extends to a conformal mapping from its exterior to the exterior of the unit disk. External rays and equipotential lines in the dynamic plane are defined as preimages of straight rays and circles, respectively. By definition, the parameter c belongs to the Mandelbrot set M.
  • When the critical orbit escapes to ∞, the Julia set is totally disconnected, and the parameter c is not in M. Now Φc(z) does not extend to the critical point z = 0, but it is well-defined, e.g., at the critical value z = c.

The mapping ΦM(c) is defined by evaluating the Boettcher mapping at the critical value z = c, i.e., ΦM(c) = Φc(c). Adrien Douady has shown that it is a conformal mapping from the exterior of the Mandelbrot set M to the exterior of the unit disk, thus M is connected. By means of ΦM(c), external rays and equipotential lines are defined in the parameter plane as well. See the image above. When the argument θ of an external ray is a rational multiple of 2π, then the ray is landing at a distinguished boundary point: a rational dynamic ray is landing at a periodic or preperiodic point in the Julia set. A rational parameter ray is landing at a Misiurewicz point or a root point in the boundary of the Mandelbrot set. See the following papers by Dierk Schleicher (arXiv:math.DS/9711213) and John Milnor (arXiv:math.DS/9905169).

Computation of the external argument: External rays are characterized by the fact that the argument argc(z) of Φc(z), or the argument argM(c) of ΦM(c), equals a given value θ. For large |z|, formula (2) implies

(3)  argc(z) = arg(z) + ∑_{n=1}^{∞} (1/2^n) arg(1 + c/(f_c^{n−1}(z))²) = arg(z) + ∑_{n=1}^{∞} (1/2^n) arg(f_c^n(z)/(f_c^n(z) − c)).

Here arg(z) denotes the argument of the complex number z, i.e., its angle to the positive real axis. This angle is defined only up to multiples of 2π. E.g., a point z on the negative real axis has the arguments ±π, ±3π, ±5π ... The principal value of the argument is the unique angle with -π<arg(z)≤π. This definition is used because a function should be single-valued. However, a discontinuity is introduced artificially when a point z is crossing the negative real axis, say going from i through -1 to -i: its argument should go from π/2 through π to 3π/2, but the principal value goes from π/2 to π, jumps to -π, and goes to -π/2.

Formula (3) means that the orbit zn=fcn(z) of z and the arguments arg(zn/(zn-c)) are computed. The function arg(z/(z-c)) is discontinuous on the straight line segment from the critical point z = 0 to the critical value z = c, which is drawn in red in the figure: when z belongs to this line segment, then z/(z-c) will be on the negative real axis. When the Julia set is connected, the location of the discontinuity can be moved onto a kind of curve joining 0 and c within the Julia set, such that the argument is continuous in the whole exterior: Suppose that z is crossing the red line from right to left, and moving to the fixed point αc and beyond. The principal value arg(z/(z-c)) is jumping from π to -π at the red line, goes up to about -0.68π at αc and stays continuous there. The modified function arg(z/(z-c)) is continuous with value π, as z is crossing the red line. It goes up to about 1.32π when z is approaching αc from the right, and jumps to about -0.68π as z is crossing αc to the left. When the modified function arg(z/(z-c)) is used instead of the principal value for computing argc(z) according to (3), this formula will be valid not only for large |z| but everywhere outside the Julia set. In practice, the Julia set may be approximated by two straight line segments from the fixed point αc to 0 and c, respectively: when zn is in the triangle between 0, c, and αc, adjust the principal value of arg(zn/(zn-c)) by ±2π. The triangle is given by three simple inequalities, cf. mndlbrot::turn(). In the two images below, the different colors indicate values of argc(z), and thus, external rays. The computed value of argc(z), and thus the color, is jumping on certain curves joining preimages of 0, where some iterate zn is crossing the line where arg(z/(z-c)) is discontinuous. In the left image, the principal value of arg(z/(z-c)) is used, and in the right image, the argument is adjusted. The erroneous discontinuities of the computed argc(z) are moved closer to the Julia set.
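A minimal numerical sketch of formula (3), using only the principal value of the argument (so it still shows the discontinuities discussed above); the truncation depth and the test values of c and z are arbitrary choices for illustration:

import cmath

def external_argument(z, c, n_terms=30):
    """Approximate arg_c(z) by formula (3):
    arg_c(z) = arg(z) + sum_{n>=1} 2^(-n) * arg(z_n / (z_n - c)),  z_n = f_c^n(z).
    Principal values are used throughout, so the computed value jumps on
    certain curves joining preimages of 0."""
    total = cmath.phase(z)
    zn = z
    for n in range(1, n_terms + 1):
        zn = zn * zn + c                       # z_n = f_c(z_{n-1})
        total += cmath.phase(zn / (zn - c)) / 2.0 ** n
        if abs(zn) > 1e100:                    # remaining terms have arg ~ 0
            break
    return total

if __name__ == "__main__":
    c = -0.12 + 0.75j
    for z in (3.0 + 0.0j, 0.0 + 3.0j, -3.0 + 0.0j):
        print(z, external_argument(z, c) / (2 * cmath.pi), "turns")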


The same modification seems to work for the computation of the argument argM(c) in the parameter plane as well, although the Julia set is disconnected. The left image below shows the discontinuities arising from the principal value of arg(z/(z-c)) in (3), they are appearing on curves joining centers of different periods. In the right image, the argument is adjusted, and the discontinuities are moved closer to the Mandelbrot set.


Drawing “all” rays: To get an overview of all rays, compute the argument at every pixel and map it to a range of colors. The images above were obtained with Mandel's algorithm No. 6. The code of mndlbrot::turn() has been ported to Java by Evgeny Demidov. See also the algorithms 7 and 8, where no discontinuity appears, but only the ends of certain rays are visible. Adam Majewski has suggested that boundary scanning (below) can be used to draw several rays simultaneously.

Drawing a single ray: In the dynamic plane, external rays can be drawn by backwards iteration. It is most effective for a periodic or preperiodic angle. You must keep track of points on the finite collection of rays with angles θ, 2θ, 4θ ... Say z_{l,j} corresponds to a radius R^(1/2^l) and the angle 2^j θ. Then f_c(z) maps z_{l,j} to z_{l-1,j+1}. This point, which was constructed before, has two preimages under f_c(z). The one that is closer to z_{l-1,j} is the correct one. This criterion was proved by Thierry Bousch. The ray will look better when you introduce intermediate points. Backwards iteration is used in Mandel again since version 5.3, both in the plane and on the sphere. The current implementation consists of QmnPlane::backRay().

The following methods apply with small modifications to the dynamic plane and the parameter plane as well: Choose a sequence of radii tending to 1, e.g., R_n = R^(1/2^n) with R large. Search corresponding points on the ray, which satisfy Φc(z_n) = R_n e^(iθ) or ΦM(c_n) = R_n e^(iθ), respectively. To this end, the Newton method is employed. Using the approximation (1), the function N(z) = z − P(z)/P'(z) is iterated to find a zero of P(z) = f_c^n(z) − R e^(i 2^n θ), or analogously in the parameter plane. Intermediate points should be used to make the curve between the points smooth, and to reduce the danger of the Newton method failing when the previous point is not in the immediate basin of attraction for the new point. Presumably, this method has been developed by John Hamal Hubbard or John Milnor, and it has been implemented by Arnaud Chéritat, Christian Henriksen and Mitsuhiro Shishikura as well. It is used in Mandel since version 5.4; the current implementation consists of QmnPlane::newtonRay() and QmnPlane::rayNewton(). These were ported to Java as well. The paper by Tomoki Kawahira gives a detailed description and an error estimate for the approximation (1).

There are two approaches to draw the external ray of angle θ by computing the external argument (3): Argument scanning means to check every pixel, which is very slow, but might be improved by a recursive subdivision of the image. Argument tracing means to construct a sequence of triangles along the ray. (I have learned this technique from Chapter 14 in this book, where boundary tracing is used to draw escape lines.) First two pixels at the border of the image are found, such that the external argument is < θ at P0 and ≥ θ at P1. Then compute the external argument at the third vertex P2: If it is < θ, then the ray is intersecting the edge from P1 to P2. Reflect the triangle along this edge to obtain the new vertex P2, rename the old vertex P2 to P0, and keep P1. If it is ≥ θ, then the ray is intersecting the edge from P0 to P2. Reflect the triangle along this edge to obtain the new vertex P2, rename the old vertex P2 to P1, and keep P0. Repeat this process with the new triangle, and draw the ray along the sequence of triangles. This basic idea can be improved in several ways: To make the ray look smooth, draw line segments instead of single pixels.
Choose the way of reflection such that the distance to previously drawn pixels is increased. Check this distance also to avoid going in circles around the endpoint. (This is not yet implemented in the current version of Mandel.) When computing the external argument according to (3), adjust the principal value of arg(z_n/(z_n − c)) by ±2π, if z_n = f_c^n(z) or z_n = f_c^n(c) is in the triangle between 0, c, and αc as explained above. Save the intermediate arguments arg(z_n/(z_n − c)) for the next pixel to check for jump discontinuities by comparison, and to adjust them. (The previous technique shifts these closer to the boundary, but does not remove them.) A drawback of this approach is that it restricts the number of iterations to the length of the array, which means in particular that the end of the ray may be lost. When the image shows a very small part of the Julia set or Mandelbrot set, maybe the starting point will not be found in spite of the previous techniques. It can be searched for on the boundary of a virtual enlarged image, but this will be impractical when it is necessary to enlarge the image by several orders of magnitude. Argument tracing has been used in Mandel since 1997; the current implementation consists of mndlbrot::turnsign() and QmnPlane::traceRay(). Johannes Riedl has used a method where the next point is chosen from a finite collection of points at a prescribed distance from the last one.

The power series expansion of Φc⁻¹ or ΦM⁻¹ could be used as well to draw external rays, but I expect it to converge extremely slowly close to the boundary. Noting that the direction of an external ray is determined uniquely from the fact that it is orthogonal to the equipotential line, external rays can be drawn by solving a differential equation numerically. I am not sure if this method will be numerically stable, and I have received different claims about it. The inverse of Φc(z) or ΦM(c) can vary considerably on a small interval: the angles 1/7 (of period 3) and 321685687669320/2251799813685247 (of period 51) differ by about 10⁻¹⁵, but the landing points of the corresponding parameter rays are about 0.035 apart. On this scale, the rays cannot be drawn well by tracing the argument, unless high-precision arithmetic is used. The following image shows the parameter ray for 1/7 and two rays of period 51 in the slit between the main cardioid and the 1/3-component. The argument is not distinguishable (left), but the Newton method has no problem (right). In the dynamic plane, backwards iteration works as well.
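The Newton-method approach can be sketched compactly. The version below works in the dynamic plane, uses the approximation Φc(z)^(2^n) ≈ f_c^n(z), and omits the intermediate points recommended above; the starting radius and number of levels are arbitrary, and no claim is made that this matches the actual QmnPlane::newtonRay() implementation.

import cmath
from fractions import Fraction

def f_iter(z, c, n):
    """Return (f_c^n(z), d/dz f_c^n(z)) for f_c(z) = z^2 + c."""
    w, dw = z, 1.0 + 0j
    for _ in range(n):
        dw = 2.0 * w * dw
        w = w * w + c
    return w, dw

def newton_ray(c, theta, R=100.0, n_levels=20, newton_steps=30):
    """Dynamic-plane external ray of angle theta (a Fraction, in turns).
    At level n we solve  f_c^n(z) = R * exp(2*pi*i * 2^n * theta)  by Newton's
    method, starting from the ray point found at the previous level, so that
    the found point has potential  Phi_c(z) = R**(1/2**n) * e^(2*pi*i*theta)."""
    z = R * cmath.exp(2j * cmath.pi * float(theta))    # level 0: Phi_c(z) ~ z far out
    points = [z]
    for n in range(1, n_levels + 1):
        angle = float((2 ** n * theta) % 1)            # doubled angle, kept exact
        target = R * cmath.exp(2j * cmath.pi * angle)
        for _ in range(newton_steps):
            w, dw = f_iter(z, c, n)
            step = (w - target) / dw
            z -= step
            if abs(step) < 1e-12:
                break
        points.append(z)
    return points

if __name__ == "__main__":
    # ray of angle 1/3 for c = -1 (the basilica); its points should approach
    # the landing point, the alfa fixed point near -0.618
    for p in newton_ray(-1.0 + 0j, Fraction(1, 3))[-3:]:
        print(p)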


Note that most of the methods above can be modified to draw equipotential lines. In this case there is no problem with the ambiguity of the argument. Boundary scanning is slow and the Newton method has three problems: finding a starting point, drawing several lines around a disconnected Julia set, and choosing the number of points depending on the chosen subset of the plane. Therefore I am using boundary tracing with starting points on several lines through the image. See QmnPlane::green(). Still, sometimes not the whole line is drawn, or not all components are found.

shape of critical orbit edit

  • parameter c is a root point (radius = 1.0, internal angle is a rational number): then the critical orbit is an n-armed star
  • near a root point (t = 1/6 + 0.0001164088720081, r = 0.9999361207965108): then the critical orbit is a rotated n-armed star (spirals)
  • c = -0.717564224044015 +0.146337277323239 i  : simple spiral without rays ( near 10/21)



dynamics edit

evolution of dynamics along escape route 0 ( parabolic implosion)
parameter c | location of c | Julia set | interior | type | critical orbit dynamics | critical point | fixed points | stability of alfa
c = 0 | center, interior | connected = circle Julia set | exists | superattracting | attracted to alfa fixed point | critical point equal to alfa fixed point | alfa is superattracting, beta is repelling | r = 0
0 < c < 1/4 | internal ray 0, interior | connected | exists | attracting | attracted to alfa fixed point | | alfa is attracting, beta is repelling | 0 < r < 1.0
c = 1/4 | cusp, boundary | connected = cauliflower | exists | parabolic | attracted to alfa fixed point | | alfa fixed point equal to beta fixed point, both are parabolic | r = 1
c > 1/4 | external ray 0, exterior | disconnected = imploded cauliflower | disappears | repelling | repelling to infinity | | both finite fixed points are repelling | r > 1


Stability r is absolute value of multiplier m at fixed point alfa  :

 


c = 0.0000000000000000+0.0000000000000000*I 	 m(c) = 0.0000000000000000+0.0000000000000000*I 	 r(m) = 0.0000000000000000 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.0250000000000000+0.0000000000000000*I 	 m(c) = 0.0513167019494862+0.0000000000000000*I 	 r(m) = 0.0513167019494862 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.0500000000000000+0.0000000000000000*I 	 m(c) = 0.1055728090000841+0.0000000000000000*I 	 r(m) = 0.1055728090000841 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.0750000000000000+0.0000000000000000*I 	 m(c) = 0.1633399734659244+0.0000000000000000*I 	 r(m) = 0.1633399734659244 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.1000000000000000+0.0000000000000000*I 	 m(c) = 0.2254033307585166+0.0000000000000000*I 	 r(m) = 0.2254033307585166 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.1250000000000000+0.0000000000000000*I 	 m(c) = 0.2928932188134524+0.0000000000000000*I 	 r(m) = 0.2928932188134524 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.1500000000000000+0.0000000000000000*I 	 m(c) = 0.3675444679663241+0.0000000000000000*I 	 r(m) = 0.3675444679663241 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.1750000000000000+0.0000000000000000*I 	 m(c) = 0.4522774424948338+0.0000000000000000*I 	 r(m) = 0.4522774424948338 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.2000000000000000+0.0000000000000000*I 	 m(c) = 0.5527864045000419+0.0000000000000000*I 	 r(m) = 0.5527864045000419 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.2250000000000000+0.0000000000000000*I 	 m(c) = 0.6837722339831620+0.0000000000000000*I 	 r(m) = 0.6837722339831620 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.2500000000000000+0.0000000000000000*I 	 m(c) = 0.9999999894632878+0.0000000000000000*I 	 r(m) = 0.9999999894632878 	 t(m) = 0.0000000000000000 	period = 1
 c = 0.2750000000000000+0.0000000000000000*I 	 m(c) = 1.0000000000000000+0.3162277660168377*I 	 r(m) = 1.0488088481701514 	 t(m) = 0.0487455572605341 	period = 1
 c = 0.3000000000000000+0.0000000000000000*I 	 m(c) = 1.0000000000000000+0.4472135954999579*I 	 r(m) = 1.0954451150103321 	 t(m) = 0.0669301182003075 	period = 1
 c = 0.3250000000000000+0.0000000000000000*I 	 m(c) = 1.0000000000000000+0.5477225575051662*I 	 r(m) = 1.1401754250991381 	 t(m) = 0.0797514300099943 	period = 1
 c = 0.3500000000000000+0.0000000000000000*I 	 m(c) = 1.0000000000000000+0.6324555320336760*I 	 r(m) = 1.1832159566199232 	 t(m) = 0.0897542589928440 	period = 1
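The table above can be reproduced from the multiplier of the alfa fixed point, m(c) = 2·alfa = 1 − sqrt(1 − 4c); a minimal Python sketch (the branch of the square root only affects the sign of the internal angle t):

import cmath

def alpha_multiplier(c):
    """Multiplier of the alfa fixed point of z^2 + c:
    alfa = (1 - sqrt(1 - 4c))/2,  m = f'(alfa) = 2*alfa = 1 - sqrt(1 - 4c)."""
    return 1.0 - cmath.sqrt(1.0 - 4.0 * c)

if __name__ == "__main__":
    for k in range(15):
        c = 0.025 * k                        # points along escape route 0 (real axis)
        m = alpha_multiplier(c)
        r = abs(m)                           # stability: r < 1 attracting, r = 1 indifferent
        t = cmath.phase(m) / (2 * cmath.pi)  # internal angle in turns
        print(f"c = {c:.4f}  m = {m.real:+.13f} {m.imag:+.13f}i  r = {r:.13f}  t = {t:+.13f}")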

Matrix operations edit

  • multiplication
  • convolution

convolution edit

In mathematics (in particular, functional analysis)

  • (continuous) convolution is a mathematical operation on two functions (f and g) that produces a third function ( )
  • discrete convolution is
    • in Digital Image Processing a matrix operation
    • in Digital Signal Processing (DSP) apparently a different thing (the discrete convolution of two infinite series = the Cauchy product)


Video


discrete edit

In Digital Image Processing: for discrete convolution the functions are represented/defined by arrays (matrices) of the same size, so convolution is a matrix operation, different from matrix multiplication.

Elements:

  • input
    • signal = input pixel = central pixel with its neighborhood = input matrix = fragment of a grayscale image with the same size as the kernel matrix
    • filter = kernel matrix or matrices. Its dimensions are odd (since we need it to have a central entry). This function is called the impulse response in DSP.
    • convolution formula
  • output: output pixel


discrete finite 1D convolution edit

For functions f, g defined on the set Z of integers, the discrete convolution of f and g is given by:[23]

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. Thus when g has finite support in the set   (representing, for instance, a finite impulse response), a finite summation may be used:[24]

 

where

  • f is an input matrix
  • g is a mask (a 1D array with length 1 + 2*M)

numerical examples edit

Example by Song Ho Ahn [25]

  • input x[n] = { 3, 4, 5 }
  • kernel h[n] = { 2, 1 }
  • output y[n] = { 6, 11, 14}

Example by Cornell University [26]

  • input 1D image f = [10, 50, 60, 10, 20, 40, 30]
  • kernel is: g = [1/3, 1/3, 1/3]
  • output h = [20, 40, 40, 30, 20, 30, 23.333] assuming all entries outside the image boundaries are 0
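The first example can be checked directly with NumPy; np.convolve returns the full convolution of length len(x) + len(h) − 1, and the listed output keeps the first three values:

import numpy as np

x = np.array([3, 4, 5])        # input signal
h = np.array([2, 1])           # kernel (impulse response)

y_full = np.convolve(x, h)     # full convolution: [6, 11, 14, 5]
print(y_full)
print(y_full[:len(x)])         # first len(x) samples: [6, 11, 14], as listed above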

discrete 2D convolution edit

 
Discrete 2D Convolution Animation


notation edit

letter notation edit

For example, if we have two three-by-three matrices with:

  • the first is a kernel
  • the second is the piece of image

The convolution is the process of flipping both the rows and columns of the kernel and multiplying locally similar entries and summing. The element at coordinates [2, 2] (that is, the central element) of the resulting image would be a weighted combination of all the entries of the image matrix, with weights given by the kernel:

 

number notation edit

The general form for matrix convolution is

 

delta notation edit

The general expression of a convolution is

 

where:

  •   is the original image ( input)
  •   is the filter kernel
  •   is the filtered image = output

Every element of the filter kernel is considered by   and  .

numerical example edit

Example from rosettacode : Image convolution Original image:

    1     2     1     5     5
    1     2     7     9     9
    5     5     5     5     5
    5     2     2     2     2
    1     1     1     1     1

Original kernel:

    1     2     1
    2     4     2
    1     2     1

Padding with zeroes:

   12    24    43    66    57
   27    50    79   104    84
   46    63    73    82    63
   42    46    40    40    30
   18    19    16    16    12

Padding with fives:

   47    44    63    86    92
   47    50    79   104   104
   66    63    73    82    83
   62    46    40    40    50
   53    39    36    36    47

Duplicating border pixels to pad image:

   20    30    52    82    96
   35    50    79   104   112
   62    63    73    82    84
   58    46    40    40    40
   29    23    20    20    20

Renormalizing kernel and using only values within image:

  21.3333   32.0000   57.3333   88.0000  101.3333
  36.0000   50.0000   79.0000  104.0000  112.0000
  61.3333   63.0000   73.0000   82.0000   84.0000
  56.0000   46.0000   40.0000   40.0000   40.0000
  32.0000   25.3333   21.3333   21.3333   21.3333

Only processing inner (non-border) pixels:

    1     2     1     5     5
    1    50    79   104     9
    5    63    73    82     5
    5    46    40    40     2
    1     1     1     1     1
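The padded variants above can be reproduced with scipy.signal.convolve2d: zero padding is boundary='fill' with fillvalue=0, padding with fives uses fillvalue=5, and duplicating border pixels corresponds to boundary='symm' (symmetric reflection, which for a 3x3 kernel repeats the border pixel once). This sketch prints the three unnormalized grids:

import numpy as np
from scipy.signal import convolve2d

image = np.array([[1, 2, 1, 5, 5],
                  [1, 2, 7, 9, 9],
                  [5, 5, 5, 5, 5],
                  [5, 2, 2, 2, 2],
                  [1, 1, 1, 1, 1]])

kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]])

print(convolve2d(image, kernel, mode='same', boundary='fill', fillvalue=0))  # padding with zeroes
print(convolve2d(image, kernel, mode='same', boundary='fill', fillvalue=5))  # padding with fives
print(convolve2d(image, kernel, mode='same', boundary='symm'))               # duplicated border pixels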

logistic edit

https://math.stackexchange.com/questions/4648174/chaotic-behavior-of-the-logistic-map-at-r-4

It is well-known that a parametrization of   transforms the dynamic to  , the destructive left-shift of the binary digit sequence of  , removing the leading digit in every step. For a period-3 cycle you now need a rational number with a period-3 binary digit sequence like  , thus take  

This logic also predicts that using double precision floating point numbers the cycle will deteriorate in about 50 iterations, as then the encoding bits of the initial value are used up, and the remaining contents in   will be random bits produced by the floating point truncation errors during the computation.
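A quick numerical check of this prediction, assuming the standard conjugacy x = sin²(πθ) (under which the logistic map at r = 4 becomes the doubling map on θ), so the period-3 angle θ = 1/7 gives the cycle {sin²(π/7), sin²(2π/7), sin²(4π/7)}:

import math

# exact period-3 cycle of x -> 4x(1-x), from theta = 1/7
cycle = [math.sin(math.pi * k / 7) ** 2 for k in (1, 2, 4)]

x = cycle[0]
for n in range(1, 80):
    x = 4.0 * x * (1.0 - x)
    err = abs(x - cycle[n % 3])        # distance to the exact cycle point
    if n % 5 == 0:
        print(f"n = {n:3d}   error = {err:.3e}")   # grows roughly like 2^n * 1e-16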

wrt m edit

Define the iterated function   by:

 


Pick arbitrary names for the iterated function and its derivatives. These derivatives can be computed by recurrence relations.

Derivation of recurrence relations edit

First basic rules:

  • Iterated function   is a function composition
  • product rule : the derivative of the function h(x) = f(x) g(x) with respect to x is  
  • Apply the chain rule for composite functions, so if   then  
  • z0 is a constant value
  • m is a variable here


 

 


 


 

result edit

description math notation arbitrary names Initial conditions Recurrence steps common names
iterated function         z or f
first derivative with respect to m         d

M islands edit

// https://www.shadertoy.com/view/NtKXRy
// fast deep mandelbrot zoom Created by peabrainiac in 2022-01-01

struct Minibrot {
    vec2 position;
    vec2 scale;
    vec2 a;
    float approximationRadius;
    float period;
    float innerApproximationRadius;
};

vec2 cmul(vec2 a, vec2 b){
    return vec2(dot(a,vec2(1.0,-1.0)*b),dot(a,b.yx));
}

vec2 cdiv(vec2 a, vec2 b){
    return vec2(dot(a,b),dot(a.yx,vec2(1.0,-1.0)*b))/dot(b,b);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // minibrots that this zoom passes by, in order. generated using another (currently private) shader.
    // those values are all relative to the previous minibrot each, which is also why we got fractional periods occuring here.
    Minibrot[] minibrots = Minibrot[](
        Minibrot(vec2(0.0,0.0),vec2(1.0000000,0.0),vec2(1.0000000,0.0),1e310,1.0000000,1e310),
        Minibrot(vec2(-1.7548777,7.6738615e-13),vec2(0.019035511,8.2273679e-14),vec2(-9.2988729,1.7585520e-11),0.14589541,3.0000000,0.14539978),
        Minibrot(vec2(0.26577526,7.9269924e-14),vec2(1.7697619e-7,-5.1340973e-17),vec2(270.20074,4.0151175e-8),0.000077616154,24.666666,0.0063840700),
        Minibrot(vec2(-108.77785,-54.851582),vec2(-0.0066274544,-0.0015506834),vec2(-1.3899392,-12.041053),2.4771779,2.0540543,2.4771934),
        Minibrot(vec2(7.1870313,8.6428413),vec2(-0.0000022837414,-0.0000032153393),vec2(-220.36572,-444.80170),0.036942456,2.4868422,0.037870880),
        Minibrot(vec2(-1568.3745,271.39987),vec2(0.000062814426,-0.00053683209),vec2(32.133900,28.593172),1.9547913,1.8042328,1.9547923),
        Minibrot(vec2(-39.815723,-13.059175),vec2(2.0205009e-8,-1.0168816e-8),vec2(-6508.9990,-1532.6521),0.0095185349,2.5542521,0.0094131418),
        Minibrot(vec2(-36646.895,-15671.298),vec2(0.000033099699,-0.000025827576),vec2(-145.93997,-50.201008),3.9468935,1.7830081,3.9468935),
        Minibrot(vec2(45.757519,-169.32626),vec2(-3.0256565e-11,4.0970952e-11),vec2(62932.395,-125135.27),0.0021758196,2.5608499,0.0021774585),
        Minibrot(vec2(-1356.5258,127.47163),vec2(-1.4676038e-11,-1.9145330e-10),vec2(-49048.219,-52953.777),0.091265261,2.3904953,0.091265261),
        Minibrot(vec2(617.66748,-1510.6793),vec2(-1.3358331e-11,1.0865342e-11),vec2(80652.148,-227075.38),0.089992270,2.4183233,0.089992270),
        Minibrot(vec2(-3096.2500,389.08243),vec2(-5.4140579e-13,-3.8956390e-12),vec2(-331122.59,-380331.28),0.14324503,2.4135096,0.14324503),
        Minibrot(vec2(-4897.3506,-2047.1642),vec2(5.5859196e-13,-4.3974197e-13),vec2(1120745.9,388190.56),0.16162916,2.4143343,0.16162916),
        Minibrot(vec2(-8970.1211,-2440.9661),vec2(8.1748722e-14,-1.0900626e-13),vec2(-2423142.8,-1211603.1),0.17583868,2.4141929,0.17583868)
    );
    
 

Tips edit

Basilica tips edit

  • 0, 1/2,
  • 1/4, 3/4,
  • 1/8, 3/8, 5/8, 7/8


Mandelbrot tips edit

Shrub ( using Romero-Pastor notation )

  • The end of the period-doubling cascade of any hyperbolic component hc is its Myrberg-Feigenbaum point MF(hc)
  • chaotic part of q/p limb
    • shrub is what emerges from MF(hc), due to its shape
    • filament
    • antenna
    • The chaotic region is made up of an infinity of hyperbolic components mounted on an infinity of shrub branches in each one of the infinity shrubs of the family.


Branches (arms) of the shrub:

  • The main branch is the branch that emerges from the MF point. We associate the number 0 to the main branch because, as we shall see later, this branch does not belong to the shrub. The main branch finishes at the main node (whose associated number is also 0), where the branches of the first level begin; therefore the main node is the point where the shrub in fact begins.
  • It has p arms (spokes, branches) numbered from 0 to p-1 in a clockwise direction, following the branches in steps of q (where q is the numerator of q/p).
  • We call the hyperbolic component with the smallest period in a branch (which is also the one with the biggest size) the hyperbolic component representative of this branch, or simply its representative.

The period of the representative pr of any level-m branch of a primary shrub q/p is q + the sum of the m digits of the branch's associated number.

Arm number lists (numbers 0 to p-1 assigned in a clockwise direction, stepping by q modulo p); a minimal sketch that generates these lists follows the examples below:

  • for q/p = 1/3 is {0,1,2}
  • for q/p = 1/5 is {0,1,2,3,4}
  • for q/p = 2/5 is {0,3,1,4,2}
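A minimal sketch that generates these lists: the arm reached after i clockwise steps of size q gets the number i, i.e. position (i·q) mod p receives the number i. This reproduces the three examples above (the rule is my reading of the Romero-Pastor description):

def arm_numbers(q, p):
    """Arm numbers of the q/p shrub, listed in clockwise order."""
    numbers = [0] * p
    for i in range(p):
        numbers[(i * q) % p] = i
    return numbers

if __name__ == "__main__":
    for q, p in ((1, 3), (1, 5), (2, 5)):
        print(f"q/p = {q}/{p}:", arm_numbers(q, p))   # [0,1,2], [0,1,2,3,4], [0,3,1,4,2]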




Points ( nodes) = misiurewicz points

  • node of the shrub
    • branching points
      • Main node of the shrub ( associated number is 0) = Principal misiurewicz point of p/q-wake is  .
    • end of periodic cascade = Myrberg-Feigenbaum point MF(hc)
  • tips of arms
    • first tip of the shrub = ftip
    • last tip of the shrub = ltip

Topological types edit

all Misiurewicz points are centers of the spirals, which are turning:[27]

  • fast
  • slow
  • no turn : if the Misiurewicz point is a real number, it does not turn at all


Spirals can also be classified by the number of arms.

Visual types:[28]

  • branch tips = terminal points of the branches[29] or tips of the midgets[30]
  • branch point = points where branches meet[31]
    • centers of slow spirals with more than 1 arm
    • centers of spirals = fast spiral
  • band-merging points of chaotic bands (the separator of the chaotic bands   and   )[32] = 2 arm spiral



example tips edit

ftip(1/3) = M_{2,1}



ftip(2/5) =   = 0.0100(1) = 0.0101(0) ;

The angle  5/16  or  0101 has  preperiod = 4  and  period = 1. Entropy: e^h = 2^B = λ = 1.45108509 The corresponding parameter ray lands at a Misiurewicz point of preperiod 4 and period dividing 1.
Do you want to draw the ray and to shift c to the landing point? c = -0.636754346582390  +0.685031297083677 i    period = 0

My method edit

  • all numbers are decimal ratios
  • start from the angles of q/p-wake
  • number of main tips n = p - 1
  • step between tips  ??? s = wake_width/p
  • Least common multiple , between

1/2 wake edit

  • 1/3 wake
  • 1/2 tip
  • 2/3 wake


   


The angle  5/12  or  01p10 has  preperiod = 2  and  period = 2. The corresponding parameter ray lands
at a Misiurewicz point of preperiod 2 and period dividing 2. 
Do you want to draw the ray and to shift c to the landing point?
c = -1.543689012692076  +0.000000000000000 i    


Important points

  • root point between periods 1 and 2: c = -0.75 = -3/4 = bifurcation point for internal angle 1/2. Landing point of the 2 external rays 1/3 and 2/3
  • center of the period 2 component: c = -1
  • tip of the main antenna: c = -2 =  . It is the landing point of the external ray for angle  
  • c = -1.543689012692076 = principal Misiurewicz point of wake 1/2 =   = landing point of external rays 5/12 and 7/12

1/3 wake edit

The 1/3-wake of the main cardioid is bounded by the parameter rays with the angles

  • 1/7
  • 2/7

so

  • p = 3
  • wake_width = 1/7
  • number of main tips n = p - 1 = 2
  • step between tips s = wake_width/p = 1/21

Angles

  • 1/7 = wake
  • ftip
  • ltip
  • 2/7 = wake


The angle 3/14 or 0p011 has preperiod = 1 and period = 3. The corresponding parameter ray lands at a Misiurewicz point of preperiod 1 and period dividing 3. Do you want to draw the ray and to shift c to the landing point? c = -0.155788496687127 +1.112217114596038 i period = 0


The angle 2/7 or p010 has preperiod = 0 and period = 3. The conjugate angle is 1/7 or p001 . The kneading sequence is AA* and the internal address is 1-3 . The corresponding parameter rays land at the root of a satellite component of period 3. It bifurcates from period 1. Do you want to draw the rays and to shift c to the corresponding center?


.................... Data :

Angles

  • 1/7 = wake
  • 1/4 = 0.00(1) =   = c = -0.228155493653962 +1.115142508039937i = ftip
  • 1/6 = 0.0(01) =   = c = i = ltip
  • 2/7 = wake


The angle  1/4  or  01 has  preperiod = 2  and  period = 1. Entropy: e^h = 2^B = λ = 1.69562077 
The corresponding parameter ray lands at a Misiurewicz point of preperiod 2 and period dividing 1.
Do you want to draw the ray and to shift c to the landing point?
c = -0.228155493653962  +1.115142508039937 i    period = 0
The angle  1/6  or  0p01 has  preperiod = 1  and  period = 2.  The corresponding parameter ray lands at a Misiurewicz point of preperiod 1 and period dividing 2.
Do you want to draw the ray and to shift c to the landing point?
c = -0.000000000000000  +1.000000000000000 i    period = 10000

The angle 10/56 = 5/28  = 00p101 has  preperiod = 2  and  period = 3. The corresponding parameter ray lands at a Misiurewicz point of preperiod 2 and period dividing 3.

Do you want to draw the ray and to shift c to the landing point?

c = -0.007567415510079  +1.030984699759986 i  =  M_{2,3}

The angle  12/56 = 3/14  or  0p011 has  preperiod = 1  and  period = 3. The corresponding parameter ray lands at a Misiurewicz point of preperiod 1 and period dividing 3.
Do you want to draw the ray and to shift c to the landing point?
c = -0.155788496687127  +1.112217114596038 i   = M_{1,3}

The angle  13/56  or  001p110 has  preperiod = 3  and  period = 3. The corresponding parameter ray lands at a Misiurewicz point of preperiod 3 and period dividing 3.
Do you want to draw the ray and to shift c to the landing point? 
 c = -0.206384671763432  +1.122483745619330 i  = M_{3,3}
 

res edit

 

1/4 edit

 

Claude edit

Method by Claude

Algorithm

  • find angles of the wake
  • find angles of principal Misiurewicz point
  • find angles of spoke's tips using: "The tip of each spoke is the longest matching prefix of neighbouring angles, with 1 appended"

tips 1/3 edit

  • ftip  
  • ltip : The angle 1/6 or 0p01 has preperiod = 1 and period = 2. The corresponding parameter ray lands at a Misiurewicz point of preperiod 1 and period dividing 2. c= i is the landing point

1/3 edit

3 angles landing on M:

  0.001(010) 
  0.001(100)
  0.010(001)

The tip of each spoke is the longest matching prefix of neighbouring angles, with 1 appended

  0.001(010) 
  0.0011
  0.001(100)
  0.01
  0.010(001)


The angle 3/16 or 0011 has preperiod = 4 and period = 1. Entropy: e^h = 2^B = λ = 1.59898328 The corresponding parameter ray lands at a Misiurewicz point of preperiod 4 and period dividing 1. Do you want to draw the ray and to shift c to the landing point?

c = -0.017187977338350 +1.037652343793215 i period = 0

The angle 1/4 or 01 has preperiod = 2 and period = 1. Entropy: e^h = 2^B = λ = 1.69562077 The corresponding parameter ray lands at a Misiurewicz point of preperiod 2 and period dividing 1. Do you want to draw the ray and to shift c to the landing point?

M_{2,1} = c = -0.228155493653962 +1.115142508039937 i


The angle 1/6 or 0p01 has preperiod = 1 and period = 2. The corresponding parameter ray lands at a Misiurewicz point of preperiod 1 and period dividing 2. Do you want to draw the ray and to shift c to the landing point? c = -0.000000000000000 +1.000000000000000 i period = 10000

smooth (continuous) interpolation between 2 functions edit

way to invert the Mandelbrot set or how to get from c to 1/c edit

  • Fraktoler Curious : z_(n+1) = (z_n)^2 + (tan(t) + i*c)/(i + c*tan(t)) with t from 0 to pi/2 gives interpolation between z^2 + c and z^2 + 1/c
  • Arneauxtje
    • 20 different methods: First the cardioid of the main body of the Mandelbrot set is converted to a circle


They all group into 3 families:

  • addition: c -> c - c*t + t/c, with t = 0..1
  • multiplication: c -> c/(t*(c*c - 1) + 1), with t = 0..1
  • exponentiation: c -> c^t, with t = 1..-1

The first 2 are pretty elementary to work out. The 3rd makes use of the fact that c = a + ib = r*exp(i*phi), where r = sqrt(a*a + b*b) and phi = arctan(b/a); then c^t = r^t*exp(t*i*phi) = r^t*[cos(t*phi) + i*sin(t*phi)].
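A minimal sketch coding the three families exactly as quoted above; at the endpoints of t each map sends c to c and to 1/c respectively (the test value of c is arbitrary):

import cmath

def interp_addition(c, t):         # t = 0..1
    return c - c * t + t / c

def interp_multiplication(c, t):   # t = 0..1
    return c / (t * (c * c - 1) + 1)

def interp_exponentiation(c, t):   # t = 1..-1
    r, phi = abs(c), cmath.phase(c)
    return r ** t * cmath.exp(1j * t * phi)

if __name__ == "__main__":
    c = 0.3 + 0.4j                                    # 1/c = 1.2 - 1.6i
    print(interp_addition(c, 0), interp_addition(c, 1))
    print(interp_multiplication(c, 0), interp_multiplication(c, 1))
    print(interp_exponentiation(c, 1), interp_exponentiation(c, -1))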

How to straighten a curve? edit

straighten a parabola edit

unfold a spiral edit

Unroll cardioid edit

subdivision edit

Tekst nagłówka edit

esher edit

mapping edit

Difference between transformation and projection? edit

"A projection is a kind of transformation.

A transformation is any modification of a coordinate that expresses that coordinate relative to a coordinate system that is different from its original coordinate system. As such, a transformation involves two coordinate systems: the source and the destination.

A projection is any transformation where the destination coordinate system has fewer dimensions than the source coordinate system. That is, it removes one or more dimensions from the coordinate." Nicol Bolas[34]

complex edit

log edit

http://www.carlostoro.com/section/main/complex_functions

 

In our example of the complex logarithm, we will proceed to see how the unit circumference is transformed through it, i.e. we take the set

 

We know that its image under the logarithm will be contained in the imaginary axis, since

 

In addition, we know that it will lie in the interval   of the imaginary axis, because

 

Thus the unit circumference is transformed into the segment   of the imaginary axis.

Therefore, it is easy to see now that any circumference of radius r is transformed into a segment  

And from this we deduce that the whole complex plane becomes the band given by the whole set

 

Observe that the negative part of the band is the image of the interior of the circumference of radius one and the positive part is the image of the exterior, as shown in the following figure:


exp edit

http://lymcanada.org/54/ : "The hidden periodicity of the complex exponential can be seen more dramatically when viewed as a transformation of an orthogonal grid of curves on a plane, or, as Riemann suggested, on the surface of a very large sphere. (See Figure 11.)

 

m set edit

https://fractalforums.org/fractal-programs-discussion-help-and-support/12/can-you-find-the-location-of-this-fractal/4963/new;topicseen#new The period 1 circle on the right takes up 66 of 601 vertical pixels, representing 1 turn. By trigonometry, c = -1 - 0.25/sin(pi*66/601) = -1.739 ± 0.005.

Angle t subtended by a circle of radius r: draw a right-angled triangle from the point outside the circle to the center of the circle to a tangent point on the circle (pick one side). The right angle is on the circle, and the angle at the point outside the circle is t/2, so hypotenuse * sin(t/2) = r. So the distance from the point to the center of the circle is hypotenuse = r / sin(t/2).
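A small sketch of this estimate with the numbers from the quoted post (the circle has radius 0.25 and is centered at −1, and it spans 66 of 601 pixels of a view covering one full turn):

import math

def distance_to_center(r, t):
    """Distance from an observer point to the center of a circle of radius r
    that subtends the angle t (in radians) at that point:  d = r / sin(t/2)."""
    return r / math.sin(t / 2.0)

if __name__ == "__main__":
    t = 2.0 * math.pi * 66.0 / 601.0      # subtended angle
    d = distance_to_center(0.25, t)
    print(-1.0 - d)                       # about -1.739, the estimated location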

conformal edit


Hyperbolic circle edit

Cayley edit

riemann edit


how to compute explicit riemann mapping for central component of basilica julia set?


ChatGPT

  • The computation of an explicit Riemann mapping for the central component of the Basilica Julia set is a highly complex task. The Basilica Julia set is associated with the quadratic map defined by the equation:

f(z) = z^2 + c where c is a complex constant. The central component of the Basilica Julia set is the largest connected component of points that do not escape to infinity under iteration of the map.

  • Finding an explicit Riemann mapping for such a set involves determining a conformal mapping that maps the interior of the unit disk onto the central component of the Basilica Julia set. Unfortunately, due to the intricate and fractal nature of the set, there is no known closed-form formula or simple algorithm to compute such a mapping.
  • The study of Riemann mappings and Julia sets is a complex field of research in complex analysis and fractal geometry. Researchers typically employ numerical approximation techniques, computational algorithms, and advanced mathematical tools to study and approximate Riemann mappings for complicated Julia sets.
  • If you are interested in the specific Riemann mapping for the central component of the Basilica Julia set, I would recommend consulting research papers, textbooks, or contacting experts in the field of complex analysis and fractal geometry for the most up-to-date information and techniques available.



How to compute approximate Riemann mappings for Central component of Basilica Julia set? ChatGPT

  • Computing approximate Riemann mappings for the central component of the Basilica Julia set involves using numerical methods and iterative algorithms. Here's a general outline of the process:
  • Choose a suitable domain: Start by selecting a domain in the complex plane that encompasses the central component of the Basilica Julia set. A common choice is the unit disk (or a slightly larger disk centered at the origin).
  • Discretize the domain: Divide the chosen domain into a grid of points. The density of the grid will depend on the desired accuracy of the approximation.
  • Iterate the quadratic map: For each point in the grid, apply a large number of iterations of the quadratic map f(z) = z^2 + c, where c is a complex constant associated with the Basilica Julia set. Keep track of the last few iterates (e.g., the last 100 iterates) for each point.
  • Normalize the iterates: Normalize the iterates obtained in step 3 by dividing each iterate by the maximum absolute value of the iterates for that particular point. This step helps ensure that the iterates are bounded and suitable for further processing.
  • Apply conformal mapping algorithms: Use numerical algorithms to approximate a conformal mapping that transforms the normalized iterates onto the unit disk or another desired target domain. These algorithms aim to find a conformal map that best preserves angles and shapes. There are several methods available, such as
  • Validate and refine the approximation: Check the accuracy of the approximate Riemann mapping by comparing the images of known points or curves on the Basilica Julia set. If needed, refine the discretization or adjust the parameters of the conformal mapping algorithm to improve the accuracy.
  • It's important to note that the process described above can be computationally intensive, and the accuracy of the approximation depends on the chosen domain, discretization, and numerical methods used. The complexity of the Basilica Julia set makes it a challenging problem, and obtaining highly accurate Riemann mappings may require advanced techniques and extensive computational resources.
  • To dive deeper into this topic, I recommend exploring research papers, textbooks, or consulting with experts in complex analysis, fractal geometry, or numerical methods for more specific algorithms and implementations.


Moebius edit

https://twitter.com/gabrielpeyre/status/1663409907276578819?s=20 A sub-group of Möbius transforms are those preserving the Poincaré upper-half (equivalently the disk). It corresponds to PSL(2) the 2x2 matrices with determinant 1. https://en.wikipedia.org/wiki/M%C3%B6bius_transformation#Subgroups_of_the_M%C3%B6bius_group

mapping Poincaré upper-half to the disk

 

reverse mapping ( disk to upper half)

 
 
Orthogonal trajectories of tree rings

Period doubling cascade edit

 
Self-similarity in the Mandelbrot set shown by zooming in on a round feature while panning in the negative-x direction. The display center pans from (−1, 0) to (−1.31, 0) while the view magnifies from 0.5 × 0.5 to 0.12 × 0.12 to approximate the Feigenbaum ratio.

In the case of the Mandelbrot set for complex quadratic polynomial

 

first Feigenbaum constant edit

the Feigenbaum constant is the ratio between the diameters of successive circles on the real axis in the complex plane (see animation on the right).

n | period 2^n | bifurcation parameter c_n | ratio
1 | 2 | −0.75 | 
2 | 4 | −1.25 | 
3 | 8 | −1.3680989 | 4.2337
4 | 16 | −1.3940462 | 4.5515
5 | 32 | −1.3996312 | 4.6458
6 | 64 | −1.4008287 | 4.6639
7 | 128 | −1.4010853 | 4.6682
8 | 256 | −1.4011402 | 4.6689
9 | 512 | −1.401151982029 | 
10 | 1024 | −1.401154502237 | 
limit | | −1.4011551890 | 

The bifurcation parameter c_n is the root point of the period-2^n component. This series converges to the Feigenbaum point c = −1.401155… The ratio in the last column converges to the first Feigenbaum constant.

Other maps also reproduce this ratio, in this sense the Feigenbaum constant in bifurcation theory is analogous to π in geometry and e in calculus.
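The ratio column can be recomputed directly from the bifurcation parameters in the table; a minimal sketch:

# root points c_n of the period-2^n components on the real axis (from the table above)
c = [-0.75, -1.25, -1.3680989, -1.3940462, -1.3996312,
     -1.4008287, -1.4010853, -1.4011402, -1.401151982029, -1.401154502237]

for i in range(2, len(c)):
    delta = (c[i - 1] - c[i - 2]) / (c[i] - c[i - 1])
    print(f"n = {i + 1:2d}   delta = {delta:.4f}")   # approaches 4.6692..., the first Feigenbaum constant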


precision edit

 
Mandelbrot zoom
n | period 2^n | bifurcation parameter c_n | width (c_n − c_{n+1}) | size (approx) | zoom | precision
0 | 1 | 0.25 | 1.0 | | |
1 | 2 | −0.75 | 0.5 | | |
2 | 4 | −1.25 | 0.1180989 | | |
3 | 8 | −1.3680989 | 0.0259473 = 2.59473×10⁻² | | |
4 | 16 | −1.3940462 | 0.005585 = 5.585×10⁻³ | | |
5 | 32 | −1.3996312 | 1.1975×10⁻³ | | |
6 | 64 | −1.4008287 | 2.566×10⁻⁴ | | |
7 | 128 | −1.4010853 | 5.49×10⁻⁵ | | |
8 | 256 | −1.4011402 | 1.1782029×10⁻⁵ | | |
9 | 512 | −1.401151982029 | 2.520208×10⁻⁶ | | |
10 | 1024 | −1.401154502237 | | | |
limit | | −1.4011551890 | | | |

Misiurewicz edit

internal address = 1 -(1/3)-> 3 -(1/4)-> 12 -(1/2)-> 24 ; M_{12,3} = c = -0.012652382460890 +0.738185971487936 i ; 4 external arguments


Mandelbrot Set Height Field edit

Generating function (continuous escape time): cet = n + log2(ln R) − log2(ln|z|) https://www.researchgate.net/publication/335572530_Fractals_Generating_Techniques

binary edit

Here denominator n of fraction r :

 

is even:

 

and q is odd.

cases:

  • q = 1: then one uses the equivalent infinite representation of the finite binary fraction
  • q is a prime number:
    • if q = 2^p - 1, then the period = p
    • if q is not one less than a power of two, then the period p = (minimal length of the repeating part)  
  • q is a composite number: the period p "is the same as for the denominator q. It is either Phi(q) or a strict divisor of Phi(q). You can determine it numerically when you double 1/q again and again until you get 1/q." (Wolf Jung)



Binary expansion of unitary fractions with even denominator 1/(2k)

k | 1/(2k) | factors: 2k = q·2^t | infinite binary expansion | preperiod, period
1 | 1/2 | 2 | 0.0(1) | 1, 1
2 | 1/4 | 2^2 | 0.00(1) | 2, 1
3 | 1/6 | 2*3 | 0.0(01) | 1, 2
4 | 1/8 | 2^3 | 0.000(1) | 3, 1
5 | 1/10 | 2*5 | 0.0(0011) | 1, 4
6 | 1/12 | 2*2*3 | 0.00(01) | 2, 2
7 | 1/14 | 2*7 | 0.0(001) | 1, 3
8 | 1/16 | 2^4 | 0.0000(1) | 4, 1
9 | 1/18 | 2*9 | 0.0(000111) | 1, 6
10 | 1/20 | 2*2*5 | 0.00(0011) | 2, 4
11 | 1/22 | 2*11 | 0.0(0001011101) | 1, 10
15 | 1/30 | 2*3*5 | 0.0(0001) | 1, 4
16 | 1/32 | 2^5 | 0.00000(1) | 5, 1
21 | 1/42 | 2*3*7 | 0.0(000011) | 1, 6
27 | 1/54 | 2*3*9 | 0.0(000010010111101101) | 1, 18
32 | 1/64 | 2^6 | 0.000000(1) | 6, 1
33 | 1/66 | 2*3*11 | 0.0(0000011111) | 1, 10
54 | 1/108 | 2*2*3*9 | 0.00(000010010111101101) | 2, 18
64 | 1/128 | 2^7 | 0.0000000(1) | 7, 1
66 | 1/132 | 2*2*3*11 | 0.00(0000011111) | 2, 10

Equivalent representations of preperiodic and periodic binary fractions edit

Note that

 

 

convert number edit

types based on the direction, radix, form edit

tools edit

Online


Programs:

  • Mandel by Wolf Jung:

algorithms edit

Method 2a (Multiplication by B using radix-b arithmetic) edit

It is method 2a in "The Art of Computer Programming", vol. 2, Radix Conversion, p. 319.

Convert fractional number u in base b to fractional number in base B = obtain radix-B representation of u


 

as follows:

 

 

 



notation:

  •   denotes floor function
  •   denotes  
  • radix = base of positional numeral system
  • radix point
  • Number type
    • integers (radix point at the right)
    • fraction = fractional number (radix point at the left)
      • terminating radix-b fraction  
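A minimal sketch of Method 2a for fractions: multiply the fraction by B, take the integer part as the next digit, and keep the fractional part. Exact rational arithmetic is used here so the digits are not polluted by floating-point error; the digit count is an arbitrary choice.

from fractions import Fraction

def to_radix_B(u, B, n_digits):
    """Radix-B digits of a fraction 0 <= u < 1 (repeated multiplication by B)."""
    digits = []
    for _ in range(n_digits):
        u *= B
        d = int(u)         # floor, since u >= 0
        digits.append(d)
        u -= d             # keep only the fractional part
    return digits

if __name__ == "__main__":
    print(to_radix_B(Fraction(1, 10), 2, 12))   # 1/10 in binary: 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1
    print(to_radix_B(Fraction(1, 7), 2, 12))    # 1/7 in binary: 0.(001)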


Notes:

knowledgedoor edit

( work in progress )

algorithm "convert_a_ratio_of_integers" using:


Example conversion: from decimal base ( radix) to binary .

Other bases, like binary, octal, decimal, hexadecimal, etc. , are also possible.

Steps :

  • u = (the fractional part to be converted) = ratio of integers  
  • We convert any decimal integers ( numerator and denominator) to binary BigInteger. Now the u is represented by a ratio of binary BigIntegers
  • using long Euclidean or integer division we get each binary digit and remainder with BigInteger.DivRem and we save every remainder in a hash table.
    • If we see one of the remainders again, then we know there is a repeating part between where we saw the remainder before and where we see it now.
    • Otherwise, we just go until we get an exact answer or we reach the limit of digits that we are willing to calculate.
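A minimal sketch of the steps above: long division with a table of remainders to detect the repeating part. Python integers play the role of BigInteger and divmod plays the role of BigInteger.DivRem; the function name and digit limit are my own choices.

def binary_expansion(p, q, max_digits=64):
    """Binary expansion of p/q (0 < p < q) as (preperiodic digits, periodic digits)."""
    digits, seen = [], {}
    r = p
    while r != 0 and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)        # remember where this remainder first occurred
        d, r = divmod(2 * r, q)      # next binary digit and the new remainder
        digits.append(d)
    if r in seen:                    # repeating part found
        k = seen[r]
        return digits[:k], digits[k:]
    return digits, []                # terminating (or truncated) expansion

if __name__ == "__main__":
    for q in (6, 10, 22, 54):
        pre, per = binary_expansion(1, q)
        print(f"1/{q} = 0.{''.join(map(str, pre))}({''.join(map(str, per))})")
        # 1/6 = 0.0(01), 1/10 = 0.0(0011), 1/22 = 0.0(0001011101), 1/54 = 0.0(000010010111101101)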

irrational edit

  • http://inversed.ru/InvMem.htm#InvMem_17
  • Irrational numbers fill in the gaps between Rational numbers on the number line
  • Classes of irrational numbers:
    • Irrational numbers that are not Brjuno numbers are said to be Cremer numbers.
    • Irrational numbers that are not Diophantine are called Liouville numbers.


Dirichlet's theorem on Diophantine approximation edit

for a given irrational α, the inequality

 

is satisfied by infinitely many integers p and q.


so score s :

 

for any irrational number, there are infinitely many fractions that score less than 1.


  • score of the fraction
    • the score could be the (absolute) difference between the number and the proposed fraction, multiplied by the denominator. (The lower the better.)
    • for each fraction, the score should be the absolute difference multiplied by the denominator squared. Lowest score wins.
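A quick illustration of the squared-denominator score, using three classical continued-fraction convergents of π; per Dirichlet's theorem such approximations score below 1:

import math

def score(alpha, p, q):
    """Score of the approximation p/q to alpha:  q^2 * |alpha - p/q|."""
    return q * q * abs(alpha - p / q)

if __name__ == "__main__":
    for p, q in ((22, 7), (333, 106), (355, 113)):
        print(f"{p}/{q}: score = {score(math.pi, p, q):.5f}")   # about 0.062, 0.935, 0.003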

Hurwitz's theorem (number theory) edit

 

keyhole contour and branch cuts edit

See

numerical edit


precision edit


The MSDN documentation is nonsensical and wrong.

    • Bad concept. Binary-floating-point format does not have any precision in decimal digits because it has no decimal digits at all. It represents numbers with a sign, a fixed number of binary digits (bits), and an exponent for a power of two.
    • Wrong on the high end. The floating-point format represents many numbers exactly, with infinite precision. For example, “3” is represented exactly. You can write it in decimal arbitrarily far, 3.0000000000…, and all of the decimal digits will be correct. Another example is 1.40129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125•10⁻⁴⁵. This number has 105 significant digits in decimal, but the `float` format represents it exactly (it is 2⁻¹⁴⁹).
    • Wrong on the low end. When “999999.97” is converted from decimal to `float`, the result is 1,000,000. So not even one decimal digit is correct.
    • Not a measure of accuracy. Because the `float` significand has 24 bits, the resolution of its lowest bit is about 2²³ times finer than the resolution of its highest bit. This is about 6.9 digits in the sense that log₁₀(2²³) is about 6.9. But that just tells us the resolution (the coarseness) of the representation. When we convert a number to the `float` format, we get a result that differs from the number by at most ½ of this resolution, because we round to the nearest representable value. So a conversion to `float` has a relative error of at most 1 part in 2²⁴, which corresponds to about 7.2 digits in the above sense. If we are using digits to measure resolution, then we say the resolution is about 7.2 digits, not that it is 6-9 digits.
    • Where do these numbers come from?

So, if “~6-9 digits” is not a correct concept, does not come from actual bounds on the digits, and does not measure accuracy, where does it come from? We cannot be sure, but 6 and 9 do appear in two descriptions of the `float` format.

6 is the largest number x for which this is guaranteed:

  • If any decimal numeral with at most x significant digits is within the normal exponent bounds of the `float` format and is converted to the nearest value represented in the format, then, when the result is converted to the nearest decimal numeral with at most x significant digits, the result of that conversion equals the original number.

So it is reasonable to say `float` can preserve at least six decimal digits. However, as we will see, there is no bound involving nine digits.

9 is the smallest number x that guarantees this:

  • If any finite `float` number is converted to the nearest decimal numeral with x digits, then, when the result is converted to the nearest value representable in `float`, the result of that conversion equals the original number.

As an analogy, if `float` is a container, then the largest “decimal container” guaranteed to fit inside it is six digits, and the smallest “decimal container” guaranteed to hold it is nine digits. 6 and 9 are akin to interior and exterior measurements of the `float` container.

Suppose you had a block 7.2 units long, and you were looking at its placement on a line of bricks each 1 unit long. If you put the start of the block at the start of a brick, it will extend over 7.2 bricks. However, if somebody else chooses where it starts, they might start it in the middle of a brick. Then it would cover part of that brick, all of the next 6 bricks, and part of the last brick (e.g., .5 + 6 + .7 = 7.2). So a 7.2-unit block is only guaranteed to cover 6 bricks. Conversely, 8 bricks can cover the 7.2-unit block if you choose where they are placed. But if somebody else chooses where they start, the first might cover just .1 units of the block. Then you need 7 more and another fraction, so 9 bricks are needed.

The reason this analogy holds is that powers of two and powers of 10 are irregularly spaced relative to each other. 2¹⁰ (1024) is near 10³ (1000). 10 is the exponent used in the `float` format for numbers from 1024 (inclusive) to 2048 (exclusive). So this interval from 1024 to 2048 is like a block that has been placed just after the 100-1000 block ends and the 1000-10,000 block starts.

But note that this property involving 9 digits is the exterior measurement; it is not a capability that `float` can perform or a service that it can provide. It is something that `float` needs (if it is to be held in a decimal format), not something it provides. So it is not a bound on how many digits a `float` can store.
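The two concrete claims above ("wrong on the high end" and "wrong on the low end") are easy to check with NumPy's 32-bit float type:

import numpy as np

# "wrong on the low end": 999999.97 rounds to the nearest float32, which is 1000000.0,
# so not even one decimal digit of the original survives
print(np.float32(999999.97))          # 1000000.0

# "wrong on the high end": 2**-149, the smallest positive subnormal float32,
# is represented exactly even though its decimal expansion has 105 significant digits
y = np.float32(2.0 ** -149)
print(y == 2.0 ** -149)               # True
print(f"{float(y):.60e}")             # 1.401298464324817...e-45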

    • Further Reading

For better understanding of floating-point arithmetic, consider studying [the IEEE-754 Standard for Floating-Point Arithmetic](https://en.wikipedia.org/wiki/IEEE_754) or a good textbook like [Handbook of Floating-Point Arithmetic by Jean-Michel Muller *et al*]

conformal edit

kleinian edit

https://www.facebook.com/groups/391950357895182/permalink/561042337652649/

Fast Kleinian Limit Set:
( post in True tile by Dieter Steemann)
Roger Bagula's various recent posts dealing with Kleinian limit sets 
stimulated me to look into old records. There I reproduced many of the 
incredible images shown in the book "Indra's Pearls". One limitation 
when I done this about 10 years ago was the time consuming computation 
of such images. Recently I came across a post of the well-known artist 
and engineer Jos Leys where he published a fast method for the 
computation of Kleinian limit sets [1]. This method is implemented in 
the Mathematica code below. Two images produced by this code are attached.

[1] Jos Leys:
A fast algorithm for limit sets of Kleinian groups with the Maskit 
parametrisation.
Posted on 2017-01-10 13:43
http://www.josleys.com/articles.php

---- Fast Kleinian Limit Set ----

Author: Dieter K. Steemann
Notebook: created 2018/Dec/17 using Mathematica 11

This notebook is based on the escape time algorithm for the generation 
of Kleinian limit sets developed by Jos Leys [1].
The following generators are used for the computation of Kleinian limit 
sets. Its Möbius transforms (a, b, A, B) are simplified versions of the 
common matrix notation.

a = (t*z - I)/(-I *z);
A = I/(I *z + t);
b = z + 2;
B = z - 2;

The left and right boundary lines and the separation line (blue) depend 
on the Maskit parameter t that defines the shape of the limit set. The 
additional parameter k and m are used to adapt the shape of the 
separation line (see example below).

(* - - - - Mathematica - - - - *)
t = 1.95 + .07 I; (*Maskit parameter*)
k = -.2; m = 5; (*separation parameter*)
separation[x_, t_, k_, m_] :=
Module[{u = Re[t], v = Im[t], f},
u/2 - Sign[x + v/2]* k*u (1 - Exp[-m*Abs[x + v/2]])]
Show[{Graphics[
Line[{{Re[#], Im[#]} & /@ {1, 1 - Im[t] + I*Re[t]},
ReIm@{-1, -1 - Im[t] + I*Re[t]}}], Axes -> True],
Plot[separation[x, t, -.2, 5], {x, -1, 1}]}, ImageSize -> 250]

(*Fast version*)

(*The uncompiled version is already quite fast but, the compiled version 
is still faster. On my laptop the 800x800 image of the example below is 
generated in less than 4 seconds.*)

$HistoryLength = 2;
kleinC = Compile[{{z, _Complex}},
Module[{t = 1.9 + .06 I, u, v, xi, yi, zi = (1. + I), k = 1, low = 1 
+ I,
high = 1 + I},
xi = Re[z]; yi = Im[z];
u = Re[t]; v = Im[t];
If[yi < 0 || yi > u, Return[0]];
Do[
xi = Mod[xi + 1. + v yi/u, 2] - 1 - v yi/u;
zi = xi + I yi;
(*check separation line*)
If[yi < u/2. + Sign[xi + v/2] 0.2*u (1 - Exp[-5. Abs[xi + v/2.]]),
If[Abs[zi - low] < 10^-4 , k = -n; Break[], low = zi];
k = 1;
zi =(*(t*zi-I)/(-I *zi)*)I*t + 1./zi,
If[Abs[zi - high] < 10^-4 , k = -n; Break[], high = zi];
zi = I/(I *zi + t);
k = 2];
xi = Re[zi]; yi = Im[zi];
If[yi < 0 || yi > u , Break[]];
k = 3,
{n, 1, 30}
];
k]
];

(*In this example the part belonging to the generators "a" and "A" are 
colored blue and red. The coordinates where the maximum iteration depth 
(escape time) is reached is colored green.*)

res = 800;
dodger = ColorData["Legacy", "DodgerBlue"];
xres = res; yres = res;
dx = 2./(xres - 1); dy = 2./(yres - 1);
t0 = AbsoluteTime[];
img = Table[kleinC[(x + I y)], {y, 0, 2, dy}, {x, -1, 1, dx}];
Print["runtime = ", Round[AbsoluteTime[] - t0, .01], " seconds"]

g = MatrixPlot[img, DataReversed -> {False, True}, ImageSize -> 550,
ColorFunctionScaling -> False,
ColorFunction -> (Which[# <= 0, White, # == 1, dodger, # == 2., Red, 
True,
Green] &)]

(*By using a pre-transform additional shapes can be generated. Below an 
inversion at a circle centered at p = (0 + I) with radius = 1 is 
executed.*)

res = 800;
{dodger, navy} = ColorData["Legacy", #] & /@ {"DodgerBlue", "Navy"};
xres = res; yres = res;
dx = 1/(xres - 1); dy = 1/(yres - 1);
p = -I; r2 = 1^2;
t0 = AbsoluteTime[];
img = Table[q = (x + I y) - p; z = p + r2/Conjugate[q];
kleinC[z], {y, -1, 0, dy}, {x, -.5, .5, dx}];
Print["runtime = ", Round[AbsoluteTime[] - t0, .01]]

g = MatrixPlot[img, DataReversed -> {False, True}, ImageSize -> 550,
AspectRatio -> 1, ColorFunctionScaling -> False,
ColorFunction -> (Which[# <= 0, White, # == 1, dodger, # == 2., Red, 
# < 0,
ColorData["BrightBands"][-#/40.], True, Green] &)]

(*Below the same parameter as above are used. But, the "inside" parts 
not belonging to limit set are colored according the required iteration 
depth for a stable two cycle.*)

res = 800;
{dodger, navy} = ColorData["Legacy", #] & /@ {"DodgerBlue", "Navy"};
xres = res; yres = res;
dx = 1/(xres - 1); dy = 1/(yres - 1);
p = -I; r2 = 1.^2;

t0 = AbsoluteTime[];
img = Table[q = (x + I y) - p; z = p + r2/Conjugate[q];
kleinC[z], {y, -1, 0, dy}, {x, -.5, .5, dx}];
Print["runtime = ", Round[AbsoluteTime[] - t0, .01]];

g = MatrixPlot[img, DataReversed -> {False, True}, ImageSize -> 550,
AspectRatio -> 1, ColorFunctionScaling -> False,
ColorFunction -> (Which[# == 0, White, # == 1, navy, # == 2., Red, # 
< 0,
ColorData["BrightBands"][-#/40.], True, Green] &)]

(*Below another pre-transform for an image based on the same Maskit 
parameter as above is generated. Again the escape time is used for 
coloring.*)

res = 800;
dodger = ColorData["Legacy", "DodgerBlue"];
xres = res; yres = res;
dx = 2.75/(xres - 1); dy = 2.75/(yres - 1);
p = .55 + 1 I; r2 = .75^2;

t0 = AbsoluteTime[];
img = Table[q = (x + I y) - p; z = p + r2/Conjugate[q];
kleinC[z], {y, -.75, 2, dy}, {x, -1.5, 1.25, dx}];
Print["runtime = ", Round[AbsoluteTime[] - t0, .01]];

g = MatrixPlot[img, DataReversed -> {False, True}, ImageSize -> 550,
AspectRatio -> 1, ColorFunctionScaling -> False,
  ColorFunction -> (Which[# == 0, White, # == 1, navy, # == 2., Red, # < 0,
     ColorData["BrightBands"][-#/40.], True, Green] &)]
(* - - - - End of code - - - - *)

Posted by: "Dieter K. Steemann" <steemann@aol.com>

recurrence relations edit

A recurrence relation is an equation (map) that recursively defines a sequence of points (for example, an orbit). To compute the sequence one needs:

  • a starting point
  • a map (equation = function = recurrence relation)

definitions edit

Define the iterated function f_c^n by:

 f_c^0(z) = z,   f_c^{n+1}(z) = f_c(f_c^n(z)),   where f_c(z) = z^2 + c

Please note that here n denotes function composition, not exponentiation!

Pick arbitrary names for the iterated function and its derivatives. These derivatives can be computed by recurrence relations.

description                         | arbitrary name | formula          | initial condition | recurrence step                      | common names
iterated function                   | z_n            | f_c^n(z_0)       | z_0 = 0           | z_{n+1} = z_n^2 + c                  | z or f
first derivative with respect to z  | dz_n           | d f_c^n / dz     | dz_0 = 1          | dz_{n+1} = 2 z_n dz_n                | dz, d (from derivative) or p (from prime) or f'
second derivative with respect to z | d2z_n          | d^2 f_c^n / dz^2 | d2z_0 = 0         | d2z_{n+1} = 2 (dz_n^2 + z_n d2z_n)   |
first derivative with respect to c  | dc_n           | d f_c^n / dc     | dc_0 = 0          | dc_{n+1} = 2 z_n dc_n + 1            | dc
second derivative with respect to c | d2c_n          | d^2 f_c^n / dc^2 | d2c_0 = 0         | d2c_{n+1} = 2 (dc_n^2 + z_n d2c_n)   |

(The recurrence steps follow from differentiating z_{n+1} = z_n^2 + c by the chain rule; the initial values shown are the usual choices when the critical point z_0 = 0 is iterated, as in the code below.)

Derivation of recurrence relations edit

A edit

 

 

B edit

 

 

C edit

 

 

D edit

 

 

E edit

 


 

Computing edit

/* A, B and c are complex numbers: A = z_n, B = dz_n */
A = 0;
B = 1;
for (m = 0; m < mMax; m++){
  /* first compute B, then A: B must use the previous value of A */
  B = 2*A*B;   /* first derivative with respect to z: dz_{n+1} = 2*z_n*dz_n */
  A = A*A + c; /* complex quadratic polynomial: z_{n+1} = z_n^2 + c */
}
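
The same pattern extends to all the derivatives listed in the table above. The following C sketch (the function name and the use of complex.h are illustrative assumptions, not code from this page) updates the standard chain-rule recurrences for z_{n+1} = z_n^2 + c in one loop; each derivative must be updated before z, so that it uses the previous value of z:

#include <complex.h>

/* sketch: iterate z_{n+1} = z_n^2 + c together with its derivatives;
   initial values are the usual choices when the critical point z_0 = 0 is iterated */
void iterate_with_derivatives(double complex c, int nMax)
{
    double complex z   = 0.0;   /* iterated function                   */
    double complex dz  = 1.0;   /* first derivative with respect to z  */
    double complex d2z = 0.0;   /* second derivative with respect to z */
    double complex dc  = 0.0;   /* first derivative with respect to c  */
    double complex d2c = 0.0;   /* second derivative with respect to c */

    for (int n = 0; n < nMax; n++) {
        /* derivatives first: every line uses the previous value of z */
        d2c = 2.0*(dc*dc + z*d2c);   /* d2c_{n+1} = 2 (dc_n^2 + z_n d2c_n) */
        dc  = 2.0*z*dc + 1.0;        /* dc_{n+1}  = 2 z_n dc_n + 1         */
        d2z = 2.0*(dz*dz + z*d2z);   /* d2z_{n+1} = 2 (dz_n^2 + z_n d2z_n) */
        dz  = 2.0*z*dz;              /* dz_{n+1}  = 2 z_n dz_n             */
        z   = z*z + c;               /* z_{n+1}   = z_n^2 + c              */
    }
    /* after the loop z, dz, d2z, dc and d2c hold the values for iteration nMax */
}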

rk4 edit

Fourth-order Runge-Kutta (RK4)


v is a vector function that for each point p

p = (x, y)

in a domain assigns a vector v:

 v(p) = ( v_x(x, y), v_y(x, y) )

where each of the functions v_x, v_y is a scalar function:

 v_x : R^2 → R,   v_y : R^2 → R

A field line is a line that is everywhere tangent to a given vector field at an instant in time.

Let r(s) be a field line given by a system of ordinary differential equations, which written in vector form is:

 dr/ds = v(r(s))

where:

  • s represents the arc length along the field line
  • r(0) is the seed point

Given a seed point r_n on the field line, the update rule (RK4) to find the next point r_{n+1} along the field line is:[36]

 r_{n+1} = r_n + (h/6) (k_1 + 2 k_2 + 2 k_3 + k_4)

where:

  • h is the step size
  • k_1, ..., k_4 are the intermediate vectors:

 k_1 = v(r_n)
 k_2 = v(r_n + (h/2) k_1)
 k_3 = v(r_n + (h/2) k_2)
 k_4 = v(r_n + h k_3)
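
A minimal C sketch of one RK4 step along a field line (the example field and the helper names are assumptions made for illustration; the field vector is normalized so that the parameter approximates arc length):

#include <math.h>

typedef struct { double x, y; } Vec2;

/* Example vector field (an assumption for illustration): a simple rotation field. */
static Vec2 field(Vec2 p) {
    Vec2 v = { -p.y, p.x };
    return v;
}

/* Normalize the field so that the parameter s approximates arc length. */
static Vec2 direction(Vec2 p) {
    Vec2 v = field(p);
    double len = hypot(v.x, v.y);
    if (len > 0.0) { v.x /= len; v.y /= len; }
    return v;
}

/* One RK4 step of size h from the point r_n along the field line. */
static Vec2 rk4_step(Vec2 p, double h) {
    Vec2 k1 = direction(p);
    Vec2 p2 = { p.x + 0.5*h*k1.x, p.y + 0.5*h*k1.y };
    Vec2 k2 = direction(p2);
    Vec2 p3 = { p.x + 0.5*h*k2.x, p.y + 0.5*h*k2.y };
    Vec2 k3 = direction(p3);
    Vec2 p4 = { p.x + h*k3.x, p.y + h*k3.y };
    Vec2 k4 = direction(p4);
    Vec2 next = { p.x + (h/6.0)*(k1.x + 2.0*k2.x + 2.0*k3.x + k4.x),
                  p.y + (h/6.0)*(k1.y + 2.0*k2.y + 2.0*k3.y + k4.y) };
    return next;
}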

scalar field edit

vector field edit

roots edit

scratchapixel edit

trace curve on the image edit

  • start from point c ( escape time level, angle)
  • go to the next point
    • crossing circular lines
    • not crossing radial lines


chain code edit

path finding edit

sort edit

g(g(z)) = z^2+c edit

Solutions :

  • the closed-form solution
  • expansion

Solution edit

Key words edit

  • Iterative Functional Equations
  • g = f^{1/2} = functional square root of f = half-iterate
  • fractional iterations
  • continuous iterations

curve edit

http://math.stackexchange.com/questions/186512/definition-of-winding-number-have-doubt-in-definition

It is a non-trivial fact from topology that given any curve

  $\gamma : [0, 1] \to \mathbb{C} \setminus \{a\}$

we can find a continuous polar coordinate expression about $a$ such that

  $\gamma(t) = a + r(t)e^{i\theta(t)}$


where $\theta$ and $r$ are continuous themselves (if the curve is differentiable, so are $r$ and $\theta$). Moreover $\theta$ and $r$ are uniquely defined up to multiples of $2\pi$, which is a freedom in the choice of the starting angle.

The main consequence of the above is that the natural definition of the winding number should be

$$\mathrm{Wnd}(a,\ \gamma) = \frac{\theta(1) - \theta(0)}{2\pi}$$

i.e. the total continuous angular change of the curve divided by $2\pi$. It should not be hard to convince yourself that this is always an integer for closed curves and represents the number of times $\gamma$ "winds" about $a$.

It remains to relate this intuitive definition to our integral definition, and the key component is the logarithm

$$\log(z) = \ln|z| + i\theta(z)$$

where the imaginary component of the logarithm is essentially an angle tracking function, provided that we can make it vary continuously. (The logarithm is inherently discontinuous along some branch cut, so one remedy is the above continuous polar expression. Another is to piece together multiple integrals to make the argument vary continuously.) So we have

$$\oint_\gamma \frac{1}{z-a}\ dz = \int_0^1 \frac{r'(t)e^{i\theta(t)} + i\theta'(t)r(t)e^{i\theta(t)}}{r(t)e^{i\theta(t)}}\ dt$$
$$=\int_0^1 \frac{r'(t)}{r(t)} + i\theta'(t)\ dt$$
$$=\log(r(1))-\log(r(0)) + i\left[\theta(1) - \theta(0)\right]$$

Since the curve is closed, it follows that $r(1) = r(0)$, so the expression evaluates to

$$i\left[\theta(1) - \theta(0)\right]= 2\pi i\cdot\mathrm{Wnd}(a,\ \gamma)$$

which is the justification for the integral definition.
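
As a hedged illustration of the "total angular change divided by 2π" definition, the following C sketch computes the winding number of a sampled closed curve (given as n vertices) about a point a = (ax, ay); it assumes consecutive samples turn by less than π, and all names are illustrative:

#include <math.h>

/* winding number of the closed polygon (x[i], y[i]), i = 0..n-1, about (ax, ay) */
int winding_number(const double *x, const double *y, int n, double ax, double ay)
{
    const double PI = 3.14159265358979323846;
    double total = 0.0;
    double prev  = atan2(y[0] - ay, x[0] - ax);
    for (int i = 1; i <= n; i++) {
        /* i % n wraps back to the first vertex, closing the curve */
        double cur = atan2(y[i % n] - ay, x[i % n] - ax);
        double d   = cur - prev;
        if (d >  PI) d -= 2.0*PI;   /* keep the angle change continuous */
        if (d < -PI) d += 2.0*PI;
        total += d;
        prev   = cur;
    }
    return (int)lround(total / (2.0*PI));   /* total angular change / 2*pi */
}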

bezier edit

Fit a logarithmic spiral to the data edit

http://blogs.sas.com/content/iml/2015/06/11/spiral-of-splatter.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+TheDoLoop+%28The+DO+Loop%29 http://blogs.sas.com/content/iml/2015/06/10/polar-angle-curve.html

" The logarithmic spiral is usually given explicitly as a polar (parametric) curve with the standard form R(θ) = a*exp(bθ), where a and b are unknown parameters. The second problem is that the spiral has multiple branches. For example, in the vertical direction there are several polar angles (π/2 and 5π/2), each with a different radius.

To construct a "spiral of best fit," you can do the following:

  • Choose a coordinate system and use the photo to find the (x, y) coordinates of points along the spiral curve.
  • Create an objective function whose minimum will produce parameters for the spiral that best fits the data. This problem has four parameters: the center of the spiral (x0, y0) and the two parameters a and b, the amplitude and decay rate of the logarithmic spiral.
  • Fit the curve to the data and see if the splatter marks on the wall appear to be modeled by the curve.


Create an objective function. To find the logarithmic spiral that best fits the data, you can minimize a function that computes the sum of the squared distances (in the radial direction) between the data and the predicted points along the spiral. Because the center of the spiral is unknown, you have to fit four parameters: the center of the spiral (x0, y0) and the parameters a and b.

For given values (x0, y0), you can center the (x, y) coordinates by subtracting the coordinates of the center point. In a previous blog post, I showed how to obtain an increasing sequence of polar angles from the coordinates of (centered) data that are ordered along a parametric curve. Let θ_i be the polar angle that corresponds to the point (x_i, y_i). By using the fact that Euclidean and polar coordinates are related by x = R cos(θ) and y = R sin(θ), you can write down the predicted Euclidean coordinates of the logarithmic spiral:

 u_i = a cos(θ_i) exp(b θ_i)
 v_i = a sin(θ_i) exp(b θ_i)

" Rick Wicklin
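
A small C sketch of such an objective function (the data are assumed to be already converted to centered polar coordinates (θ_i, r_i); the names are illustrative and this is not the code from the quoted post):

#include <math.h>

/* sum of squared radial residuals between data (theta[i], r[i]) and R(theta) = a*exp(b*theta) */
double spiral_sse(const double *theta, const double *r, int n, double a, double b)
{
    double sse = 0.0;
    for (int i = 0; i < n; i++) {
        double pred = a * exp(b * theta[i]);   /* predicted radius on the spiral */
        double res  = r[i] - pred;             /* radial residual                */
        sse += res * res;
    }
    /* minimize this over (a, b) -- and over the trial center (x0, y0) that was
       used to compute the centered polar coordinates (theta[i], r[i]) */
    return sse;
}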

1/3 edit

(%i1) z:x+y*%i;
(%o1) %i*y+x
(%i2) w:1/(z^3);
(%o2) 1/(%i*y+x)^3
(%i3) realpart(w);
(%o3) (x^3-3*x*y^2)/(y^2+x^2)^3
(%i4) imagpart(w);
(%o4) -(3*x^2*y-y^3)/(y^2+x^2)^3

Siegel disc edit

 n = 3
 t = t(n) = [0,3,10^n,g] = 0.3332223278292314
 c = c(t) = -0.124396035791842  +0.649518736914556 i

it is near t = 1/3 = 0.33333...


The Siegel disc is modelled by a repelling period-3 orbit:

 z = -0.310935327598721  +0.477068611518648 i
 z = -0.255309717939249  +0.352843766895320 i
 z = -0.183711707554503  +0.469349851709223 i

which is converging to the alpha fixed point (center of the Siegel disc):

 z = -0.183711707554503  +0.469349851709223 i

Here Siegel disc:

  • is a closed curve (Jordan curve)
  • is digitated = has 3 digits (such a process is called infolding)


3 digits with corners:

  • first digit
    • z = 0 = critical point = main corner of 1st digit
    • b(a(a(0))) = z = 0.229496637914054 +0.142071606596408 i // it should be aaa
  • second digit
    • z = a(0) = -0.626785322381975 +0.518134928914886 i = main corner of 2nd digit
    • abaa = z = -0.697335869115836 +0.363847001704895 i
  • third digit
    • a(a(0)) = z = -0.091911670378872 +0.714728649028403 i = main corner of 3rd digit
    • aabaa = z = -0.183398220134522 +0.778829082965258 i

external argument edit

http://www.mndynamics.com/indexp.html#XR


"the arguments arg(zn/(zn-c)) are computed. The function arg(z/(z-c)) is discontinuous on the straight line segment from the critical point z = 0 to the critical value z = c, which is drawn in red in the figure: when z belongs to this line segment, then z/(z-c) will be on the negative real axis. When the Julia set is connected, the location of the discontinuity can be moved onto a kind of curve joining 0 and c within the Julia set, such that the argument is continuous in the whole exterior: Suppose that z is crossing the red line from right to left, and moving to the fixed point αc and beyond. The principal value arg(z/(z-c)) is jumping from π to -π at the red line, goes up to about -0.68π at αc and stays continuous there. The modified function arg(z/(z-c)) is continuous with value π, as z is crossing the red line. It goes up to about 1.32π when z is approaching αc from the right, and jumps to about -0.68π as z is crossing αc to the left. When the modified function arg(z/(z-c)) is used instead of the principal value for computing argc(z) according to (3), this formula will be valid not only for large |z| but everywhere outside the Julia set. In practice, the Julia set may be approximated by two straight line segments from the fixed point αc to 0 and c, respectively: when zn is in the triangle between 0, c, and αc, adjust the principal value of arg(zn/(zn-c)) by ±2π. The triangle is given by three simple inequalities, cf. mndlbrot::turn(). In the two images below, the different colors indicate values of argc(z), and thus, external rays. The computed value of argc(z), and thus the color, is jumping on certain curves joining preimages of 0, where some iterate zn is crossing the line where arg(z/(z-c)) is discontinuous. In the left image, the principal value of arg(z/(z-c)) is used, and in the right image, the argument is adjusted. The erroneous discontinuities of the computed argc(z) are moved closer to the Julia set. " Wolf Jung


itinerary or kneading sequence edit

" The map z^2 + c has two inverses. E.g., take

z1 = root(z - c)   and   z2 = -root(z - c) .

And the map 2t has two inverses, t1 = t/2 and t2 = (t + 1)/2 .

If ray(t) is landing at z , then either ray(t1) is landing at z1 and ray(t2) is landing at z2 , or ray(t1) is landing at z2 and ray(t2) is landing at z1 .

It is difficult to choose the preimage more specifically. The itinerary or kneading sequence is working with the regions A and B defined by external and internal rays. See the images.

In the case of c = i , we have the external ray(1/6) at z = c and the rays ray(1/12) and ray(7/12) at z = 0 . In the case of the period-4 component with the external angles 3/15 and 4/15 at c , you may define the regions A and B either with ray(3/30) and ray(18/30) , or with ray(4/30) and ray(19/30) .

To define the maps A and B in the latter case, you should compute t1 and t2 and say if (4/30 < t1 < 19/30) then A(t) = t1 , B(t) = t2 else A(t) = t2 , B(t) = t1 .

To define the maps A(z) and B(z) properly, you would need a formula for the rays in the plane. So what I am doing in "inverse with argument adjusted" is just a rough approximation: Instead of the external rays I am using a straight line perpendicular to the line from z = 0 to z = c .

If this approximation is valid, the ray with A(t) should land at A(z) . In your example, you should modify the definition of A(t) and B(t) . But in the case of parameters in the main cardioid, the regions A and B are not defined precisely anyway. And I am still wondering about the meaning of internal rays when c is not 0."
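
For the period-4 example in the quote, the angle part of the maps A and B can be written directly as a small C sketch (the bounds 4/30 and 19/30 are the external angles named above; the struct and function names are illustrative):

/* the two inverses of the angle-doubling map t -> 2t (mod 1) are t/2 and (t+1)/2;
   A(t) returns the one lying between 4/30 and 19/30, B(t) returns the other */
typedef struct { double A, B; } ABPair;

ABPair ab_angles(double t)
{
    double t1 = t / 2.0;           /* first inverse  */
    double t2 = (t + 1.0) / 2.0;   /* second inverse */
    ABPair r;
    if (t1 > 4.0/30.0 && t1 < 19.0/30.0) { r.A = t1; r.B = t2; }
    else                                 { r.A = t2; r.B = t1; }
    return r;
}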

parabolic edit

https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-23-25-32238 "Four kinds of two dimensional orthogonal coordinate systems. (a) The polar coordinate system. The constant r and φ curves are a series of circles and rays, respectively. (b) The parabolic coordinate system. The constant u and v curves are two sets of confocal parabolas. (c) The bipolar coordinate system. The constant u and v curves are two groups of non-concentric circles. (d) The elliptic coordinate system. The constant u and v curves are confocal ellipses and hyperbolas, respectively."


Topological classification of differential equations on the plane in the neighbourhood (near) of the singular point. Dynamical Systems I: Ordinary Differential Equations and Smooth Dynamical ... Authors: S.Kh. Aranson, I.U. Bronshtein, V.Z.


"ently, Möbious automorphisms of D. We distinguish 3 cases: 1. ∆2 = 0. We call this case parabolic. The vector field σ has one zero at the boundary of order 2. 2. ∆2 > 0. We call this case hyperbolic. The vector field σ has two zeros at the boundary of order 1. 3. ∆2 < 0. We call this case elliptic. The vector field σ has one zeros inside of D" http://arxiv.org/pdf/1510.04123.pdf General Slit Stochastic Lowner Evolution ¨ and Conformal Field Theory Alexey Tochin


A line field is parabolic if it is given by µ = A∗(dz̄/dz) for some A ∈ Aut(Ĉ). A parabolic line field is tangent to the pencil of circles passing through a given point in a given direction; see Figure 2.1. When µ = dz̄/dz, the circles become the horizontal lines in the plane. We now show that any hyperbolic manifold whose injectivity radius is bounded above is quasiconformally rigid. More generally we have: http://www.math.harvard.edu/~ctm/papers/home/text/papers/renorm/book.pdf

https://www.researchgate.net/publication/349787425_A_strong_Yoccoz_inequality_for_near-parabolic_quadratic_polynomials

Sectorial decomposition of nonelementary singularities edit

http://www.wisdom.weizmann.ac.il/~yakov/thebook1.pdf "Since the boundary rays of the sector are leaves of the standard foliations, standard sectors of different types in the quadrant can be inscribed in sectors of arbitrary positive opening and attached to each other in a cyclical order to produce an oriented foliation on (R^2, 0) \ {0} (this construction can be implemented in any smoothness, though in general not in the real analytic category). If a nonelementary singularity of a planar vector field is topologically equivalent to a foliation obtained by such a surgery, so that the boundaries of the sectors are characteristic trajectories, we say that the singularity admits sectorial decomposition"

Vector field edit

description

  • time independent = stationary = Steady
  • 2D


Plot types (Visualization Techniques for Flow Data):[37]

  • arrow plot = quiver plot
  • Glyphs = Icons or signs for visualizing vector fields
    • simplest glyph = Line segment (hedgehog plots)
  • Characteristic Lines [38]
    • streamlines = curves everywhere tangential to the instantaneous vector (velocity) field (time-independent vector field)
    • Path lines = Extensions of stream lines for time-varying data. Trace of a single particle in continuous time
    • streak lines [39]
  • texture (line integral convolution = LIC)[40]
  • Topological skeleton [41]
    • fixed point extraction ( Jacobian)


"Having several nearby streamlines gives a better "global" understanding of the field: [42]

  • A rake is a regular collection of starting points, usually on a line.
  • A stream ribbon is a surface attaching two nearby streamlines.
  • A stream tube follows the streamlines of a "circle" of initial points.
  • Hyperstreamlines are streamlines of elliptical cross-section used with tensor visualization."


gradient edit


gradient flow

gradient vector flow

phase portrait edit

streamlines edit


InkDroplet edit



LIC edit

GPU edit

paraview edit

c++ edit

java edit

js edit

matlab edit

python edit

oregon edit

http://web.engr.oregonstate.edu/~mjb/cs553/Handouts/Vector/vector.2pp.pdf

March 9, 2015  Oregon State University Computer Graphics Vasu Lakshmanan

Line Integral Convolution (LIC) involves taking a white noise image and smearing it in the directions of the flow, in this case, a circular flow:

Mathematically, we create each pixel in the output image by following a streamline from that point (forwards and backwards) and performing a weighted average of all pixels that the streamline touches in the white noise image (a C sketch follows the symbol list below):

 I′(p) = Σ_i w(i) · I(S(i)) / Σ_i w(i)


Where:

  • S(i) is the streamline position “i” pixels away from the starting point,
  • I( ) are the contents of the white noise image,
  • w(i) is the weight used for this pixel,
  • I’( ) is the resulting image.
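
A highly simplified C sketch of this idea (box weights and unit Euler steps instead of a proper streamline integrator; the image and field array layout are assumptions, not the Oregon State code):

#include <math.h>

/* noise is the white-noise image, (vx, vy) the vector field, all of size w x h,
   stored row-major; returns the LIC value for the pixel (x0, y0) */
double lic_pixel(const double *noise, const double *vx, const double *vy,
                 int w, int h, int x0, int y0, int half_len)
{
    double sum  = noise[y0*w + x0];   /* sample the seed pixel once            */
    double wsum = 1.0;
    for (int dir = -1; dir <= 1; dir += 2) {       /* backwards, then forwards */
        double x = x0 + 0.5, y = y0 + 0.5;
        for (int i = 0; i < half_len; i++) {
            int xi = (int)x, yi = (int)y;
            double len = hypot(vx[yi*w + xi], vy[yi*w + xi]);
            if (len == 0.0) break;                 /* stop at critical points  */
            x += dir * vx[yi*w + xi] / len;        /* one-pixel Euler step     */
            y += dir * vy[yi*w + xi] / len;
            xi = (int)x; yi = (int)y;
            if (xi < 0 || xi >= w || yi < 0 || yi >= h) break;
            sum  += noise[yi*w + xi];              /* w(i) = 1 : box filter    */
            wsum += 1.0;
        }
    }
    return sum / wsum;   /* weighted average of the noise along the streamline */
}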


gimp edit

convolution edit

isoline edit

programs edit

ref edit

  1. world of special functions
  2. "Newbie: How to map colors in the Mandelbrot set?". www.fractalforums.com. May 2007. Archived from the original on 9 September 2022. Retrieved 11 February 2020.
  3. García, Francisco; Ángel Fernández; Javier Barrallo; Luis Martín. "Coloring Dynamical Systems in the Complex Plane" (PDF). Archived (PDF) from the original on 30 November 2019. Retrieved 21 January 2008. {{cite journal}}: Cite journal requires |journal= (help)
  4. Linas Vepstas. "Renormalizing the Mandelbrot Escape". Archived from the original on 14 February 2020. Retrieved 11 February 2020.
  5. a b Albert Lobo. "Interior and exterior distance bounds for the Mandelbrot set". Archived from the original on 9 September 2022. Retrieved 29 April 2021.
  6. Wilson, Dr. Lindsay Robert (2012). "Distance estimation method for drawing Mandelbrot and Julia sets" (PDF). Archived (PDF) from the original on 3 May 2021. Retrieved 3 May 2021.
  7. Chéritat, Arnaud (2016). "Boundary detection methods via distance estimators". Archived from the original on 18 December 2022. Retrieved 2 January 2023.
  8. Christensen, Mikael Hvidtfeldt (2011). "Distance Estimated 3D Fractals (V): The Mandelbulb & Different DE Approximations". Archived from the original on 13 May 2021. Retrieved 10 May 2021.
  9. Dang, Yumei; Louis Kauffman; Daniel Sandin (2002). "Chapter 3.3: The Distance Estimation Formula". Hypercomplex Iterations: Distance Estimation and Higher Dimensional Fractals (PDF). World Scientific. pp. 17–18. Archived (PDF) from the original on 23 March 2021. Retrieved 29 April 2021.
  10. Peitgen, Heinz-Otto; Richter Peter (1986). The Beauty of Fractals. Heidelberg: Springer-Verlag. ISBN 0-387-15851-0.
  11. Peitgen, Heinz-Otto; Saupe Dietmar (1988). The Science of Fractal Images. New York: Springer-Verlag. p. 202. ISBN 0-387-96608-0.
  12. "Mandelbrot Bud Maths". Archived from the original on 14 February 2020. Retrieved 11 February 2020.
  13. Douady, Adrien; Hubbard, John (2009). "Exploring the Mandelbrot set. Exploring the Mandelbrot set. The Orsay Notes". Retrieved 9 April 2023. {{cite journal}}: Cite journal requires |journal= (help)
  14. "Boundary Tracing Method". Archived from the original on 20 February 2015.
  15. Dewdney, A. K. (1989). "Computer Recreations, February 1989; A tour of the Mandelbrot set aboard the Mandelbus". Scientific American: 111. JSTOR 24987149. (subscription required)
  16. http://courses.cecs.anu.edu.au/courses/COMP4300/lectures/embParallel.4u.pdf
  17. http://cseweb.ucsd.edu/groups/csag/html/teaching/cse160s05/lectures/Lecture14.pdf
  18. "Superfractalthing - Arbitrary Precision Mandelbrot Set Rendering in Java". Archived from the original on 30 June 2020. Retrieved 11 February 2020.
  19. K. I. Martin. "Superfractalthing Maths" (PDF). Archived from the original (PDF) on 28 June 2014. Retrieved 11 February 2020. {{cite journal}}: Cite journal requires |journal= (help)
  20. "Kalles Fraktaler 2". Archived from the original on 24 February 2020. Retrieved 11 February 2020.
  21. Online tuning in the Mandelbrot set by Claude Heiland-Allen
  22. 2013-10-02 islands in the hairs by Claude Heiland-Allen
  23. Damelin & Miller 2011, p. 219
  24. Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (1989). Numerical Recipes in Pascal. Cambridge University Press. p. 450. ISBN 0-521-37516-9.
  25. convolution by Song Ho Ahn
  26. CS1114 Section 6: Convolution, February 27th, 2013
  27. Book : Fractals for the Classroom: Part Two: Complex Systems and Mandelbrot Set, page 461, by Heinz-Otto Peitgen, Hartmut Jürgens, Dietmar Saupe
  28. Fractal Geometry from Yale University by Michael Frame, Benoit Mandelbrot (1924-2010), and Nial Neger, February 2, 2013
  29. Terminal Point by Robert P. Munafo, 2008 Mar 9.
  30. mathoverflow question : Is there a way to find regions of depth in the Mandelbrot set other than simply poking around?
  31. Branch Point by Robert P. Munafo, 1997 Nov 19.
  32. Symbolic sequences of one-dimensional quadratic map points by G Pastor, Miguel Romera, Fausto Montoya Vitini
  33. The Arithmetics of the Hyperbolic Plane Talk 1: Hyperbolic Geometry by Valentino Delle Rose
  34. computer graphics stack exchange question: difference-between-transformation-and-projection
  35. math.stackexchange question: entire-function-with-image-contained-in-slit-plane-is-constant
  36. Classification and visualisation of critical points in 3d vector fields. Master thesis by Furuheim and Aasen
  37. Flow Visualisation from TUV
  38. Data visualisation by Tomáš Fabián
  39. A Streakline Representation of Flow in Crowded Scenes from UCF
  40. lic by Zhanping Liu
  41. Vector Field Topology in Flow Analysis and Visualization by Guoning Chen
  42. Visualizing Vector Fields by Steven Bellenot