Fractal software





  • floatexp with single precision
  • FloatExp with double precision
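The floatexp types above pair a mantissa with a separate wide integer exponent, so values far outside the float/double exponent range stay representable. A minimal sketch of the idea (my own toy, not KF's actual implementation; names are hypothetical):

```python
import math

class FloatExp:
    """Toy wide-exponent number: value = m * 2**e, with m kept in [0.5, 1)."""

    def __init__(self, val: float = 0.0, exp: int = 0):
        if val == 0.0:
            self.m, self.e = 0.0, 0
        else:
            m, e = math.frexp(val)      # exact split: val == m * 2**e
            self.m, self.e = m, e + exp

    def __mul__(self, other: "FloatExp") -> "FloatExp":
        out = FloatExp()
        if self.m != 0.0 and other.m != 0.0:
            m, e = math.frexp(self.m * other.m)   # renormalize the mantissa
            out.m, out.e = m, e + self.e + other.e
        return out

    def to_float(self) -> float:
        # Collapse to a plain double; under/overflows if e is out of range.
        return math.ldexp(self.m, self.e)

# 1.5 * 2**-2000 squared is 2.25 * 2**-4000: far below double range,
# but exact in this representation.
sq = FloatExp(1.5, -2000) * FloatExp(1.5, -2000)
```

The product of two normalized mantissas always lands in (0.25, 1), so one renormalization per multiply suffices and the double can never under- or overflow mid-operation.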


  • Left mouse button
    • Drag - Pan view
    • Double click - Find nearby feature and open Feature Finder dialog
  • Mouse scroll wheel - Zoom
  • A - Increase color density
  • S - Decrease color density
  • D - Increase iteration limit
  • F - Decrease iteration limit
  • E - Cycle color forward
  • R - Cycle color backward

Nanoscope by pauldelbrot

Extra exponent range is needed on a few specific iterations when zooming past e300 and doing flybys past minibrots deeper than e300.
Nanoscope stores an array similar to KF's: 3x double per iteration, with re, im, and a magnitude value used for glitch detection. It also stores a pointer that is either null or points to a wide-exponent copy of those three values. If the double-precision values underflow (to denormal or zero) during reference orbit computation, that copy is set for the iteration; otherwise the pointer stays null. During iteration, if the pointer is non-null, the next iteration is done using 52-bit-mantissa wide-exponent calculations (significantly slower, but only for that one iteration).
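A hedged sketch of that per-iteration layout (field and function names are mine, not Nanoscope's; in the real code the wide-exponent copy would come from the high-precision reference computation rather than from the already-underflowed double):

```python
import math
import sys
from dataclasses import dataclass
from typing import Optional, Tuple

MIN_NORMAL = sys.float_info.min     # smallest normal double, ~2.2e-308

@dataclass
class RefEntry:
    re: float
    im: float
    mag: float                      # magnitude value used for glitch detection
    wide: Optional[Tuple[float, int, float, int]] = None
    # (re mantissa, re exp2, im mantissa, im exp2), or None when doubles suffice

def make_entry(re: float, im: float) -> RefEntry:
    entry = RefEntry(re, im, re * re + im * im)
    if abs(re) < MIN_NORMAL or abs(im) < MIN_NORMAL:
        # A component underflowed (denormal or zero): flag this iteration by
        # attaching a wide-exponent copy, so the next perturbation step can be
        # redone with 52-bit-mantissa wide-exponent arithmetic.
        entry.wide = (*math.frexp(re), *math.frexp(im))
    return entry
```

During iteration the pointer test is just `entry.wide is not None`; only the flagged iterations pay for the slower arithmetic.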
The circumstances that trigger this look like the following. Say there's a period-73174 mini at about e380. If the zoom path passes very close to that mini, then for images near and deeper than it, every 73174th reference orbit iteration comes within roughly 1e-380 of zero, which underflows in a double. So wide-exponent calculations must be done on those iterations, and those reference orbit entries must be stored with a wide exponent.
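The arithmetic checks out: the smallest positive (denormal) double is about 4.9e-324, so a value of order 1e-380 flushes to exactly zero. A quick verification:

```python
import sys

# Smallest normal double and smallest denormal double:
min_normal = sys.float_info.min                    # ~2.225e-308
min_denorm = min_normal * sys.float_info.epsilon   # ~4.94e-324

# Anything of order 1e-380 is far below even the denormal range,
# so it underflows to exactly 0.0 in a double:
print(1e-380 == 0.0)    # True: the literal itself flushes to zero
```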
For every other iteration below e300, Nanoscope does the calculations with rescaled values that don't need a wide exponent. The rescaling is changed every 500 or so iterations. This works because, until a point is on the verge of escape, its orbit dynamics are dominated by the repelling points of the nearby Julia set, which typically means its magnitude doubles each iteration. The exponent field of a double is 11 bits, covering powers of 2 (not 10) from about -1000 to about 1000, so it easily accommodates 500 doublings with plenty of margin for error. Rescaling is also redone after every iteration that needed a wide-exponent reference point, because on such iterations the orbit has jumped much closer to 0 again.
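The rescaling trick can be sketched in a toy model (my own code, under the text's assumption that the magnitude roughly doubles each iteration): keep the working value as a double times a shared power of two, and fold the accumulated exponent out of the double once per block of iterations.

```python
import math

RESCALE_EVERY = 500   # matches the text's "every 500 or so iterations"

def doubling_orbit(start: float, steps: int):
    """Toy model: magnitude doubles each step. Track the value as
    scaled * 2**scale_exp so the double never under- or overflows."""
    scaled, scale_exp = math.frexp(start)
    for i in range(1, steps + 1):
        scaled *= 2.0                       # one "iteration"
        if i % RESCALE_EVERY == 0:
            m, e = math.frexp(scaled)       # pull the exponent out of the double
            scaled, scale_exp = m, scale_exp + e
    m, e = math.frexp(scaled)
    return m, scale_exp + e                 # value = m * 2**(returned exponent)

# 2000 doublings starting from 1e-300: a plain double would overflow
# long before the end, but the scaled form stays comfortably in range.
m, e = doubling_orbit(1e-300, 2000)
```

Within each 500-step block the scaled double only climbs to about 2**500, well inside the roughly (-1000, 1000) power-of-2 budget the text describes.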
On "final approach" an escape-bound critical orbit moves faster, with the magnitude squaring each iteration, but by the time this happens the unscaled magnitude is above the e-300 threshold and Nanoscope has switched to bog-standard perturbation calculations without any rescaling or other sneaky tricks. And escape is usually within the next 500 iterations anyway.[1]
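To see how fast that final approach is once the magnitude starts squaring: from magnitude just above 1, the logarithm of the magnitude doubles each iteration, so escape past radius 2 takes only a handful of steps. A toy count (my own illustration):

```python
def steps_until_escape(mag: float, radius: float = 2.0) -> int:
    """Iterations of mag -> mag**2 until mag exceeds radius (needs mag > 1)."""
    steps = 0
    while mag <= radius:
        mag *= mag
        steps += 1
    return steps

# Even starting barely outside the unit circle, escape is quick:
print(steps_until_escape(1.01))   # 7
```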


time ./nanomb64_i7.exe --kfr ./test_x1.kfr --maxiters 250000 --period 2489  --width 500 --height 500 --orderM 4 --orderN 4 --output_ppm s.ppm --output_kfb s.kfb --force_type 'floatexp'

Period is 2489

nanomb.exe --kfr test_x1.kfr --period 2489 --width 500 --height 500 --orderM 4 --orderN 4 --output_ppm s.ppm --force_type floatexp


  • SuperMB
  • superMB is actually not super at all. It is just a modification of the code Claude posted in the superfractalthing thread. I use it for experimenting. It now has a rudimentary GUI. I've attached the source code and dependencies below.

Knighty's SMB (which I think is still a bit faster than KF, though it is more of a testbed than a usable renderer) puts the glitches into distinct sets (G1, ..., Gn), each set sharing the iteration number at which the glitch was detected. The next references are then one random pixel from each of G1, ..., Gn, each used only to recalculate the pixels in its own set G. Secondary glitches simply generate another set G; you put them in a queue or stack and keep going until no G sets are left. When dealing with glitched pixels you can use the same series expansion as a starting point. (Gerrit)[2]
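That loop might be sketched as follows (a toy outline with my own names, not knighty's actual code): glitched pixels are grouped by the iteration at which the glitch was detected, each group gets one random member as its new reference, and secondary glitches just push more groups onto the work stack.

```python
import random
from collections import defaultdict

def group_by_iteration(glitched):
    """glitched: dict pixel -> iteration at which its glitch was detected."""
    groups = defaultdict(list)
    for px, it in glitched.items():
        groups[it].append(px)
    return list(groups.values())

def correct_glitches(glitched, recompute):
    """recompute(ref, pixels): re-render `pixels` against a new reference at
    `ref`; returns a dict of pixels that glitched again (pixel -> iteration)."""
    work = group_by_iteration(glitched)          # sets G1, ..., Gn
    refs_used = []
    while work:                                  # stack of sets still to fix
        g = work.pop()
        ref = random.choice(g)                   # one random pixel as reference
        refs_used.append(ref)
        again = recompute(ref, g)                # only this set's pixels redone
        work.extend(group_by_iteration(again))   # secondary glitches: new sets
    return refs_used

# Usage with a stub renderer that reports no secondary glitches:
calls = []
def fake_recompute(ref, pixels):
    calls.append((ref, frozenset(pixels)))
    return {}

refs = correct_glitches({(0, 0): 5, (1, 0): 5, (2, 2): 9}, fake_recompute)
```

The stack means secondary glitch sets are handled depth-first, but a queue works just as well, as the quote says; termination only needs each recompute pass to shrink or resolve its set.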

on GitHub



  1. memory-bandwidth-trade-offs-for-perturbation-rendering
  2. how-to-get-second-reference-when-using-perturbation-theory