# Examples and counterexamples in mathematics/Real-valued functions of one real variable

## Polynomials

### Polynomial with infinitely many roots

Consider the zero polynomial $P(x)=0;$  every number is a root of P. This is the only polynomial with infinitely many roots. A non-zero polynomial has some degree n (n may be 0, 1, 2, ...) and cannot have more than n roots, since, by a well-known theorem of algebra, if $P(a_{1})=\dots =P(a_{m})=0$  (for pairwise different $a_{1},\dots ,a_{m}$ ), then necessarily $P(x)=(x-a_{1})\dots (x-a_{m})Q(x)$  for some non-zero polynomial Q of degree $n-m\geq 0.$

### Integer values versus integer coefficients

Every polynomial P with integer coefficients is integer-valued, that is, its value P(k) is an integer for every integer k; but the converse holds only for polynomials of degree at most one. For example, the polynomial $\textstyle P_{2}(x)={\frac {1}{2}}x^{2}-{\frac {1}{2}}x={\frac {1}{2}}x(x-1)$  takes on integer values whenever x is an integer, because one of x and x-1 must be an even number. The values $\textstyle P_{2}(k)={\binom {k}{2}}$  are the binomial coefficients.

More generally, for every n=0,1,2,3,... the polynomial $\textstyle P_{n}(x)={\frac {1}{n!}}x(x-1)\dots (x-n+1)$  is integer-valued; $\textstyle P_{n}(k)={\binom {k}{n}}$  are the binomial coefficients. In fact, every integer-valued polynomial is an integer linear combination of these Pn.
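Readers who like to "check it" by computer can do so with exact rational arithmetic; here is a small Python sketch (the helper name `P` is ours, not from any library) verifying that these $P_n$ take integer values at integer points even though their coefficients are not integers.

```python
from fractions import Fraction
from math import comb, factorial

def P(n, x):
    """P_n(x) = x(x-1)...(x-n+1) / n!, evaluated exactly as a Fraction."""
    prod = Fraction(1)
    for j in range(n):
        prod *= Fraction(x - j)
    return prod / factorial(n)

# Integer arguments give integers (binomial coefficients for x >= n >= 0),
# even though the coefficients of P_n are not integers for n >= 2.
assert all(P(n, k).denominator == 1 for n in range(6) for k in range(-5, 10))
print(P(4, 10), comb(10, 4))  # 210 210
```

Note that the check also covers negative integer arguments, where $P_{n}(k)$ is still an integer although $\binom{k}{n}$ is usually left undefined.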

### Polynomial mimics cosine: interpolation

The cosine function, $f(x)=\cos x,$  satisfies $f(0)=1,$  $f(\pi /6)={\sqrt {3}}/2,$  $f(\pi /4)={\sqrt {2}}/2,$  $f(\pi /3)=1/2,$  $f(\pi /2)=0;$  also $f(-x)=f(x)$  and $f(x+\pi )=-f(x)$  for all x, which gives infinitely many x such that $f(x)$  is one of the numbers $\pm 1,\pm {\sqrt {3}}/2,\pm {\sqrt {2}}/2,\pm 1/2,0;$  that is, infinitely many points on the graph. A polynomial cannot satisfy all these conditions, because $|P(x)|\to \infty$  as $x\to \pm \infty$  for every non-constant polynomial P; can it satisfy a finite portion of them?

Let us try to find P that satisfies the five conditions $P(0)=1,$  $P(\pi /6)={\sqrt {3}}/2,$  $P(\pi /4)={\sqrt {2}}/2,$  $P(\pi /3)=1/2,$  $P(\pi /2)=0.$  For convenience we rescale x letting $\textstyle P(x)=Q{\big (}{\frac {12}{\pi }}x{\big )}$  and rewrite the five conditions in terms of Q: $Q(0)=1,$  $Q(2)={\sqrt {3}}/2,$  $Q(3)={\sqrt {2}}/2,$  $Q(4)=1/2,$  $Q(6)=0.$  In order to find such Q we use Lagrange polynomials.

Using the polynomial $\ell (x)=x(x-2)(x-3)(x-4)(x-6)$  of degree 5 with roots at the given points 0, 2, 3, 4, 6 and taking into account that $\ell '(0)=(-2)\cdot (-3)\cdot (-4)\cdot (-6)=144$  (check it by differentiating the product) we consider the so-called Lagrange basis polynomial $\textstyle {\frac {\ell (x)}{\ell '(0)x}}={\frac {1}{144}}(x-2)(x-3)(x-4)(x-6)$  of degree 4 with roots at 2, 3, 4, 6 (but not 0); the division in the left-hand side is interpreted algebraically as division of polynomials (that is, finding a polynomial whose product by the denominator is the numerator $\ell (x)$ ) rather than division of functions, thus, the quotient is defined for all x, including 0. Its value at 0 is 1. Think, why; see the picture; recall that $\textstyle \ell '(0)=\lim _{x\to 0}{\frac {\ell (x)}{x}}$ .

Similarly, the second Lagrange basis polynomial $\textstyle {\frac {\ell (x)}{\ell '(2)(x-2)}}=-{\frac {1}{16}}x(x-3)(x-4)(x-6)$  takes the values 0, 1, 0, 0, 0 (at 0, 2, 3, 4, 6 respectively). The third one $\textstyle {\frac {\ell (x)}{\ell '(3)(x-3)}}={\frac {1}{9}}x(x-2)(x-4)(x-6)$  takes the values 0, 0, 1, 0, 0. And so on (calculate the fourth and fifth). It remains to combine these five Lagrange basis polynomials with the coefficients equal to the required values of Q:

${\begin{aligned}Q(x)&=\textstyle Q(0){\frac {\ell (x)}{\ell '(0)x}}+Q(2){\frac {\ell (x)}{\ell '(2)(x-2)}}+Q(3){\frac {\ell (x)}{\ell '(3)(x-3)}}+Q(4){\frac {\ell (x)}{\ell '(4)(x-4)}}+Q(6){\frac {\ell (x)}{\ell '(6)(x-6)}}\\&=\textstyle {\frac {1}{144}}(x-2)(x-3)(x-4)(x-6)-{\frac {\sqrt {3}}{2}}\cdot {\frac {1}{16}}x(x-3)(x-4)(x-6)+\\&\qquad +\textstyle {\frac {\sqrt {2}}{2}}\cdot {\frac {1}{9}}x(x-2)(x-4)(x-6)-{\frac {1}{2}}\cdot {\frac {1}{16}}x(x-2)(x-3)(x-6).\end{aligned}}$

Finally, $\textstyle \cos x\approx P(x)=Q{\big (}{\frac {12}{\pi }}x{\big )}.$  As we see on the picture, the two functions are quite close for $0\leq x\leq {\tfrac {\pi }{2}};$  in fact, the greatest $|P(x)-\cos x|$  for these x is about 0.00029.
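This deviation is easy to reproduce numerically; here is a plain-Python sketch (the grid size and function names are our choices, not from the text) that rebuilds Q by Lagrange interpolation and samples the error.

```python
import math

nodes = [0.0, 2.0, 3.0, 4.0, 6.0]
values = [1.0, math.sqrt(3) / 2, math.sqrt(2) / 2, 0.5, 0.0]

def Q(x):
    # Lagrange interpolation through the five points above
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

def P(x):
    return Q(12.0 * x / math.pi)   # undo the rescaling

# maximal deviation from cos on [0, pi/2], sampled on a fine grid
xs = [k * math.pi / 2000 for k in range(1001)]
err = max(abs(P(x) - math.cos(x)) for x in xs)
print(err)  # close to 0.00029, as stated in the text
```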

A better approximation can be obtained via derivative. The derivative of $f(x)=\cos x$  being $f'(x)=-\sin x,$  we have $f'(0)=0,$  $f'(\pi /6)=-1/2,$  $f'(\pi /4)=-{\sqrt {2}}/2,$  $f'(\pi /3)=-{\sqrt {3}}/2,$  $f'(\pi /2)=-1.$  The corresponding derivatives $P'(x)$  are close but different; for instance,

${\begin{aligned}P'{\big (}{\tfrac {\pi }{2}}{\big )}&={\tfrac {12}{\pi }}Q'(6)={\tfrac {12}{\pi }}\cdot {\big (}{\tfrac {1}{144}}\cdot 4\cdot 3\cdot 2-{\tfrac {\sqrt {3}}{2}}\cdot {\tfrac {1}{16}}\cdot 6\cdot 3\cdot 2+{\tfrac {\sqrt {2}}{2}}\cdot {\tfrac {1}{9}}\cdot 6\cdot 4\cdot 2-\\&\quad -{\tfrac {1}{2}}\cdot {\tfrac {1}{16}}\cdot 6\cdot 4\cdot 3{\big )}={\tfrac {2}{\pi }}-{\tfrac {27{\sqrt {3}}}{2\pi }}+{\tfrac {32{\sqrt {2}}}{\pi }}-{\tfrac {27}{\pi }}\approx -0.9956\neq -1.\end{aligned}}$
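This value can be confirmed with a central difference; a sketch assuming the same Lagrange interpolant as above (the step `h` and the names are our choices).

```python
import math

nodes = [0.0, 2.0, 3.0, 4.0, 6.0]
values = [1.0, math.sqrt(3) / 2, math.sqrt(2) / 2, 0.5, 0.0]

def Q(x):
    # Lagrange interpolation through the five points above
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        basis = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

def P(x):
    return Q(12.0 * x / math.pi)

# central-difference approximation of P'(pi/2)
h = 1e-6
dP = (P(math.pi / 2 + h) - P(math.pi / 2 - h)) / (2 * h)
print(dP)  # about -0.9956, close to but not equal to f'(pi/2) = -1
```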

In order to fix the derivative without spoiling the values we replace $Q(x)$  with $Q(x)+\ell (x)R(x)$  where $R(x)$  is a polynomial of degree 4 such that the derivative of $Q{\big (}{\tfrac {12}{\pi }}x{\big )}+\ell {\big (}{\tfrac {12}{\pi }}x{\big )}R{\big (}{\tfrac {12}{\pi }}x{\big )}$  is equal to $f'(x)$  for $x=0,{\tfrac {\pi }{6}},{\tfrac {\pi }{4}},{\tfrac {\pi }{3}},{\tfrac {\pi }{2}};$  it means, ${\tfrac {12}{\pi }}Q'{\big (}{\tfrac {12}{\pi }}x{\big )}+{\tfrac {12}{\pi }}\ell '{\big (}{\tfrac {12}{\pi }}x{\big )}R{\big (}{\tfrac {12}{\pi }}x{\big )}=f'(x),$  since $\ell {\big (}{\tfrac {12}{\pi }}x{\big )}=0$  for these x; so,

$R(x)={\frac {{\frac {\pi }{12}}f'{\big (}{\frac {\pi }{12}}x{\big )}-Q'(x)}{\ell '(x)}}$  for $x=0,2,3,4,6.$

We find such R as before:

$R(x)=\textstyle R(0){\frac {\ell (x)}{\ell '(0)x}}+R(2){\frac {\ell (x)}{\ell '(2)(x-2)}}+R(3){\frac {\ell (x)}{\ell '(3)(x-3)}}+R(4){\frac {\ell (x)}{\ell '(4)(x-4)}}+R(6){\frac {\ell (x)}{\ell '(6)(x-6)}},$

and get a better approximation $\textstyle P_{2}(x)=P(x)+\ell {\big (}{\frac {12}{\pi }}x{\big )}R{\big (}{\frac {12}{\pi }}x{\big )};$  in fact, the greatest $|P_{2}(x)-\cos x|$  for $0\leq x\leq {\tfrac {\pi }{2}}$  is much smaller than before. If you still want a smaller error, try the second derivative and $Q(x)+\ell (x)R(x)+\ell ^{2}(x)S(x).$

### Polynomial mimics cosine: roots

The cosine function, $f(x)=\cos x,$  satisfies $f(0)=1$  and has infinitely many roots: $f(\pm 0.5\pi )=f(\pm 1.5\pi )=f(\pm 2.5\pi )=\dots =0.$  A polynomial cannot satisfy all these conditions; can it satisfy a finite portion of them?

It is easy to find a polynomial P such that $P(0)=1$  and $P(\pm 0.5\pi )=0,$  namely $P_{1}(x)=-{\frac {4}{\pi ^{2}}}(x^{2}-0.25\pi ^{2})$  (check it). What about $P(0)=1$  and $P(\pm 0.5\pi )=P(\pm 1.5\pi )=0\,?$

The conditions being insensitive to the sign of x, we seek a polynomial of $x^{2},$  that is, $P(x)=Q(x^{2})$  where Q satisfies $Q(0)=1$  and $Q(0.25\pi ^{2})=Q(2.25\pi ^{2})=0.$  It is easy to find such Q, namely, $Q(x)={\frac {16}{9\pi ^{4}}}(x-0.25\pi ^{2})(x-2.25\pi ^{2})$  (check it), which leads to

$P_{2}(x)={\frac {16}{9\pi ^{4}}}(x^{2}-0.25\pi ^{2})(x^{2}-2.25\pi ^{2}).$  As we see on the picture, the two functions are rather close for $-0.5\pi \leq x\leq 0.5\pi ;$  in fact, the greatest $|P_{2}(x)-\cos x|$  for these x is about 0.028, while the greatest $|P_{1}(x)-\cos x|$  (for these x) is about 0.056.
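Both error figures are easy to reproduce; a short sketch in plain Python (the grid density is our choice).

```python
import math

def P1(x):
    return -(4 / math.pi ** 2) * (x * x - 0.25 * math.pi ** 2)

def P2(x):
    return (16 / (9 * math.pi ** 4)) * (x * x - 0.25 * math.pi ** 2) \
           * (x * x - 2.25 * math.pi ** 2)

# maximal deviation from cos on [-pi/2, pi/2], sampled on a fine grid
xs = [(-0.5 + k / 1000) * math.pi for k in range(1001)]
e1 = max(abs(P1(x) - math.cos(x)) for x in xs)
e2 = max(abs(P2(x) - math.cos(x)) for x in xs)
print(e1, e2)  # about 0.056 and 0.028, as stated in the text
```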

The next step in this direction: $\textstyle P_{3}(x)=-{\frac {64}{225\pi ^{6}}}(x^{2}-0.25\pi ^{2})(x^{2}-2.25\pi ^{2})(x^{2}-6.25\pi ^{2})$  $\textstyle ={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{25\pi ^{2}}}{\Big )};$  here the greatest $|P_{3}(x)-\cos x|$  for $-0.5\pi \leq x\leq 0.5\pi$  is smaller still.

And so on. For every $n=1,2,\dots$  the polynomial

$P_{n}(x)={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}\dots {\Big (}1-{\frac {4x^{2}}{(2n-1)^{2}\pi ^{2}}}{\Big )}=\prod _{k=1}^{n}{\Big (}1-{\frac {4x^{2}}{(2k-1)^{2}\pi ^{2}}}{\Big )}$

satisfies $P_{n}(0)=1$  and $P_{n}(\pm 0.5\pi )=P_{n}(\pm 1.5\pi )=\dots =P_{n}(\pm (n-0.5)\pi )=0,$  which is easy to check. It is harder (but possible) to prove that $P_{n}(x)\to \cos x$  as $n\to \infty ,$  which represents the cosine as an infinite product

$\cos x={\Big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\Big )}{\Big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\Big )}\dots =\prod _{k=1}^{\infty }{\Big (}1-{\frac {4x^{2}}{(2k-1)^{2}\pi ^{2}}}{\Big )}.$
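The convergence of the partial products can be observed numerically, and it is slow; a sketch (the truncation points are our choice).

```python
import math

def P(n, x):
    # partial product with n factors of the infinite product for cos
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1 - 4 * x * x / ((2 * k - 1) ** 2 * math.pi ** 2)
    return prod

x = 1.0
for n in (10, 100, 1000):
    print(n, P(n, x), abs(P(n, x) - math.cos(x)))
# the error decays roughly like 1/n -- convergence is slow
```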

On the other hand, the well-known power series $\cos x=1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\cdots =\sum _{k=0}^{\infty }{\frac {(-1)^{k}x^{2k}}{(2k)!}}$  gives another sequence of polynomials $Q_{n}(x)=1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\dots +{\frac {(-1)^{n}x^{2n}}{(2n)!}}=\sum _{k=0}^{n}{\frac {(-1)^{k}x^{2k}}{(2k)!}}$  converging to the same cosine function. See the picture for $Q_{3}$; the greatest $|Q_{3}(x)-\cos x|$  for $-0.5\pi \leq x\leq 0.5\pi$  is less than 0.001 (by the alternating series bound, it is at most $(\pi /2)^{8}/8!$ ).

Can we check the equality $\textstyle {\big (}1-{\frac {4x^{2}}{\pi ^{2}}}{\big )}{\big (}1-{\frac {4x^{2}}{9\pi ^{2}}}{\big )}\dots =1-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-\cdots$  by opening the brackets? Let us try. The constant coefficient: just 1=1. The coefficient of $x^{2}$: $\textstyle -{\frac {4}{\pi ^{2}}}-{\frac {4}{9\pi ^{2}}}-\dots =-{\frac {1}{2}},$  that is, $\textstyle 1+{\frac {1}{3^{2}}}+{\frac {1}{5^{2}}}+\dots ={\frac {\pi ^{2}}{8}};$  really? Yes, $\textstyle 1+{\frac {1}{3^{2}}}+{\frac {1}{5^{2}}}+\dots =\textstyle (1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )-({\frac {1}{2^{2}}}+{\frac {1}{4^{2}}}+{\frac {1}{6^{2}}}+\dots )=(1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )-{\frac {1}{2^{2}}}(1+{\frac {1}{2^{2}}}+{\frac {1}{3^{2}}}+\dots )={\frac {3}{4}}\cdot {\frac {\pi ^{2}}{6}}={\frac {\pi ^{2}}{8}};$  the well-known series of reciprocal squares is instrumental.
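The series of odd reciprocal squares is easy to check numerically; a one-line sketch.

```python
import math

# partial sum of 1 + 1/3^2 + 1/5^2 + ... versus pi^2/8
s = sum(1 / (2 * k - 1) ** 2 for k in range(1, 100001))
print(s, math.pi ** 2 / 8)  # both approximately 1.2337
```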

Such non-rigorous opening of brackets can be made rigorous as follows. For every polynomial P, the constant coefficient is the value of P at zero, $P(0);$  the coefficient of $x$  is the value at zero of the derivative, $P'(0);$  and the coefficient of $x^{2}$  is one half of the value at zero of the second derivative, ${\tfrac {1}{2}}P''(0).$  Clearly, $Q_{n}(0)=1=f(0),$  $Q'_{n}(0)=0=f'(0)$  and $Q''_{n}(0)=-1=f''(0)$  for all $n\geq 1$  (as before, $f(x)=\cos x$ ). The calculation above shows that $P''_{n}(0)\to -1=f''(0)$  as n tends to infinity. What about the higher derivatives $P_{n}^{(k)}(0);$  do they converge to $f^{(k)}(0)\,$ ? It is tedious (if at all possible) to generalize the above calculation to k=4,6,...; fortunately, there is a better approach. Namely, $P_{n}(z)\to f(z)$  for all complex numbers z, and moreover, $\max _{|z|\leq R}|P_{n}(z)-f(z)|\to 0$  for every R>0. Using Cauchy's differentiation formula one concludes that $\max _{|z|\leq R}|P_{n}^{(k)}(z)-f^{(k)}(z)|\to 0$  (as $n\to \infty$ ) for each k, and in particular, $P_{n}^{(k)}(0)\to f^{(k)}(0).$

### Limit of derivatives versus derivative of limit

$P_{n}(x)=x(1-x^{2})^{n}\,.$

For $-1\leq x\leq 1$  we have $P_{n}(x)\to 0$  (think, why) as $n\to \infty .$  Nevertheless, the derivative at zero does not converge to 0; rather, it is equal to 1 (for all n) since, denoting $Q_{n}(x)=(1-x^{2})^{n},$  we have $P'_{n}(x)=(xQ_{n}(x))'=1\cdot Q_{n}(x)+x\cdot Q'_{n}(x);$  $P'_{n}(0)=Q_{n}(0)=1.$

Thus, the limit of the sequence of functions $(P_{1},P_{2},\dots )$  on the interval $[-1,1]$  is the zero function, hence the derivative of the limit is the zero function as well. However, the limit of the sequence of derivatives $(P'_{1},P'_{2},\dots )$  fails to be zero at least for $x=0.$  What happens for $0<|x|\leq 1\,$ ? Here the limit of derivatives is zero, since $P'_{n}(x)={\big (}1-(2n+1)x^{2}{\big )}(1-x^{2})^{n-1}\to 0$  (check it; the exponential decay of the second factor outweighs the linear growth of the first factor). Thus,

$0={\big (}\lim _{n\to \infty }P_{n}(x){\big )}'\neq \lim _{n\to \infty }P'_{n}(x)=f(x)={\begin{cases}0&{\text{for }}-1\leq x<0,\\1&{\text{for }}x=0,\\0&{\text{for }}0<x\leq 1.\end{cases}}$

It is not always possible to interchange derivative and limit.
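These pointwise limits can be checked directly; a small sketch (the sample points are our choice).

```python
def P(n, x):
    return x * (1 - x * x) ** n

def dP(n, x):
    # the derivative, in the closed form given in the text
    return (1 - (2 * n + 1) * x * x) * (1 - x * x) ** (n - 1)

n = 1000
print(P(n, 0.0), dP(n, 0.0))   # 0.0 and 1.0: the derivative at 0 stays 1
print(P(n, 0.5), dP(n, 0.5))   # both essentially 0 for x away from 0
```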

Note the two equivalent definitions of the function f; one is piecewise (something for some x, something else for other x, ...), but the other is a single expression $f(x)=\lim _{n\to \infty }{\big (}1-(2n+1)x^{2}{\big )}(1-x^{2})^{n-1}$  for all these x.

The function f is discontinuous (at 0), and nevertheless it is the limit of continuous functions $P'_{n}.$  This can happen for pointwise convergence (that is, convergence at every point of the considered domain), since the speed of convergence can depend on the point.

Otherwise (when the speed of convergence does not depend on the point) convergence is called uniform; by the uniform convergence theorem, the uniform limit of continuous functions is a continuous function.

It follows that convergence of $P'_{n}$  (to f) is non-uniform, but this is a proof by contradiction.

A better understanding may be gained from a direct proof. The derivatives $P'_{n}$  fail to converge uniformly, since $P'_{n}(x)$  fails to be small (when n is large) for some x close to 0; for instance, try $\textstyle x={\sqrt {\frac {c}{n-1}}}\,:$

$P'_{n}{\Big (}{\sqrt {\tfrac {c}{n-1}}}{\Big )}={\Big (}1-{\tfrac {(2n+1)c}{n-1}}{\Big )}{\Big (}1-{\tfrac {c}{n-1}}{\Big )}^{n-1}\to (1-2c)e^{-c}\quad {\text{as }}n\to \infty$

for all $c>0;$  and $(1-2c)e^{-c}$  is not zero (unless $c=0.5$ ).

In contrast, $P_{n}\to 0$  uniformly on $[-1,1],$  that is, $\max _{-1\leq x\leq 1}|P_{n}(x)|\to 0$  as $n\to \infty ,$  since the maximum is reached at $\textstyle x=\pm {\frac {1}{\sqrt {2n+1}}}$  (check it by solving the equation $P'_{n}(x)=0$ ) and $\textstyle 0\leq P_{n}{\big (}{\frac {1}{\sqrt {2n+1}}}{\big )}\leq {\frac {1}{\sqrt {2n+1}}}\to 0.$  And still, it appears to be impossible to interchange derivative and limit. Compare this case with a well-known theorem:

If $(f_{n})$  is a sequence of differentiable functions on $[a,b]$  such that $\lim _{n\to \infty }f_{n}(x_{0})$  exists (and is finite) for some $x_{0}\in [a,b]$  and the sequence $(f'_{n})$  converges uniformly on $[a,b]$ , then $f_{n}$  converges uniformly to a function $f$  on $[a,b]$ , and $f'(x)=\lim _{n\to \infty }f'_{n}(x)$  for $x\in [a,b]$ .

Uniform convergence of derivatives $f'_{n}$  is required; uniform convergence of functions $f_{n}$  is not enough.
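Returning to the uniform convergence of $P_n$ itself: the maximum at $x=\pm 1/{\sqrt {2n+1}}$ and its decay are easy to confirm numerically; a sketch (the grid size is our choice).

```python
import math

def P(n, x):
    return x * (1 - x * x) ** n

for n in (10, 100, 1000):
    xm = 1 / math.sqrt(2 * n + 1)   # critical point from P'_n(x) = 0
    grid_max = max(abs(P(n, k / 10000)) for k in range(10001))
    print(n, P(n, xm), grid_max, 1 / math.sqrt(2 * n + 1))
# the maximum is attained at x = 1/sqrt(2n+1), stays below 1/sqrt(2n+1),
# and tends to 0: the convergence of P_n to 0 is uniform on [-1, 1]
```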

Complex numbers, helpful in Sect. "Polynomial mimics cosine: roots", are helpless here, since for $z=\pm ci$  we have $|P_{n}(z)|=c(1+c^{2})^{n}\to \infty$  for all $c>0.$

## Monster functions

### Continuous monster

Everyone knows how to visualize the behavior of a continuous function by a curve on the coordinate plane. To this end one samples enough points ${\big (}x,f(x){\big )}$  on the graph of the function, plots them on the $(x,y)$  plane and, connecting these points by a curve, sketches the graph. However, this seemingly uncontroversial idea is challenged by some strange functions, sometimes called "continuous monsters". Often they are sums of so-called lacunary trigonometric series

$f(x)=a_{0}+\sum \nolimits _{n=1}^{\infty }{\big (}a_{n}\sin(\lambda _{n}x)+b_{n}\cos(\lambda _{n}x){\big )}\qquad$  (where the numbers $\lambda _{n}\to \infty$  are far apart).

The most famous of these is the Weierstrass function

$f(x)=\sum \nolimits _{n=0}^{\infty }a^{n}\cos(b^{n}\pi x)\qquad$  (for appropriate $a,b$  such that ${\tfrac {1}{b}}\leq a<1$ ).

According to Jarnicki and Pflug (Section 3.1), this is $C_{a,b}{\big (}{\tfrac {1}{2}}x{\big )}$  (and the similar series of sine functions is $S_{a,b}$ ).

This image (reproduced by Michael McLaughlin with reference to the Wikipedia article on the Weierstrass function, and by Richard Lipton without reference) is in fact a graph of the approximation $f_{7}(x)=\sum \nolimits _{n=0}^{7}{\tfrac {1}{2^{n}}}\cos(3^{n}\pi x)$  to the function $f(x)=C_{0.5,3}{\big (}{\tfrac {1}{2}}x{\big )}$  obtained by connecting $\approx 3100$  sample points (which can be seen by inspecting the XML code in the SVG file). It looks like a curve, albeit a rather strange one. The last two summands are distorted, since the step size $\approx {\tfrac {4}{3100}}\approx 0.0013$  exceeds the period ${\tfrac {2}{3^{7}}}\approx 0.0009$  of $\cos(3^{7}\pi x)$  and is close to half the period ${\tfrac {2}{3^{6}}}\approx 0.0027$  of $\cos(3^{6}\pi x).$  However, all terms with $n>5$  contribute at most $2^{-6}+2^{-7}+2^{-8}+\dots =2^{-5}={\tfrac {1}{32}},$  that is, $|f(x)-f_{5}(x)|\leq {\tfrac {1}{32}}$  for all $x;$  this difference is barely visible unless one zooms the image.

So far, so good. But this monster is not the worst. In order to get a more monstrous lacunary trigonometric series one may try frequencies $\lambda _{n}$  increasing faster than $3^{n},$  or coefficients decreasing slower than $1/2^{n},$  or both.

Let us try the series

$g(x)=\sum \nolimits _{n=0}^{\infty }{\tfrac {1}{(n+1)(n+2)}}\cos(3^{n}\cdot 2\pi x)={\tfrac {1}{2}}\cos 2\pi x+{\tfrac {1}{6}}\cos 6\pi x+{\tfrac {1}{12}}\cos 18\pi x+\dots \,;$

these coefficients ${\tfrac {1}{(n+1)(n+2)}}={\tfrac {1}{n+1}}-{\tfrac {1}{n+2}}$  are convenient, since their sum is (finite and) evident: $\sum \nolimits _{n=0}^{\infty }{\tfrac {1}{(n+1)(n+2)}}={\tfrac {1}{2}}+{\tfrac {1}{6}}+{\tfrac {1}{12}}+\dots =(1-{\tfrac {1}{2}})+({\tfrac {1}{2}}-{\tfrac {1}{3}})+({\tfrac {1}{3}}-{\tfrac {1}{4}})+\dots =1.\;$  Accordingly, $\,g(0)=1\,$  (since $\cos 0=1$ ), and $\,-1\leq g(x)\leq 1\,$  for all $x$  (since $-1\leq \cos x\leq 1$ ).

Partial sums
$g_{0}(x)$
$g_{1}(x)$
$g_{2}(x)$

Similarly, $\,\sum \nolimits _{n=0}^{N}{\tfrac {1}{(n+1)(n+2)}}=1-{\tfrac {1}{N+2}}\,$  and $\,\sum \nolimits _{n=N+1}^{\infty }{\tfrac {1}{(n+1)(n+2)}}={\tfrac {1}{N+2}}.\,$  Accordingly, partial sums $\;g_{N}(x)=\sum \nolimits _{n=0}^{N}{\tfrac {1}{(n+1)(n+2)}}\cos(3^{n}\cdot 2\pi x)$  and tails $\;g(x)-g_{N}(x)=\sum \nolimits _{n=N+1}^{\infty }{\tfrac {1}{(n+1)(n+2)}}\cos(3^{n}\cdot 2\pi x)\,$  satisfy $\,g_{N}(0)=1-{\tfrac {1}{N+2}},\,$  $\,g(0)-g_{N}(0)={\tfrac {1}{N+2}}\,$  and $\,|g_{N}(x)|\leq 1-{\tfrac {1}{N+2}},\,$  $\,|g(x)-g_{N}(x)|\leq {\tfrac {1}{N+2}}\,$  for all $x.$
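These identities for the partial sums are immediate to verify numerically; a sketch (the function name `gN` is our own).

```python
import math

def gN(N, x):
    # partial sum of the lacunary series for g
    return sum(math.cos(3 ** n * 2 * math.pi * x) / ((n + 1) * (n + 2))
               for n in range(N + 1))

for N in (0, 1, 8):
    print(N, gN(N, 0.0), 1 - 1 / (N + 2))   # the two columns agree
```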

Due to evident symmetries (see the graphs of $g_{0},g_{1},g_{2}$  on the period $[0,1]$ ) it is enough to plot this function on $[0,\,1/4].$

Partial sum $g_{8}(x),$  zoom 4

Looking closely at the graph of $g_{8}$  we come to some doubt. At first glance, it is drawn with a thicker pen. But no: some (almost?) vertical lines are thin. So what do we see here, a curve, or rather the area between two "parallel" curves?

$g_{8}(x)$  and $g_{10}(x),$  zoom 40
$g_{10}(x)$  and $g_{12}(x),$  zoom 400

The graph of $g_{8}$  becomes clearly visible after a zoom, but the doubt returns, hardened, with $g_{10}.$  One more zoom only makes things worse. We realize that the graph of $g_{8}$  on $[0,\,1/4]$  looks too nice. In particular, it appears that the graph of $g$  crosses the upper side of the red box. So, how can we get closer to the graph of $g(x)?$

Fortunately, for some special values of $x$  the exact value of $g(x)$  is evident. First, $g(0)=1$  (see above), whence $g(k)=1$  for all integers $k$  due to periodicity: $g(x+1)=g(x)$  for all $x.$  Similarly, $g{\big (}k+{\tfrac {1}{2}}{\big )}=g{\big (}{\tfrac {1}{2}}{\big )}=-1,$  since $\cos k\pi =-1$  for all odd integers $k$  (in particular, $k=3^{n}$ ). And on the other hand, $-1\leq g(x)\leq 1$  for all $x.$

Without the first summand we have the first tail $g(x)-g_{0}(x),$  with the period ${\tfrac {1}{3}}.$  Accordingly, $g{\big (}{\tfrac {1}{3}}k{\big )}-g_{0}{\big (}{\tfrac {1}{3}}k{\big )}=g(0)-g_{0}(0)={\tfrac {1}{2}}$  and $g{\big (}{\tfrac {1}{3}}(k+{\tfrac {1}{2}}){\big )}-g_{0}{\big (}{\tfrac {1}{3}}(k+{\tfrac {1}{2}}){\big )}=g{\big (}{\tfrac {1}{6}}{\big )}-g_{0}{\big (}{\tfrac {1}{6}}{\big )}=-{\tfrac {1}{2}}$  for all integers $k.$  And on the other hand, $-{\tfrac {1}{2}}\leq g(x)-g_{0}(x)\leq {\tfrac {1}{2}}$  for all $x.$  Thus,

Special values and bounds of $g(x)$
via $g_{0}(x)$
via $g_{1}(x)$
$g{\big (}{\tfrac {1}{3}}k{\big )}=g_{0}{\big (}{\tfrac {1}{3}}k{\big )}+{\tfrac {1}{2}},\quad$  $g{\big (}{\tfrac {1}{3}}(k+{\tfrac {1}{2}}){\big )}=g_{0}{\big (}{\tfrac {1}{3}}(k+{\tfrac {1}{2}}){\big )}-{\tfrac {1}{2}}\quad$  for all integers $k,$
$g_{0}(x)-{\tfrac {1}{2}}\leq g(x)\leq g_{0}(x)+{\tfrac {1}{2}}\quad$  for all $x.$

Similarly, for all $N,$

$g{\big (}{\tfrac {1}{3^{N+1}}}k{\big )}=g_{N}{\big (}{\tfrac {1}{3^{N+1}}}k{\big )}+{\tfrac {1}{N+2}},\quad$  $g{\big (}{\tfrac {1}{3^{N+1}}}(k+{\tfrac {1}{2}}){\big )}=g_{N}{\big (}{\tfrac {1}{3^{N+1}}}(k+{\tfrac {1}{2}}){\big )}-{\tfrac {1}{N+2}}\quad$  for all integers $k,$
$g_{N}(x)-{\tfrac {1}{N+2}}\leq g(x)\leq g_{N}(x)+{\tfrac {1}{N+2}}\quad$  for all $x.$
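The special values are easy to verify numerically, because at $x=k/3^{N+1}$ every cosine in the tail equals 1 and the tail sums exactly to its coefficients; a sketch (names and the cut-off `M` are our choices; `M` is kept moderate because for much larger `M` the argument $3^{M}\cdot 2\pi x$ exhausts double precision and the cosines become meaningless, which is why the text later mentions 300 ternary digits).

```python
import math

def gN(N, x):
    # partial sum of the lacunary series for g
    return sum(math.cos(3 ** n * 2 * math.pi * x) / ((n + 1) * (n + 2))
               for n in range(N + 1))

# at x = k/3^(N+1), the tail terms n = N+1 .. M all have cosine 1,
# so their sum is exactly 1/(N+2) - 1/(M+2)
N, k, M = 2, 5, 12
x = k / 3 ** (N + 1)
tail = gN(M, x) - gN(N, x)
expected = 1 / (N + 2) - 1 / (M + 2)
print(abs(tail - expected))  # essentially zero
```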

Via $g_{10}(x),$  zoom 400; also $g_{12}(x).$

Returning to $g_{10}(x)$  and $g_{12}(x)$  on $[0.11,0.1125],$  we add the special values and bounds via $g_{10}(x)$  and see that $g(x)$  is much further from $g_{12}(x)$  than $g_{12}(x)$  is from $g_{10}(x).$  We have more than 440 points on the red curve, and equally many points on the blue curve.

Graph of $g(x),$  zoom 400.

Thus, if the horizontal size of the picture is less than 440 pixels, then inevitably the graph of $g(x)$  crosses all the pixels between the red curve and the blue curve! Within the given resolution the graph of $g(x)$  does not look like a curve, but rather like the area between two parallel curves.

This, in itself, isn't surprising. For example, a 10 megahertz radio wave modulated by a 100 hertz sound may be described by the function $h(t)=\cos(10^{2}\cdot 2\pi t)\cdot \cos(10^{7}\cdot 2\pi t).$  The graph of $h(t)$  on, say, $[0,0.1]$  does not look like a curve, but rather like the region between the two curves $\pm \cos(10^{2}\cdot 2\pi t).$  And on $[0,10^{-4}]$  it looks like a rectangular region. However, zoom ultimately helps; the graph of $h(t)$  on $[0,10^{-6}]$  (or any other 1 microsecond interval) does look like a curve.

In contrast, for $g(t)$  zooming never helps; it only makes things worse. In fact, it ultimately leads to a graph that looks like a rectangular region. This shows the monstrous nature of $g(x).$  On the other hand, the height of the (nearly) rectangular region converges to zero as the zoom tends to infinity; this shows the continuous nature of $g(x).$

Random sample of points on the graph of $g(x)-g_{10}(x).$

Visualization by a region-like graph leaves much to be desired. Unable to draw a curve-like graph, we can still do more. We can choose at random many values of $x,$  compute the corresponding values $y$  of the given function, and draw the points $(x,y).$  This picture shows a sample of $37\,500$  random points on the graph of the tail $g(x)-g_{10}(x)$  in the first quarter of the period of this tail (which is sufficient due to the evident symmetries, as was noted). Truth be told, the function $g_{310}(x)-g_{10}(x)$  was used here as a satisfactory approximation of $g(x)-g_{10}(x),$  and each $x$  was specified by more than $300$  ternary (that is, base 3) digits in order to compute $\cos(3^{310}\cdot 2\pi x).$

As before, the red line and the blue line are bounds of the given function $g(x)-g_{10}(x)$  (via $g_{15}(x)-g_{10}(x)$ ), and the points on these curves are special values. A wonder: all the $37\,500$  random points are far from these bounds! Usually $50$  points are enough to get a satisfactory picture of the maximum value of a function. But for this monster, $37\,500$  points are too few.

Generally, for the sum of a lacunary trigonometric series, it is quite a challenge to find its maximum and minimum (on a given interval) even approximately, say, with relative error less than 10%. Our choice of frequencies $\lambda _{n}=3^{n},$  in concert with our use of cosine rather than sine, allows us to find unusually high values of the sum. In order to fully appreciate this good luck, try to maximize a function such as $\sum \nolimits _{n=0}^{\infty }{\tfrac {1}{(n+1)(n+2)}}\sin(3^{n}\cdot 2\pi x),$  or to minimize $\sum \nolimits _{n=0}^{\infty }{\tfrac {1}{(n+1)(n+2)}}\cos(4^{n}\cdot 2\pi x),$  and you will realize why Weierstrass preferred cosine functions and frequencies of the form $b^{n}$  where $b$  is a positive odd integer. Usual numerical optimization methods fail because local extrema are numerous and very sharp. By continuity, a monster function is close to its maximum in some neighborhood of its maximizer, but this neighborhood is very small; a random point has very little chance of getting into such a neighborhood. The vast majority of values are far from extreme.

Increasing rearrangement of sample points on the graph of $g(x)-g_{10}(x).$

In order to investigate the distribution of values of the tail $g(x)-g_{10}(x)$  on its period, one may take the $37\,500$  values $y$  in the first quarter of the period (used above), together with the $37\,500$  opposite numbers $(-y)$  (these are values in the second quarter), sort the list of all these $75\,000$  numbers in ascending order, and treat the result as values of a new function at equally spaced points. Here is the result plotted; see the red curve. This is a numeric approximation of the so-called monotone (increasing) rearrangement of the given function.
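The experiment can be imitated at small scale; here is a sketch with a much shorter tail ($g_{20}-g_{10}$ instead of $g_{310}-g_{10}$) and a small sample, so that plain double precision suffices; all names and sizes here are our own choices, not those of the original experiment.

```python
import math
import random

random.seed(0)

def tail(x):
    # short surrogate for g - g_10: terms n = 11 .. 20 only
    return sum(math.cos(3 ** n * 2 * math.pi * x) / ((n + 1) * (n + 2))
               for n in range(11, 21))

# sample values at random points and sort them: an empirical
# increasing rearrangement of the function
ys = sorted(tail(random.random()) for _ in range(2000))

bound = 1 / 12 - 1 / 22            # sum of the coefficients used
std = math.sqrt(sum(y * y for y in ys) / len(ys))
print(max(abs(ys[0]), abs(ys[-1])) <= bound)   # True: values stay in bounds
print(std)   # close to sqrt((1/2) * sum of squared coefficients)
```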

Significantly, the result is quite close to the famous normal distribution ${\mathcal {N}}(0,\sigma ^{2})$ , and the red curve is quite close to the corresponding quantile function $\sigma \Phi ^{-1}(p)$  indicated by the black points.

(Here $\textstyle \sigma ^{2}=\int _{0}^{1}{\big (}g_{310}(x)-g_{10}(x){\big )}^{2}\,\mathrm {d} x={\tfrac {1}{2}}\sum _{n=11}^{310}{\big (}{\tfrac {1}{(n+1)(n+2)}}{\big )}^{2}\approx 0.00981^{2}.$ ) A wonder: the function is monstrous, but its monotone rearrangement is nice. Why so?
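Incidentally, the stated value of $\sigma$ is reproduced in one line; a quick sketch.

```python
# sigma^2 = (1/2) * sum over n = 11..310 of (1/((n+1)(n+2)))^2
var = 0.5 * sum((1 / ((n + 1) * (n + 2))) ** 2 for n in range(11, 311))
print(var ** 0.5)  # about 0.00981
```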

The given function is the sum of many summands (of the form $\textstyle {\tfrac {1}{(n+1)(n+2)}}\cos(3^{n}\cdot 2\pi x)$ ); and "The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution."

Choosing a point $x\in [0,1]$  at random according to the uniform distribution we may treat the summands as random variables. However, one of the conditions is independence; and our summands are dependent, moreover, functionally dependent, since $\cos(3\theta )=4\cos ^{3}\theta -3\cos \theta .$  (More generally, $\cos n\theta =T_{n}(\cos \theta ),$  $T_{n}$  being the Chebyshev polynomial.) Nevertheless, a probabilistic approach to lacunary trigonometric series exists and bears fruit; in particular, the appearance of the normal distribution in this context is a well-known phenomenon.

But clearly, the normal approximation must fail somewhere, since our function is bounded (by $1/12$ ), while a normal random variable is not.

The word "central" in the name of the central limit theorem is interpreted in two ways: (1) of central importance in probability theory; (2) describes the center of the distribution as opposed to its tails, that is, large deviations.

Increasing rearrangement of $g(x)-g_{10}(x),$  zoom 40.

In the normal distribution, deviations above ${\text{mean}}+3\sigma$  appear with probability $\approx 0.0013;$  accordingly, for $0\leq x\leq 1$  the inequality $g(x)-g_{10}(x)>3\sigma$  holds on intervals of total length $\approx 0.0013,$  and a random sample of size $75\,000$  should contain about $0.0013\times 75\,000\approx 100$  such values. This is indiscernible on the previous picture, but clearly visible on the zoomed picture.
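The normal-tail count is a standard computation; a quick sketch using `math.erfc`.

```python
import math

# probability that a standard normal variable exceeds 3 sigma
p = 0.5 * math.erfc(3 / math.sqrt(2))
print(p, p * 75000)  # about 0.00135 and about 101 sample points
```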

Increasing rearrangement of $g(x)-g_{10}(x),$  log scale.

For $4\sigma$  the normal probability is only $\approx 0.000032,$  and $0.000032\times 75\,000$  is only $\approx 2.4;$  in order to see what happens for $4\sigma$  and $5\sigma$  we need larger samples and a logarithmic scale. This last picture presents a sample of size $10^{8}=100\,000\,000$  (which took a computer several hours). We see that the distribution is close to normal at $4\sigma$  but moves away from normal at $5\sigma .$  Hopefully, the approximate normality follows from some moderate deviations theory that applies at $4\sigma ,$  while the departure from normality follows from some large deviations theory that applies at $5\sigma$  and further; but for now, in spite of the recent progress in the probabilistic approach to lacunary series, such theories are not available, and the situation near the big red question mark on the picture remains unknown. It is natural to guess that in this domain the probability of a $t\sigma$  deviation is smaller than its normal approximation, therefore smaller than $\exp(-t^{2}/2).$

Assuming that this conjecture is true, and taking into account that $0.9\cdot {\tfrac {1}{12}}>7.6\sigma$  and $\exp(-7.6^{2}/2)<3\cdot 10^{-13},$  we conclude that the function $g(x)-g_{10}(x)$  exceeds $90\%$  of its maximum on intervals of total length less (probably, much less) than $3\cdot 10^{-13}.$
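Both numeric claims in this conclusion check out; a quick sketch (with $\sigma$ taken from the text).

```python
import math

sigma = 0.00981                   # value of sigma computed in the text
print(0.9 / 12, 7.6 * sigma)      # 0.075 versus 0.074556: indeed larger
print(math.exp(-7.6 ** 2 / 2))    # just below 3e-13, as claimed
```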

But, again, this monster is not the worst. In order to get a more monstrous lacunary trigonometric series one may try frequencies $\lambda _{n}$  increasing faster than $3^{n},$  or coefficients decreasing slower than $1/n^{2},$  or both. Unexpectedly, or not so unexpectedly, such functions, being more monstrous analytically, are more tractable probabilistically. (In the paper by Delbaen and Hovhannisyan, note the coefficients $1/n^{\alpha }$  for $\alpha <{\tfrac {1}{2}}$  in Remark 2.3; note also the "big gaps" theorems 1.4, 2.15, 2.16 for ${\tfrac {\lambda _{n+1}}{\lambda _{n}}}\to \infty .$ ) This is a special case of a general phenomenon formulated by De Bruijn as follows:

It often happens that we want to evaluate a certain number [...] so that the direct method is almost prohibitive [...] we should be very happy to have an entirely different method [...] giving at least some useful information to it. And usually this new method gives (as remarked by Laplace) the better results in proportion to its being more necessary [...]