Author's Commentary (Hong Kong, China, 31/03/2026): The exam period is approximately three weeks away, and I will have to balance assignments, exam preparation, some curricular projects, a possible research project (a tremendous privilege), and additional extracurricular activities such as jiu-jitsu and student flag-guard duties (another tremendous privilege). There may also be interviews in preparation for potential summer internships, a graduation requirement of my undergraduate program. Nonetheless, I feel lighter, and am excited to continue my persistent self-study, with an increasing emphasis on artificial intelligence.
One must remember that there are approximately 108-116 waking hours per week (depending on the person), and I am set to finish a 13-hour productivity day today.
1 An Evolutionary Algorithm in the Theory of Minimal Surfaces
Recently, I came across certain topics in evolutionary optimization algorithms, and my attempts to understand and experiment with them produced many interesting and fruitful results. In fact, an evolutionary algorithm proved extremely promising for a classical problem of the calculus of variations: finding the curve that minimizes the following,
$$2 \pi \int_{x_0}^{x_1} y \sqrt{1 + \left( \frac{dy}{dx} \right)^2} dx$$
As is well known, an application of the Euler-Lagrange differential equation furnishes a solution of the form (for real constants $C$ and $C_1$),
$$y = C \cosh \frac{x + C_1}{C}$$
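For completeness, the standard derivation can be sketched. Since the integrand $F(y, y') = y \sqrt{1 + (y')^2}$ does not depend explicitly on $x$, the Beltrami identity applies,

$$F - y' \frac{\partial F}{\partial y'} = \frac{y \left( 1 + (y')^2 \right) - y (y')^2}{\sqrt{1 + (y')^2}} = \frac{y}{\sqrt{1 + (y')^2}} = C$$

Solving for $y'$ gives $y' = \sqrt{y^2 - C^2} / C$, and separating variables yields $dx = C \, dy / \sqrt{y^2 - C^2}$, whose integral is $x + C_1 = C \cosh^{-1}(y / C)$, i.e., exactly the catenary above.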
The curve is the catenary. In fact, revolving this catenary around the $x$-axis produces a surface of minimal area among certain classes of curves (i.e., non-pathological ones), a classic result of the theory of minimal surfaces, described succinctly by the given integral.
But could such a curve be produced by an evolutionary algorithm? Even a neuroevolution model? Yes, and the results were quite impressive (at least from my vantage point). The possibility did not appear immediately obvious: evolutionary optimization algorithms are sometimes justified from the perspective of schema theory (at least in introductory chapters), but the subtlety lies in expressing non-trivial genotypes for the stated problem, and in their encoding. The first approach I opted for was to consider the class of polynomials of degree 21; it is their coefficients that are randomized, and these provide the encoding for the evolutionary algorithm (we note that polynomials can approximate the hyperbolic cosine to an arbitrary degree of accuracy on closed intervals, by such theorems as the Stone-Weierstrass theorem).
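As a quick sanity check of this encoding choice (an illustrative sketch, not the post's actual code), one can verify numerically that a degree-21 polynomial represents the hyperbolic cosine essentially exactly on a closed interval, as Stone-Weierstrass guarantees, so a vector of 22 coefficients is a viable genotype for this problem:

```python
import numpy as np

# Fit a degree-21 polynomial to cosh on [-1, 1] and measure the worst-case
# error. The tiny residual confirms the coefficient vector can encode the
# target curve to far finer resolution than the EA's fitness needs.
x = np.linspace(-1.0, 1.0, 400)
coeffs = np.polyfit(x, np.cosh(x), deg=21)   # 22 coefficients = one genome
max_err = np.max(np.abs(np.polyval(coeffs, x) - np.cosh(x)))
print(max_err)  # far smaller than any error scale relevant to the EA
```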
The algorithm can be described briefly as follows:
- Population Initialization - A set of candidates is initially randomly generated.
- Fitness Evaluation - The fitness scores of candidates are computed.
- "Elitism" Filter - Candidates with the highest fitness scores are copied to the next generation, clearly important for such variational problems.$^{[1]}$
- "Breeding" - Consisting of tournament selection, crossover, and mutation.
And the fitness is,
$$\text{fitness}(y) = - \int_{x_0}^{x_1} y \sqrt{1 + \left( \frac{dy}{dx} \right)^2} dx$$
The results are as follows (I recommend clicking on the GIF to enlarge),
I initially thought there was a mistake, since the first generation immediately produced a curve that resembled the catenary quite closely. So I decided to produce more visualizations, and they show that a single generation is sufficient for the evolutionary algorithm to produce a curve that almost solves the variational problem in one shot. The result surprised me, to say the least: having a somewhat traditional mathematical background, I had never encountered evolutionary algorithms in any traditional books on variational analysis or the classical calculus of variations, yet the results are remarkably successful and efficient (a comprehensive study of their efficiency relative to other methods will not be provided here; such a study would require a comprehensive survey, suited perhaps for a thesis).
Of course, we also got lucky in some of the simulations, since the initial population is randomized. But this also shows the aptness of our particular "genotypic" approach.
For further visualization, the "genotypic profiles" are as follows,$^{[2]}$
And regarding convergence, we see that it is not perfectly uniform (an unlikely outcome in any case), hence the lack of perfect alignment between the different function norms,$^{[3]}$
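The gap between the norms is easy to reproduce on synthetic data (an illustrative sketch, not the post's actual measurements): a single localized "kink" barely moves the averaged $\mathbb{L}^1$ and $\mathbb{L}^2$ errors but dominates the $\mathbb{L}^{\infty}$ error.

```python
import numpy as np

# Compare error norms between the true catenary and a candidate that matches
# it everywhere except for one small, local bump. The averaged norms dilute
# the bump over the whole interval; the sup norm sees it at full height.
x = np.linspace(-1.0, 1.0, 1001)
target = np.cosh(x)

candidate = target.copy()
candidate[480:520] += 0.05 * np.hanning(40)   # a small local "kink" near x = 0

err = np.abs(candidate - target)
dx = x[1] - x[0]
l1 = np.sum(err) * dx                  # L1 error (averaged)
l2 = np.sqrt(np.sum(err**2) * dx)      # L2 error (averaged)
linf = err.max()                       # L-infinity error (worst case)
print(l1, l2, linf)
```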
Fitness, in the theory of evolutionary optimization algorithms, is the driving criterion that selects the next generations: fitness in the sense of satisfying a fitness criterion (consider it almost analogous to the loss function for neural networks). With such a criterion in hand, mathematical problems can be solved by evolutionary procedures, which is entirely unsurprising. In fact, the stated problem of this blog, belonging to the theory of minimal surfaces, is quite ubiquitous in mathematical biology, in part due to notions of potential-energy minimization in many natural problems of the sciences.
Intelligent optimization, then, whether through evolutionary optimization algorithms or swarm intelligence,$^{[4]}$ has interesting potential indeed for becoming integrated into the fourth industrial revolution.
Footnotes
[1] There is one unique solution, the catenary, and so it is important to force one's way toward it, in a certain sense. Now, if there existed complex problems with multiple open-ended alternatives/solutions, then the "elitism" filter may not always be justified, since a lack of fitness locally may not imply a lack of fitness globally.
[2] One can clearly draw the analogy with DNA-based genotypic profiles; incidentally, SNP distribution profiles (single-nucleotide-polymorphism distribution profiles) are very commonly used as features in machine learning and deep learning.
[3] The fact that the $\mathbb{L}^{\infty}$ error is the greatest shows that there are some "kinks" that are not ironed out, but are averaged away in the other, averaged errors. Thus, if one is interested in finer methods to fine-tune these kinks, additional methods will have to supplement the evolutionary algorithm.
[4] For more, please see Discovering Extrema due to the Stochastic Heat Differential Equation via the Bat Algorithm or Machine Learning Mini-Project Series V: CNN Automated Hyperparameter Tuning via Bat Swarming.


