What We Know We Don’t Know About Algorithms (part 1)

Taking a step back to classify the problem at hand from the algorithms perspective can make the difference between failure and success, between software masterpiece and technical debt.

written by Mihai Serban (Java Software Engineer), in the April 2023 issue of Today Software Magazine.

Read the second part of the article here

I believe we, developers, have started to forget the importance of algorithms that reside at the core of what we do. They are hidden beneath frameworks and libraries, mischievously eluding our awareness as we are caught up with daily tasks.

I will try to persuade you that paying attention to them, taking a step back to classify the problem at hand from the algorithms perspective can make the difference between failure and success, between software masterpiece and technical debt. Let us dive into it.

First, what is an algorithm? According to Google, it is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.” According to Wikipedia, it is “a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation”.

In that regard, it is common knowledge that adding two numbers is not an algorithm. Neither is the Ubuntu operating system. There must be something in between that is not trivial and also has a bounded scope: something you can apply to multiple situations and use to classify and understand problems.

What algorithms do you think of when someone asks you to name one? Let us start with the first ones and build up from there. It is difficult to settle on the starting point of algorithms, but few dare to go back further than the ancient Babylonian clay tablets.

Recently decoded, they contain Pythagorean numbers written in base 60 (from which we get our base for time measurements and for angles measured in degrees; Babylonian years had 360 days, as the Earth covers approximately 1 degree each day around the Sun, conveniently for their base-60 number system). Read more about the Babylonian Plimpton 322 Clay Tablet here.

However, numbers and techniques for drawing a right angle can hardly be called "algorithms". That is why some credit Eratosthenes and his Sieve, a method for finding all prime numbers up to a given limit, devised while he was measuring the circumference of the Earth and serving as chief librarian of the Library of Alexandria.
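The Sieve of Eratosthenes is simple enough to sketch in a few lines of modern Java. This is a minimal, straightforward version (not an optimised one):

```java
import java.util.ArrayList;
import java.util.List;

public class Sieve {
    // Sieve of Eratosthenes: cross out the multiples of each prime,
    // leaving only the primes up to the given limit.
    static List<Integer> primesUpTo(int limit) {
        boolean[] composite = new boolean[limit + 1];
        List<Integer> primes = new ArrayList<>();
        for (int p = 2; p <= limit; p++) {
            if (!composite[p]) {
                primes.add(p);
                // start at p*p: smaller multiples were crossed out already
                for (long m = (long) p * p; m <= limit; m += p) {
                    composite[(int) m] = true;
                }
            }
        }
        return primes;
    }

    public static void main(String[] args) {
        System.out.println(primesUpTo(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```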


I personally prefer to place the birth of algorithms in the hands of Euclid, probably the most influential figure in the history of mathematics, as the father of geometry (through his work in "Elements", reportedly still second only to the Bible in number of editions published).

The Elements also contains an idea, now called the fundamental theorem of arithmetic, that started what is now number theory: every integer greater than 1 has a unique prime factorization, which makes prime numbers the building blocks of all numbers.
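The theorem can be seen in action with simple trial division, a sketch of which follows; the resulting list of primes is the unique factorization the theorem guarantees:

```java
import java.util.ArrayList;
import java.util.List;

public class Factorize {
    // Trial division: repeatedly divide out the smallest remaining factor.
    // By the fundamental theorem of arithmetic, the resulting list of
    // primes is unique (up to ordering) for every n > 1.
    static List<Integer> primeFactors(int n) {
        List<Integer> factors = new ArrayList<>();
        for (int d = 2; (long) d * d <= n; d++) {
            while (n % d == 0) {
                factors.add(d);
                n /= d;
            }
        }
        if (n > 1) factors.add(n); // whatever remains is itself prime
        return factors;
    }

    public static void main(String[] args) {
        System.out.println(primeFactors(360)); // 360 = 2·2·2·3·3·5
    }
}
```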


Even for the ancient Greeks, primes were more than an interesting curiosity. The theorem above gave them an effective way to simplify fractions, which was especially useful in construction and in dividing goods.

Because it was cumbersome to decompose the numerator and denominator into their prime factors and perform simplifications this way, Euclid devised an algorithm to compute the greatest common divisor of two numbers (the gcd function) efficiently. If you studied Computer Science, you will remember that one of the first algorithms you implemented was the gcd or the primality test.

If we allow ourselves a quick time-travel jump 2,500 years into the future, this is how the gcd algorithm might look in Java:

public int gcd(int a, int b) {
    while (a != b) {
        if (a > b) {
            a = a - b;
        } else {
            b = b - a;
        }
    }
    return b;
}
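With a gcd function in hand, simplifying a fraction comes down to two divisions. A minimal sketch, using the same subtraction-based gcd as above:

```java
public class Simplify {
    // Euclid's subtraction-based gcd (assumes both arguments positive).
    static int gcd(int a, int b) {
        while (a != b) {
            if (a > b) a = a - b; else b = b - a;
        }
        return b;
    }

    public static void main(String[] args) {
        int num = 462, den = 1071;
        int g = gcd(num, den); // 21
        // divide both by the gcd to get the fraction in lowest terms
        System.out.println(num / g + "/" + den / g); // 22/51
    }
}
```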


Muhammad ibn Mūsā al-Khwārizmī

It can be argued that the history of algorithms starts with Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician who lived in Baghdad in the 9th century. The word "algorithm" is derived from his name. He was the one who introduced the decimal positional system to the world, in his book "On Calculation with Hindu Numerals".

It is not for this book that we call him the father of algorithms, however, but for a book that was requested of him by the Caliph, in which he had to compile a list of daily life problems and methods for solving them. This included, among other things, questions about the measurement of land, commercial transactions, and distribution of a legacy in the family.

He decided to start the book with a mathematical work that would abstract all concrete problems into what we now call algorithms. The famous book is called “The Compendious Book on Calculation by Completion and Balancing”. In Arabic, that would be “al-Kitāb al-Mukhtaṣar fī Ḥisāb al-Jabr wal-Muqābalah”, but the shorter version is known as al-Jabr, which we now simply call algebra, after the book that established the new mathematical field.

Ada Lovelace and the Generation of Bernoulli series algorithm

Some argue that to have an algorithm, one needs a machine to run it. If humans do it by hand, it is mere mathematics. They place the birth of algorithms in 1843, in a publication written by Ada Lovelace to prove the usefulness of a mechanical machine designed by Charles Babbage, which he called the Analytical Engine.

In Ada Lovelace’s notes to “Sketch of The Analytical Engine Invented by Charles Babbage by Luigi Menabrea”, she described how the Analytical Engine could calculate the Bernoulli numbers using a recursive algorithm, one that could run on a machine that had never been built.

Bernoulli numbers appear when calculating any sum of powers of natural numbers. Having a formula for them simplifies calculus. They are also connected to prime numbers and provide a way to distinguish between regular and irregular primes. As we will also see later, primes follow us everywhere. You can read more about Ada Lovelace’s algorithm and its history here.

See the Diagram of an algorithm for the Analytical Engine for the computation of Bernoulli numbers, from “Sketch of The Analytical Engine Invented by Charles Babbage by Luigi Menabrea” with notes by Ada Lovelace, here. The steps describe how the Analytical Engine would compute B8, which she called B7 in her notation.
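As a modern reconstruction (not Lovelace's actual table of steps, and using the modern indexing rather than hers), the Bernoulli numbers can be computed exactly from the standard recurrence sum over k from 0 to m of C(m+1, k)·B_k = 0 for m ≥ 1, with B_0 = 1. A sketch using exact fractions over BigInteger:

```java
import java.math.BigInteger;

public class Bernoulli {
    // Compute B_0..B_n exactly as fractions {numerator, denominator},
    // from the recurrence  sum_{k=0..m} C(m+1,k) * B_k = 0,  B_0 = 1.
    static BigInteger[][] bernoulliUpTo(int n) {
        BigInteger[][] b = new BigInteger[n + 1][];
        b[0] = new BigInteger[]{BigInteger.ONE, BigInteger.ONE};
        for (int m = 1; m <= n; m++) {
            BigInteger num = BigInteger.ZERO, den = BigInteger.ONE;
            for (int k = 0; k < m; k++) {
                // accumulate C(m+1, k) * B_k into num/den
                BigInteger addNum = binomial(m + 1, k).multiply(b[k][0]);
                num = num.multiply(b[k][1]).add(addNum.multiply(den));
                den = den.multiply(b[k][1]);
            }
            // B_m = -sum / C(m+1, m) = -sum / (m+1), then reduce via gcd
            num = num.negate();
            den = den.multiply(BigInteger.valueOf(m + 1));
            BigInteger g = num.gcd(den);
            if (g.signum() != 0) { num = num.divide(g); den = den.divide(g); }
            if (den.signum() < 0) { num = num.negate(); den = den.negate(); }
            b[m] = new BigInteger[]{num, den};
        }
        return b;
    }

    // C(n, k) computed incrementally; each intermediate division is exact.
    static BigInteger binomial(int n, int k) {
        BigInteger r = BigInteger.ONE;
        for (int i = 0; i < k; i++)
            r = r.multiply(BigInteger.valueOf(n - i)).divide(BigInteger.valueOf(i + 1));
        return r;
    }

    public static void main(String[] args) {
        BigInteger[] b8 = bernoulliUpTo(8)[8];
        System.out.println("B_8 = " + b8[0] + "/" + b8[1]); // B_8 = -1/30
    }
}
```

Note that the Euclidean gcd from earlier in the article reappears here, hidden inside `BigInteger.gcd`, to keep the fractions reduced.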

The Analytical Engine was never built, as it was not feasible at the time to produce mechanical components at the required precision and quality, but two examples of the Difference Engine No. 2 were eventually built: one by the Science Museum in London, completed in 1991, and another in the US, which is now at Intellectual Ventures in Seattle.

Luigi Menabrea’s paper, along with Ada Lovelace’s first complex algorithm designed for a machine, uncovered the possibilities of computing machines, fueling what would become the era of computing.

Below, you can see the marvellous first difference engine built, a machine of 8,000 parts and weighing about 5 tons.

As Menabrea cleverly intuited, in the future, mathematicians would no longer be needed for complex calculus. Workers would do just fine. We, the programmers, are the “workpeople” working on these machines, and indeed, we are no mathematicians.

“When once the engine shall have been constructed, the difficulty will be reduced to the making of the cards; but as these are merely the translation of algebraic formulae, it will, by means of some simple notation, be easy to consign the execution of them to a workman.”, said the Italian engineer and future prime minister Luigi Menabrea of the design of the Analytical Engine.

Read more about the London Science Museum’s difference engine, the first one actually built from Babbage’s design, here. The design has the same precision on all columns, but in calculating polynomials, the precision on the higher-order columns could be lower.

Nearly a century later, Alan Turing came along and proposed a Universal Turing Machine that could compute anything any other Turing machine could, inspiring the construction of programmable, electronic computers.

In Alan Turing’s theoretical design, the Turing Machine was the program (the algorithm), passed to the Universal Turing Machine along with the input. This gave a new dimension to programmability and defined terms like algorithm and computability in mathematical terms, as he was disproving the decidability claim of David Hilbert’s program of formalizing mathematics.

Along with Kurt Gödel’s incompleteness theorems, we now know that, unfortunately, mathematics is neither complete nor decidable, but the byproduct of Turing’s proof, the Turing Machine, gave birth to computer science as we know it and to the information age that followed. He really is our role model and our founding parent for this achievement.

Immediately after, algorithms penetrated every field of our existence, detecting, searching, classifying, modelling, predicting, reporting, and optimising everything from the next best chess move and the next best time interval for the green traffic light to understanding and persuading human behaviour, proving theorems, and folding proteins.

Let us see how we can classify them to shed some light on the possibilities and methods offered by algorithms.

Classification: Recursive vs Iterative Algorithms

Remember the Euclid algorithm at the beginning of the article? There is another way to solve this problem, which I’m sure you will recognize instantly:

public int rgcd(int a, int b) {
    if (b == 0) {
        return a;
    }
    return rgcd(b, a % b);
}


The recursive version of the same algorithm you see above brings us to one of the ways we can classify algorithms: iterative versus recursive. In theory, they are equally valid; in practice, we might prefer the iterative approach, as each method call consumes stack memory (in Java, at least).

The algorithm above is a special case of recursion: it is tail-recursive, and in most functional programming languages today, it is optimised by the compiler. That can be a good reason to use Scala or Kotlin instead of Java if you plan to use recursion in your code.
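What a tail-call-optimising compiler does can be mimicked by hand in Java: since nothing happens after the tail call returns, the call can be replaced by reassigning the parameters and looping. A sketch of that mechanical transformation applied to rgcd:

```java
public class TailGcd {
    // rgcd(b, a % b) is a tail call: no work remains after it returns,
    // so a tail-call-optimising compiler can reuse the current stack
    // frame. In Java we perform the same transformation by hand:
    static int gcdLoop(int a, int b) {
        while (b != 0) {
            int r = a % b; // the "new arguments" of the tail call
            a = b;
            b = r;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcdLoop(462, 1071)); // 21
    }
}
```

This runs in constant stack space regardless of input size, which is exactly what `tailrec` in Kotlin or `@tailrec` in Scala guarantees at compile time.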

On a deeper level, the existence of the stack and of the compiler behind every programming language function makes each recursive function, in principle, an iterative one. Remember that, in essence, it runs on something like a Turing Machine.

Every recursive algorithm can be expressed iteratively and vice versa, and that is a fact. We know that because in the ’30s, three computational models were defined: Kurt Gödel’s general recursive functions, lambda calculus (introduced by Alonzo Church), and Turing machines, introduced by, of course, Alan Turing, and the three were proven to be equivalent.

These three gentlemen made life difficult for you, the developer: if you are asked to convert a recursive function into its equivalent iterative variant (as some projects or companies forbid recursion), you cannot tell them it is not possible.
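The standard trick for such a conversion is to replace the call stack with an explicit stack of your own. A sketch on a small binary-tree sum (the `Node` type here is an illustrative assumption, not from the article):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TreeSum {
    record Node(int value, Node left, Node right) {}

    // Recursive version: relies on the JVM call stack.
    static int sumRec(Node n) {
        if (n == null) return 0;
        return n.value + sumRec(n.left) + sumRec(n.right);
    }

    // Iterative version: the same traversal with an explicit stack,
    // immune to StackOverflowError on very deep trees.
    static int sumIter(Node root) {
        int total = 0;
        Deque<Node> stack = new ArrayDeque<>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            total += n.value;
            if (n.left != null) stack.push(n.left);
            if (n.right != null) stack.push(n.right);
        }
        return total;
    }

    public static void main(String[] args) {
        Node tree = new Node(1, new Node(2, null, null),
                                new Node(3, new Node(4, null, null), null));
        System.out.println(sumRec(tree) + " " + sumIter(tree)); // 10 10
    }
}
```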

Moreover, if you are an imperative style programmer and the team lead adds a comment in your pull request stating that “this can be implemented with pure functions to limit side effects”, it has already been proven that it can be done (yes, even in Java), you just need to figure out how.
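A toy illustration of that review comment, assuming a simple summation task: the imperative version mutates an accumulator, while the stream version expresses the same computation with no visible mutation.

```java
import java.util.List;

public class PureSum {
    // Imperative style: mutates a local accumulator in a loop.
    static int sumImperative(List<Integer> xs) {
        int total = 0;
        for (int x : xs) total += x;
        return total;
    }

    // Functional style: reduce builds the result from an identity
    // element and an associative function, with no mutation in sight.
    static int sumPure(List<Integer> xs) {
        return xs.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(3, 1, 4, 1, 5);
        System.out.println(sumImperative(xs) + " " + sumPure(xs)); // 14 14
    }
}
```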


We now say that something is computable, by definition, if a Turing Machine can implement it. We do not yet know whether every “effectively calculable” function is computable by a Turing Machine, because we have no formal definition of “effectively calculable” outside Turing’s.

From a practical point of view, we believe that every function which does not involve infinite loops is computable, so you cannot use this argument to tell your stakeholder that his requested feature is impossible to implement. There are, of course, problems that are undecidable: problems for which there cannot be any algorithm to solve them.

The most famous example is Alan Turing’s Halting Problem, which states that there cannot be an algorithm that determines whether a given algorithm (or Turing Machine), run on a given input, produces a result or runs indefinitely.

For this sort of problem, let us suppose that your stakeholder asks you to implement a code generation algorithm that optimises the company budget. In that case, you should start with a huge “spike” to research the possibilities of generating non-halting algorithms. I wish you good luck!

For a deeper dive into the current state of the issue, check the Church-Turing thesis. Hopefully, one day, we can turn it into a theorem and prove that we, the programmers, can really implement everything our stakeholders desire (given reasonable time and financial resources).
