
Advanced Mathematical Decision Making: What is it and How is it used in Society?

In the context of machine learning, advanced mathematical decision making is the process of feeding large amounts of data to AI (artificial intelligence) so that it can gain “knowledge and experience” by finding connections and rules. Mathematical algorithms are used to process data and draw conclusions from it. The aim is to create an unbiased, efficient and (in the long run) cheap way of processing information.

What are algorithms?

Strictly speaking, an algorithm is a set of steps that must be followed to achieve a certain result. If you want to make a cake, you follow a recipe. If you want to do your taxes, you have to follow certain mathematical formulas. All of these are algorithms.
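The tax example above can be sketched as a few fixed steps in code. The bands and rates below are made up purely for illustration, not any real tax system:

```python
def income_tax(income: float) -> float:
    """A toy progressive tax calculation: an algorithm is just these
    fixed steps, applied the same way every time. The bands and rates
    here are invented for illustration."""
    bands = [(12_500, 0.0), (50_000, 0.20), (float("inf"), 0.40)]
    tax = 0.0
    lower = 0.0
    for upper, rate in bands:
        if income > lower:
            # Tax only the slice of income that falls inside this band.
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(income_tax(30_000))  # 3500.0: only the slice above 12,500 is taxed
```

Whether the steps are followed by a person with a calculator or by a computer, the algorithm is the same.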

In everyday speech, the word algorithm is usually associated with a computer program or a formula that is too complicated for a layman to understand. This intuitive understanding - machine learning and AI algorithms - is the focus of this discussion.

How are algorithms used?

There are three major groups:

  1. Simple algorithms - algorithms that completely depend on human input. This means that we provide a formula that is going to be used for a certain task, and the computer is going to follow its rules. A good example would be Ofqual’s grading algorithm which we’re going to talk about a bit later.
  2. Machine learning and data mining algorithms - two branches of computer science that overlap significantly, but are very different in their purpose. Simply put, the goal of machine learning is to predict outcomes or replicate knowledge that we already have. We feed them large amounts of data and expect the algorithms to learn from it, make conclusions and give an accurate output, just like a human being would, but much faster and more efficiently.
    On the other hand, data mining focuses on discovering new properties in data. This is something humans could never do on their own, because of the sheer volume of data and our own predetermined cognitive patterns.
  3. Artificial intelligence - although similar to machine learning, AI is a much broader concept. The goal of AI is to mimic human intelligence, behavior and ability to solve previously unknown, complex problems. While machine learning is a tool, AI is the mind using it.
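The difference between the first two groups can be sketched in a few lines. In the first function the rule is written by a human; in the second, the rule (here just a pass/fail threshold) is derived from labelled examples. All numbers are invented for illustration:

```python
# A "simple algorithm": the rule is hard-coded by a human.
def rule_based_grade(score: float) -> str:
    return "pass" if score >= 50 else "fail"

# A minimal "machine learning" flavour: the rule is learned from data
# instead of being written down. Here we place the decision boundary
# midway between the lowest passing score and the highest failing one.
def learn_threshold(examples):
    passes = [score for score, label in examples if label == "pass"]
    fails = [score for score, label in examples if label == "fail"]
    return (min(passes) + max(fails)) / 2

data = [(35, "fail"), (42, "fail"), (55, "pass"), (61, "pass")]
print(learn_threshold(data))  # 48.5: a threshold inferred from the data
```

Real machine learning models learn far richer rules than a single threshold, but the principle is the same: the data, not the programmer, determines the behaviour.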

For the needs of this article, we will focus on simple algorithms and machine learning only.

Algorithms are all around us

Where are algorithms used?

Everywhere. The internet runs on algorithms. Banks, smartphones, social media, GPS, smart devices, dating apps - everything that makes our lives easier is there thanks to algorithms.

Personalized ads, recommendations and smart email organizers are nothing more than machine learning protocols running in the background to make your life easier. And bring more profit to those who run them.
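A recommendation engine of the kind described above can be sketched as a simple co-occurrence counter: suggest whatever item appears most often alongside things you have already viewed. This is a toy version with invented data; production systems are vastly more sophisticated:

```python
from collections import Counter

def recommend(user_history, all_histories):
    """Suggest the unseen item that most often co-occurs with items
    the user has already viewed. A toy co-occurrence recommender."""
    seen = set(user_history)
    counts = Counter()
    for history in all_histories:
        if seen & set(history):  # another user with overlapping taste
            counts.update(item for item in history if item not in seen)
    return counts.most_common(1)[0][0] if counts else None

histories = [["boots", "tent"], ["boots", "stove"], ["boots", "tent"]]
print(recommend(["boots"], histories))  # tent: it co-occurs most often
```

Multiply this by millions of users and products, and "people who bought X also bought Y" stops being a mystery.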

Many will argue that this is not a problem per se but merely a product of the digital era. Nothing has changed compared to the early 90s; human nature is just wrapped in a cloak of zeroes and ones.

While this is true, there are aspects we need to be mindful of. Algorithms are frequently used by governments and authorities, with very mixed results.

  • Data analytics are used to predict which children are at risk of domestic violence and neglect. One such project in east London was recently abandoned due to inaccurate and biased outputs.
  • Benefit claims are also trusted to the risk-evaluation algorithms. Some councils in Britain have also abandoned these protocols because of inaccurate or completely nonsensical decisions.
  • The UK Home Office stopped using an algorithm that supposedly helped with visa applications after it was found to have a racial bias.
  • Ofqual’s grading algorithm sparked nationwide outrage after the A-level grades were announced.

The list goes on. And while all the above might provoke the general public to grab pitchforks and shout “Death to HAL”, the situation is far from black and white. Many aspects of our lives would be completely crippled without algorithms, and most of those are tightly connected to our quality of living. For example:

  • Speech recognition software converts spoken words into text. There are countless applications, but one of the most important is aiding individuals with visual impairment. Having an easy, straightforward way of reading and writing is a huge leap for integration and empowerment.
  • Security features on digital devices have been greatly increased with the use of machine learning and data mining. Not only do these systems improve existing security protocols, they also discover hidden flaws and weaknesses that are virtually impossible for human beings to find.
  • Medical diagnosis has been significantly improved by algorithms. The large amount of data on collected medical knowledge and outcomes for certain diseases can help doctors make the right decision very quickly. 

As you can see, the benefits of using predictive analytics far outweigh the disadvantages. However, it is extremely important to keep the use of algorithms transparent, insist on their code being peer reviewed, and look for bias whenever there is a concern that it might exist.

Machine Learning

Consulting the public about implementation and possible concerns, especially when it comes to machine learning and algorithms used by governments, is essential. After all, human lives and wellbeing depend on the outcomes.

 “A core objective of a learner is to generalize from its experience.” While this statement is true for both human and artificial learners, there is a great danger in it. Human beings are not merely data, and their actions are never devoid of context. The subtlety needed to fully understand every individual’s potential or needs is something algorithms are not yet capable of.

And there is another problem. Algorithms are created and implemented by humans. Unfortunately, this means they can reflect the bias of their creators. To illustrate this point, we’ll take a closer look at the Ofqual grading scandal mentioned earlier.

What happened with Ofqual’s grading algorithm?

With the Covid-19 situation making standard examination impossible, the Office of Qualifications and Examinations Regulation (Ofqual) resorted to using an algorithm to determine students’ final exam grades. This algorithm was supposed to generate A-level (advanced level) grades for students nationwide accurately, avoiding bias and undesirable human factors.

Except that it didn’t.

It achieved quite the opposite. Let’s see how the algorithm (in this case, the whole thing is just a relatively simple formula) was supposed to work.

Exams in the UK were cancelled due to Covid-19

How it works

Here’s the Ofqual formula used to assess the students’ grades:

Pkj = (1-rj)Ckj + rj(Ckj + qkj - pkj)

Pkj stands for the final grade distribution - in effect, the grade the algorithm assigns to a student ranked at a given position.

Centre assessed grades, or CAGs, are the teachers’ assessments of each student’s likely grade. Teachers were also asked to rank their students, the highest rank going to the student with the “strongest” grade.

Ckj stands for the grade distribution at the given school over the previous three years (2017 to 2019). This factor was introduced to reduce teachers’ bias - whether personal bias towards a student or an (un)conscious desire to raise the school’s overall average grade.

pkj is the predicted grade distribution based on the class the student is attending. The class’s average grades in previous GCSEs (General Certificate of Secondary Education) determine the prediction. For example, if the class had only 3% A* students in the previous three years, any students over that proportion would automatically be downgraded to A.

qkj serves as a sort of buffer. For example, if classes in the previous three years were predicted to do poorly but did well, the same might happen this year.

rj tells the algorithm how many students have data from previous years and GCSEs. If every result is available, then rj = 1; if none, then rj = 0. Basically, the equation is formulated so that this information is taken into account when available and ignored when there is little to no data - that is where the expression rj(Ckj + qkj - pkj) comes in.
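Putting the formula together, it can be written as a one-line function. The input values below are invented solely to show how rj switches between the two behaviours:

```python
def ofqual_pkj(c_kj: float, q_kj: float, p_kj: float, r_j: float) -> float:
    """Pkj = (1 - rj) * Ckj + rj * (Ckj + qkj - pkj)

    c_kj: the school's historical grade distribution (2017-2019)
    q_kj: the buffer term for the current cohort
    p_kj: the prediction from the class's previous GCSE results
    r_j:  fraction of students with prior data (0 = none, 1 = all)
    """
    return (1 - r_j) * c_kj + r_j * (c_kj + q_kj - p_kj)

# With no prior data (rj = 0), the output is just the school's history:
print(ofqual_pkj(c_kj=0.30, q_kj=0.35, p_kj=0.25, r_j=0.0))  # 0.3
# With full prior data (rj = 1), the history is adjusted by (qkj - pkj):
print(round(ofqual_pkj(c_kj=0.30, q_kj=0.35, p_kj=0.25, r_j=1.0), 2))
```

Notice that the student’s own work appears nowhere in the formula: the output is dominated by the school’s past performance.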

This system has obvious flaws. For a start, it immediately penalizes above-average students in badly performing classes or schools. But overall it seems like a workable approach, as long as it is applied equally to everyone and adjusted accordingly. Right?

There is a very important detail that was intentionally omitted in the previous paragraph. This formula was applied to schools with n >= 15, meaning it was used for classes with 15 or more students. Those would be all state schools with open access policies - your regular schools for average to low income families.

And what was the formula for n < 15 - that is, for private schools?

Pkj = CAG

That was all. An utterly appalling, outrageous example of England’s class system at work. If a student is from a private school, their grade is simply their teacher’s assessment. Everyone else’s knowledge and abilities are obviously not to be trusted, and postcodes take precedence over students’ abilities and teachers’ insights.
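The two-track system described above boils down to a single branch on class size. This sketch follows the article’s binary cut-off at 15; the variable names and numbers are illustrative only:

```python
def final_distribution(n_students: int, cag: float, c_kj: float,
                       q_kj: float, p_kj: float, r_j: float) -> float:
    """The two-track rule: small cohorts (common in private schools)
    keep the teacher's centre-assessed grade unchanged, while cohorts
    of 15 or more go through the standardisation formula."""
    if n_students < 15:
        return cag  # Pkj = CAG: teacher assessment taken at face value
    return (1 - r_j) * c_kj + r_j * (c_kj + q_kj - p_kj)

# Same teacher assessment, same history - only the class size differs:
print(final_distribution(10, cag=0.50, c_kj=0.30, q_kj=0.35,
                         p_kj=0.25, r_j=1.0))  # 0.5: CAG kept
print(round(final_distribution(20, cag=0.50, c_kj=0.30, q_kj=0.35,
                               p_kj=0.25, r_j=1.0), 2))
```

Two students with identical teacher assessments could receive different grades purely because of where they went to school.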

The aftermath

After announcing the results, the government received immense criticism. After some wiggling, desperate efforts to mitigate the damage and ridiculous statements in their defence, they gave in and decided to default to CAGs for everyone.

This might seem like a fair solution given the circumstances, but what it actually did is unload the burden to universities. Top-tier universities now have a capacity issue which is to be resolved by the beginning of the school year.

Obviously, the whole mess was blamed on a faulty algorithm, as if it was a living, thinking creature with malicious intent.

But it is not, it is merely a tool. The existence of the algorithm is far from harmful on its own. The incompetence and bias of those who created the system is the only real problem here.


There is no way to get around the use of algorithms in this day and age. If we were to wipe them all out and start from scratch, society as we know it would crumble into chaos. Algorithms are everywhere and this is not an issue per se. The issue is, as always, the human factor.

To quote Hannah Fry, the author of the book Hello World: Being Human in the Age of Algorithms: “Algorithms are not perfect, and they often contain the biases of the people who create them, but they’re still incredibly effective and they’ve made all of our lives a lot easier. So I think the right attitude is somewhere in the middle: We shouldn’t blindly trust algorithms, but we also shouldn’t dismiss them altogether.”

Whether we like it or not, biases are a part of human nature. Even if the bias is not included intentionally, there are oversights that are purely cultural or environmental. One good way to fight such problems is to use open-source algorithms that can be modified under reasonable circumstances. Open debate, transparency and awareness can help us to avoid a societal nightmare.

And of course, if we didn’t have algorithms, we wouldn’t have cryptocurrencies or online gambling, and what kind of world would that be? So buy some bitcoin, choose your favourite game and try your luck here on BetBtc. And let the algorithms worry about the rest.
