In the context of machine learning, advanced mathematical decision making is the process of feeding large amounts of data to an AI (artificial intelligence) system so that it gains “knowledge and experience” by finding connections and rules. Mathematical algorithms are used to process the data and draw conclusions from it. The aim is to create an unbiased, efficient and (in the long run) cheap way of processing information.
Strictly speaking, an algorithm is a set of steps that must be followed to achieve a certain result. If you want to bake a cake, you follow a recipe. If you want to do your taxes, you apply certain mathematical formulas. All of these are algorithms.
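To make that idea concrete, here is a minimal sketch in Python of a toy tax calculation written out as an explicit set of steps. The thresholds and rates are invented purely for illustration; the point is only that following the same steps on the same input always gives the same result.

    # A toy, hypothetical progressive tax calculation written as explicit steps.
    # The thresholds and rates below are invented for illustration only.
    def tax_owed(income):
        brackets = [(10_000, 0.0), (40_000, 0.10), (float("inf"), 0.20)]
        owed = 0.0
        lower = 0
        for upper, rate in brackets:
            if income > lower:
                owed += (min(income, upper) - lower) * rate
            lower = upper
        return owed

    print(tax_owed(55_000))  # the same steps give the same answer every time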
In everyday speech, the word algorithm is usually associated with a computer program or a formula too complicated for a layman to understand. That intuitive meaning, covering machine learning and AI algorithms, is the focus of this discussion.
There are three major groups:
For the needs of this article, we will focus on simple algorithms and machine learning only.
Everywhere. The internet runs on algorithms. Banks, smartphones, social media, GPS, smart devices, dating apps: everything that makes our lives easier is there thanks to algorithms.
Personalized ads, recommendations and smart email organizers are nothing more than machine learning protocols running in the background to make your life easier. And bring more profit to those who run them.
Many will argue that this is not a problem per se but merely a product of the digital era. Nothing has changed compared to the early 90s; human nature is just wrapped in a cloak of zeroes and ones.
While this is true, there are aspects we need to be mindful of. Algorithms are frequently used by governments and authorities, with very mixed results.
The list goes on. And while all the above might provoke the general public to grab pitchforks and shout “Death to HAL”, the situation is far from black and white. Many aspects of our lives would be completely crippled without algorithms, and most of those are tightly connected to our quality of living. For example:
As you can see, the benefits of using predictive analytics outweigh the disadvantages many times over. However, it is extremely important to keep the use of algorithms transparent, insist on their code being peer reviewed, and look for bias whenever there is a concern that it might exist.
Consulting the public about implementation and possible concerns, especially when it comes to machine learning and algorithms used by governments, is essential. After all, human lives and wellbeing depend on the outcomes.
“A core objective of a learner is to generalize from its experience.” While this statement is true for both human and artificial learners, there is a great danger in it. Human beings are not merely data, and their actions are never devoid of context. The subtlety needed to fully understand every individual’s potential or needs is something algorithms are not yet capable of.
And there is another problem. Algorithms are created and implemented by humans. Unfortunately, this means they can reflect the bias of their creators. To illustrate this point, we’ll take a closer look at the recent Ofqual grading scandal that we mentioned earlier.
Due to the Covid-19 pandemic, which made standard examinations impossible, the Office of Qualifications and Examinations Regulation (Ofqual) resorted to using an algorithm to determine students’ final exam grades in England and Northern Ireland. This algorithm was supposed to accurately generate A-level (advanced level) grades for students nationwide, avoiding bias and undesirable human factors.
Except that it didn’t.
It achieved quite the opposite. Let’s see how the algorithm (in this case, the whole thing is just a relatively simple formula) was supposed to work.
Here’s the Ofqual formula used to assess the students’ grades:
Pkj = (1-rj)Ckj + rj(Ckj + qkj - pkj)
Pkj stands for the final grade given to the student.
Centre assessed grades, or CAGs, are the teachers’ assessments of each student’s grade. Teachers were also asked to put these grades into a rank order, the highest rank going to the students with the “strongest” claim to their grade.
Ckj stands for the grade distribution at the given school over the last three years (2017 to 2019). This factor was introduced to reduce teacher bias, whether personal bias towards a student or the (un)conscious desire to raise the school’s overall average grade.
pkj is the predicted grade distribution based on the class the student is attending. The average grade in previous GCSEs (General Certificate of Secondary Education) determines the prediction for that class. For example, if the class had only 3% of A* students in the previous three years, any students above that proportion would automatically be downgraded to A.
qkj serves as a sort of buffer: if classes in the previous three years were predicted to do poorly but did well, the same might happen this year.
rj tells the algorithm how many students have data from previous years and GCSEs. If every student’s results are available, then rj = 1; if none are, then rj = 0. Basically, the equation is formulated so that this information is taken into account when it is available and ignored when there is little to no data; that is where the expression rj(Ckj + qkj - pkj) comes in.
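To see how the pieces fit together, here is a minimal sketch of the blended formula as described above, written in Python. This is an illustration only, not Ofqual’s actual code: the variable names simply mirror the terms of the equation, each term is treated as a proportion of students expected to receive a given grade (matching the description of Ckj as a distribution), and the sample numbers are invented.

    # Illustrative sketch of the blending formula described above, not Ofqual's code.
    # Each term is treated as a proportion of students expected to get a given grade.
    def predicted_grade_distribution(C_kj, p_kj, q_kj, r_j):
        # With full prior data (r_j = 1) the historical figure C_kj is shifted by (q_kj - p_kj);
        # with no prior data (r_j = 0) the result is simply the historical figure C_kj.
        return (1 - r_j) * C_kj + r_j * (C_kj + q_kj - p_kj)

    # Invented numbers, purely to show the arithmetic:
    print(round(predicted_grade_distribution(C_kj=0.30, p_kj=0.25, q_kj=0.28, r_j=1.0), 2))  # 0.30 + (0.28 - 0.25) = 0.33
    print(round(predicted_grade_distribution(C_kj=0.30, p_kj=0.25, q_kj=0.28, r_j=0.0), 2))  # falls back to 0.30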
This system has obvious flaws. For a start, it immediately penalizes an above-average student in a badly performing class or school. But overall it seems like a workable approach, as long as it is applied equally to everyone and adjusted accordingly. Right?
There is a very important detail that was intentionally omitted in the previous paragraph. This formula was applied to schools with “n >= 15”, meaning it was used for classes with 15 or more students. Those are mostly state schools with open access policies, or your regular schools for average- to low-income families.
And what was the formula for the “n<15” or private schools?
Pkj = CAG
That was all. An utterly appalling, outrageous example of England’s class system at work. If a student is from a private school, their grade is simply their teacher’s assessment. Everyone else’s knowledge and abilities are obviously not to be trusted, and postcodes take precedence over students’ abilities and teachers’ insights.
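Putting the two cases together, the grading rule described above comes down to a branch on class size. Again, this is only a sketch under the article’s description, reusing the hypothetical helper from the earlier example and assuming the CAG can be expressed on the same scale as the formula’s output.

    # Illustrative sketch of the cohort-size rule described above, not Ofqual's code.
    # predicted_grade_distribution() is the hypothetical helper from the earlier sketch.
    def predicted_grade_distribution(C_kj, p_kj, q_kj, r_j):
        return (1 - r_j) * C_kj + r_j * (C_kj + q_kj - p_kj)

    def final_grade(n_students, cag, C_kj, p_kj, q_kj, r_j):
        if n_students >= 15:
            # Larger (mostly state-school) classes: standardised by the formula.
            return predicted_grade_distribution(C_kj, p_kj, q_kj, r_j)
        # Smaller (mostly private-school) classes: the teacher's assessment stands.
        return cag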
After announcing the results, the government received immense criticism. After some wiggling, desperate efforts to mitigate the damage and ridiculous statements in their defence, they gave in and decided to default to CAGs for everyone.
This might seem like a fair solution given the circumstances, but what it actually did was shift the burden onto universities. Top-tier universities now face a capacity issue that has to be resolved by the beginning of the academic year.
Obviously, the whole mess was blamed on a faulty algorithm, as if it was a living, thinking creature with malicious intent.
But it is not; it is merely a tool. The existence of the algorithm is far from harmful on its own. The incompetence and bias of those who created the system are the only real problem here.
There is no way to get around the use of algorithms in this day and age. If we were to wipe them all out and start from scratch, society as we know it would crumble into chaos. Algorithms are everywhere and this is not an issue per se. The issue is, as always, the human factor.
To quote Hannah Fry, the author of the book Hello World: Being Human in the Age of Algorithms: “Algorithms are not perfect, and they often contain the biases of the people who create them, but they’re still incredibly effective and they’ve made all of our lives a lot easier. So I think the right attitude is somewhere in the middle: We shouldn’t blindly trust algorithms, but we also shouldn’t dismiss them altogether.”
Whether we like it or not, biases are a part of human nature. Even if a bias is not included intentionally, there are oversights that are purely cultural or environmental. One good way to fight such problems is to use open-source algorithms that can be reviewed and modified under reasonable circumstances. Open debate, transparency and awareness can help us avoid a societal nightmare.
And of course, if we didn’t have algorithms, we wouldn’t have Cryptocurrencies OR online gambling, and what kind of world would that be? So buy some bitcoin, choose your favourite game and try your luck here on BetBtc. And let the algorithms worry about the rest.