The World Economic Forum claimed in 2016 that we were going through the fourth Industrial Revolution, stating that “Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.”

As with previous milestones in the industrial revolution, the technologies that have brought about this fourth wave have replaced many of the roles previously undertaken by humans. In doing so, however, these new technologies bring an array of new ethical concerns. Facial recognition in particular has gained a lot of attention recently, after researchers used the technology to predict criminality, with one member of the research team stating that:

“Identifying the criminality of [a] person from their facial image will enable a significant advantage for law-enforcement agencies and other intelligence agencies to prevent crime from occurring.”

This sparked considerable backlash against the research, with the Coalition for Critical Technology commenting that: “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.”

This research has renewed one of the many debates about unfair, and unwanted, bias in these technologies. Human-made models inevitably carry human biases; machine learning models in particular can reflect the biases of the team that built them, as well as of the data they are fed.

Is Bias Necessary for Machine Learning?

Whilst certain unfortunate biases can come with AI and machine learning technologies, broadly speaking, bias is also the foundation of machine learning. For example, a model that predicts breast cancer will be biased towards certain results, depending on the information it is fed and the details of the patient – and this is an essential part of the model’s function.
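This point can be made concrete with a deliberately minimal sketch. The toy "classifier" below simply learns, for each feature value, the label most often seen alongside it in training data; its bias towards one outcome or another is exactly what the data taught it, which is the whole point of the model. The marker names and counts are invented for illustration, not drawn from any real dataset.

```python
from collections import Counter, defaultdict

def train(examples):
    """Learn, for each feature value, the most common label in training.
    The model's learned 'bias' toward a label is precisely its function."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    # Predict the majority label observed for each feature value.
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical screening records: (marker seen, outcome recorded).
data = (
    [("marker_a", "benign")] * 90 + [("marker_a", "malignant")] * 10
    + [("marker_b", "malignant")] * 70 + [("marker_b", "benign")] * 30
)

model = train(data)
print(model["marker_a"])  # benign    – a desirable, learned bias
print(model["marker_b"])  # malignant
```

The same mechanism that makes this model useful is what makes it dangerous when the training data is skewed: it will faithfully reproduce whatever pattern, fair or unfair, the data contains.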


A better way to help phase out unwanted bias from machine learning is to build these technologies around thorough ethical guidelines.

In 2019, the High-Level Expert Group on AI presented its “Ethics Guidelines for Trustworthy Artificial Intelligence”, stating that trustworthy AI should be:

  1. Lawful – respecting all applicable laws and regulations
  2. Ethical – respecting ethical principles and values
  3. Robust – both from a technical perspective and with regard to its social environment

These guidelines further claim that AI systems should also “empower human beings, allowing them to make informed decisions and fostering their fundamental rights” and that “Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination.”

Ensuring models are fair and ethical is a fundamental component in the development of AI. Whilst much of machine learning and artificial intelligence far surpasses the capabilities of humans, so long as we create and train these models, they are at risk of inheriting unfair and unwanted human prejudices.

These prejudices can be managed by raising awareness of them whilst developing machine learning technologies, paying greater attention to areas where unethical bias could arise, and working to inhibit it so that these models become fairer.
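One simple, concrete form that "paying greater attention" can take is auditing a model's decisions across groups. The sketch below, using only an invented decision log, computes the rate of positive decisions per group and the gap between them – a rough demographic-parity check that can flag a disparity worth investigating. The group names and numbers are hypothetical.

```python
from collections import defaultdict

def selection_rates(records):
    """Per group, the fraction of positive model decisions –
    a first step in auditing a model for unwanted bias."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision  # decision is 1 (approved) or 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical decision log: (group, 1 = approved, 0 = rejected).
log = (
    [("group_x", 1)] * 80 + [("group_x", 0)] * 20
    + [("group_y", 1)] * 50 + [("group_y", 0)] * 50
)

rates = selection_rates(log)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # {'group_x': 0.8, 'group_y': 0.5}
print(f"parity gap: {gap:.2f}")           # parity gap: 0.30
```

A large gap does not by itself prove the model is unfair, but it is exactly the kind of signal that a development team alert to unethical bias would stop and examine.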