Decision-making has always been an important part of commercial life. Organisations make decisions about their customers and employees daily. In the past, most of these decisions were made manually by human beings. Whether humans make good or bad decisions is debatable, but in recent times more and more decision-making is being automated, in various forms. This means the decision-making process itself is automated rather than carried out by a human being.

Big data needs artificial intelligence to make sense of it. Many organisations feed big data to artificial intelligence (AI) to create profiles, and through machine learning the AI then makes autonomous decisions (a type of automated decision) about data subjects that have serious consequences for them. Machines decide what insurance people get, whether (and how much) credit they receive, what they watch, what they buy, what gets marketed to them, and what political messages they see. This has obvious potential to cause harm.

How do we know that a machine, robot, or AI is not discriminating or being biased? What happens if it makes a mistake? Understandably, people are concerned that machines should not make decisions about humans without the necessary protections in place. Machines and artificial intelligence will struggle to distinguish between right and wrong.

This is why the law has introduced regulations relating to automated decision-making. Data protection laws regulate how AI makes automated decisions, and it is important that you (as a controller) ensure your decision-making is lawful.


  1. Know the regulatory framework regarding automated decision-making by getting a practical overview.
  2. Ensure your automated decision-making is lawful by knowing what suitable measures to put in place.
  3. Know when it is lawful, in terms of data protection law, to use artificial intelligence to make decisions by looking at examples.
