Video game law for noobs
These days, everything’s a game.
One of the core uses of technology is to streamline the ever-more-complicated human world. Things that used to require separate trips, separate buildings – even separate parts of the city – can now be done in one place, on one device. You can bank, shop, communicate and play games on your phone, and these once-distant spheres of your life have been bundled together, merged and overlapping.
And so have the laws regulating them.
The result is that areas of law that were once far apart have steadily crept closer together, forcing businesses to pay attention to areas they had probably never considered before. Businesses that never set out to get involved in digital entertainment are now finding themselves deeply invested in it as they follow their customers into social media, video games, streaming and more.
You may not be focusing on digital entertainment law, but digital entertainment law is certainly focusing on you.
Outcomes:
- Understand how changes in the law affect the content you produce on the internet by listening to experts.
- Prepare your business for broader and more involved social media regulation by understanding the definitions and regulatory patterns.
- Consider how to leverage the human need for play to promote your business by looking at examples.
Lawful automated decision-making using big data and profiling
Decision-making has always been an important part of commercial life. Organisations have to make decisions about their customers and employees on a daily basis. In the past, most of these decisions were made manually by human beings. Whether humans make good decisions is debatable, but in recent times more and more decision-making has been automated in various forms, which means the decision is no longer made by a human being at all.
Organisations need artificial intelligence (AI) to make sense of big data. Many are feeding big data to AI to create profiles, and through machine learning the AI is making autonomous decisions (a type of automated decision) about data subjects that have serious consequences for them. Machines are deciding what insurance people get, whether and how much credit they get, what they watch, what they buy, what gets marketed to them and which political messages they receive. This has obvious potential to cause harm.
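To make the idea concrete, here is a minimal, hypothetical sketch of what automated decision-making using profiling can look like in code. The scenario, profile fields, weights and cut-off are all invented for illustration and are not taken from any real system or from the law discussed here; real systems typically use machine-learned models rather than hand-written rules.

```python
# A minimal, hypothetical sketch of automated decision-making using profiling.
# The profile fields, weights and threshold are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Profile:
    """A simple data-subject profile built from collected (big) data."""
    monthly_income: float
    existing_debt: float
    years_at_address: int
    missed_payments: int


def credit_decision(profile: Profile) -> dict:
    """Return an automated credit decision with no human in the loop."""
    # Toy scoring rule: reward income and stability, penalise debt and missed payments.
    score = (
        0.4 * profile.monthly_income
        - 0.6 * profile.existing_debt
        + 50 * profile.years_at_address
        - 200 * profile.missed_payments
    )
    approved = score > 1000  # arbitrary cut-off the data subject never sees
    return {
        "approved": approved,
        "score": round(score, 2),
        # Suitable measures often include telling people that a machine made
        # the decision and offering a route to human review.
        "decided_by": "automated system",
        "human_review_available": True,
    }


if __name__ == "__main__":
    applicant = Profile(monthly_income=3200, existing_debt=1500,
                        years_at_address=2, missed_payments=1)
    print(credit_decision(applicant))
```

The point is not the arithmetic but the fact that a decision with real consequences is reached entirely by a machine, which is exactly the situation the regulations below are concerned with.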
How do we know that the machine, robot or AI is not discriminating or being biased? What happens if it makes a mistake? Understandably, people are concerned that machines should not be making decisions about humans without the necessary protections in place. Machines and artificial intelligence struggle to distinguish between right and wrong.
This is why lawmakers have introduced regulations relating to automated decision-making. Data protection laws regulate how AI makes automated decisions, and it is important that you (as a controller) ensure your decision-making is lawful.
Outcomes:
- Know the regulatory framework regarding automated decision-making by getting a practical overview.
- Ensure your automated decision-making is lawful by knowing what suitable measures to put in place.
- Know when it is lawful in terms of data protection law to use artificial intelligence to make decisions by looking at examples.
Regulating AI in the Middle East and Africa (MEA)
The future of AI is in the Middle East and Africa.
Many countries in the region have embraced this idea, advancing their national development plans to provide for a future in which AI plays a significant role.
Yet, we know that AI can harm people and organisations. So, it must be regulated.
But the world is grappling with how to regulate AI. It is an advanced and fast-moving technology, regulating it means balancing human rights against technological advancement, and regulators have to navigate the power asymmetries between consumers and technology companies.
While the EU is close to passing the Artificial Intelligence Act, the MEA region could lead AI regulation in its own right, or it could follow the EU's lead.
Either way, businesses should respond proactively to AI regulation by developing a robust AI governance programme that informs the entire AI lifecycle.
Outcomes:
- Discover the latest developments by hearing from thought leaders.
- Create AI your stakeholders can trust by considering the regulatory environment.
- Develop an AI governance programme for your organisation by seeing the latest industry trends.
The Metaverse – a tour
To enliven our brains