The European Commission this week presented its proposed legislation on Artificial Intelligence, a first step toward bringing this technology under control: a clear signal that not everything is permitted and that certain rights must be respected.
This new regulation has two objectives:
– Guarantee safety in different areas, so that citizens' fundamental rights (including privacy) cannot be trampled on.
– Enable smarter investment in AI in Europe.
The proposal defines four levels of risk, although military uses will face fewer restrictions. The most severe level is unacceptable risk: any system that falls into it will be banned in all member countries. High-risk systems will have to be assessed, with their risks identified and with human supervision at all times.
Unacceptable risk and high risk
Among the most important points in the legislation are:
– Manipulating human behavior or inciting violence: unacceptable risk.
– Scoring citizens in order to differentiate between them: unacceptable risk.
– Cheating on exams or creating systems that affect education: high risk.
– Designing components to perform surgery: high risk.
– Designing professional recruitment systems: high risk.
– Remote biometric identification: high risk. Facial recognition in public areas will be prohibited, for example, although there will be exceptions (searching for missing children or preventing terrorist attacks).
Limited risk and low risk
Other, lower-risk points, falling under limited risk, include:
– Chatbots: users will need to be warned that they are talking to a machine.
Minimal risk covers things like video games and image editors.
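As an illustrative sketch only (not part of the legislation, and not legal definitions), the four-tier classification described above could be modeled like this, using the tier names and example use cases from the article:

```python
from enum import Enum

# The four risk tiers described in the Commission's proposal.
class Risk(Enum):
    UNACCEPTABLE = 4  # banned in all member states
    HIGH = 3          # requires risk assessment and human supervision
    LIMITED = 2       # transparency obligations (e.g. chatbot disclosure)
    MINIMAL = 1       # no special restrictions

# Informal mapping of the article's examples to tiers; the use-case
# labels here are hypothetical shorthands, not terms from the proposal.
EXAMPLES = {
    "social scoring of citizens": Risk.UNACCEPTABLE,
    "manipulating behavior / inciting violence": Risk.UNACCEPTABLE,
    "recruitment systems": Risk.HIGH,
    "remote biometric identification": Risk.HIGH,
    "surgical components": Risk.HIGH,
    "chatbots": Risk.LIMITED,
    "video games": Risk.MINIMAL,
    "image editors": Risk.MINIMAL,
}

def is_banned(use_case: str) -> bool:
    """Only systems at the unacceptable tier are banned outright."""
    return EXAMPLES.get(use_case, Risk.MINIMAL) is Risk.UNACCEPTABLE
```

Under this sketch, `is_banned("social scoring of citizens")` returns `True`, while chatbots merely carry a disclosure obligation.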
For now the regulation has only been presented; it still has to be approved before it can be implemented, so it will be several months before it takes effect.
You can read more at ec.europa.eu