The EU's AI Act takes on the 'seven-headed hydra' of tech

On Wednesday the European Parliament approved the Artificial Intelligence (AI) Act, making the EU one of the first regions to implement AI-specific legislation at this level.

“As it is one of the first and most comprehensive AI laws, when the EU AI Act comes into effect, it sets the bar for other markets and regions such as the UK and US," said Peter van der Putten, head of the AI Lab at Pegasystems and assistant professor of AI at Leiden University, speaking exclusively to Capacity.

"Global organisations dealing with European consumers will have to comply anyway, regardless of where they are based, and will default to applying the most strict and available regulation. Specifically, the US is behind in regulation. The White House Bill of Rights for AI, whilst providing sensible recommendations, at this stage is nothing more than a soft law policy paper with suggestions for self-regulation.”

The vote, which took place earlier this week, saw the legislation pass with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU member states on what the final law will look like.

The fine print

“Regulating Artificial Intelligence is one of the most important political challenges of our time, and the EU should be congratulated for attempting to tame the risks associated with technologies that are already revolutionising our daily lives," commented Dr Ventsislav Ivanov, AI expert and lecturer at Oxford Business College, in a statement to Capacity.

“Taking on the global tech companies and other interested parties will be akin to Hercules battling the seven-headed hydra. The AI Act that passed today is not perfect, but the EU has at least laid some ground rules for ChatGPT and created an EU AI Office to oversee future regulations."

Under the Act, AI developed and used in Europe must adhere to certain rules. Employing a risk-based approach, any AI deemed to pose an unacceptable level of risk to people’s safety would be prohibited; this includes systems used for social scoring.


The ban also extends to real-time remote biometric identification systems in public spaces; post-hoc remote biometric identification systems, with an exception for law enforcement prosecuting serious crimes, and only after judicial authorisation; and biometric categorisation systems that use sensitive characteristics such as gender, race and ethnicity.

Ivanov described the discussion around biometric security in particular as “one ongoing battle", adding that "it will take some time to agree on what uses are acceptable, if any".

Another concern around the use of AI and biometric data is cultural and racial bias. Capacity examined the theme of AI and diversity at length in 2021: although AI is not inherently biased, AI systems are only as varied as the teams who build them.

Therefore "… external, diverse and representative voices need to be present before, during and after the development process, so that we can ensure our systems will carry respect, diversity and inclusion forward,” said Carmen Del Solar, senior conversational AI engineer at Artificial Solutions, at the time.


Lowering the risk

In addition, the Act's classification of high-risk AI has been expanded to include any systems that 'pose significant harm to people’s health, safety, fundamental rights or the environment', as well as those 'used to influence voters and the outcome of elections and in recommendation systems used by social media platforms'.

In the case of general-purpose AI, developers of foundation models are required to assess and mitigate possible risks to health, safety, fundamental rights, the environment, democracy and rule of law, as well as register their models in the EU database before their release on the EU market.

Generative AI and innovation

Generative AI systems, such as those built on ChatGPT, will have to comply with transparency requirements: disclosing that content was AI-generated, helping distinguish so-called deep-fake images from real ones, and ensuring safeguards against generating illegal content.

To its credit, the Act is, in van der Putten's words, 'a sensible, outcome oriented, regulatory framework' that regulates AI not at the general technology level, but at the level of specific uses of AI.

"This is key as AI can be used for good and bad purposes. The act keeps the definition of AI very broad, as a tool for autonomous decision-making using technologies such as machine learning, predictive statistical methods, rule-based systems and generative AI. It classifies AI systems into different categories based on their purpose and risk to do harm, from prohibited to high risk, intermediate level and low risk AI applications."

In an attempt to foster greater innovation, exemptions have been included for research activities and AI components delivered under open-source licences. The EU AI Office has also been given new powers to monitor how the AI rulebook is rolled out.

"Some technology players do make statements that regulations stifle innovation, or even state they may no longer operate in certain markets," said van der Putten.

"This regulation versus innovation framing is a false dichotomy. Regardless of regulation, there is no long-term sustainable future for irresponsible use of AI, and sensible AI regulation will reward and foster innovation of trustworthy AI that benefits both consumers and companies alike.”
