
In March 2024, the EU made history by approving a groundbreaking law on Artificial Intelligence. The legislation establishes unprecedented rules for the development and use of AI, with the goal of promoting transparency, accountability, and ethical use. It marks a major stride towards the responsible deployment of Artificial Intelligence on a global scale.

The Artificial Intelligence Act is a significant milestone not only for the EU but for the global community. It aims to establish a comprehensive framework for addressing the potential risks of artificial intelligence while fostering innovation and safeguarding fundamental rights. With the new law in place, it is important to understand its implications, particularly how it regulates AI tools such as ChatGPT and how it tackles problems like deepfakes.

The EU AI Act, as outlined by the European Parliament, prioritizes a human-centric approach to technology. It regulates AI systems according to the level of risk they pose, distinguishing minimal, limited, and high-risk categories. High-risk systems, such as those used in banking or critical infrastructure, are subject to stricter rules, including mandatory human oversight and monitoring. The Act also prohibits outright certain AI practices that could cause harm, such as social scoring systems.

Video credit: European Parliament (YouTube)

How does the new law address the use of tools like ChatGPT? Does it tackle the issue of deepfakes?

While the Act exempts certain AI applications, such as those used for military and defense purposes or for scientific research, it imposes requirements on others, including facial recognition tools used by law enforcement. It also addresses concerns about deepfakes by requiring artificially generated or manipulated content to be labeled and by promoting transparency in the development and deployment of generative AI systems.

What penalties are outlined in the Act for non-compliance? How have tech companies reacted to these regulations?

Non-compliance with the EU AI Act carries significant penalties, with fines ranging from 7.5 million euros for offenses such as supplying incorrect information to regulators up to 35 million euros for deploying prohibited systems. Despite mixed reactions from tech companies, the Act is set to become law in May 2024, with implementation phased in from 2025. As other countries observe the EU's approach to regulating AI, there is growing recognition of the need for similar measures worldwide, with the United States and China also taking steps to address the challenges posed by artificial intelligence.