Do a quick scroll through LinkedIn and most posts will contain some reference to a person’s latest discovery with AI. People are fascinated by the capabilities and speed of AI, how quickly it’s evolving, and, most importantly, its potential.
Amid this freight train of enthusiasm, there are a handful of people reaching for the brakes. “Slow down,” they say. “Do we really know what we’re doing here? What about risk? What about regulation?” But the train is making so much noise they’re barely heard. Besides, all the passengers are too busy looking at the buttons in front of them, thinking, “I wonder what this can do?”
We know curiosity killed the cat, yet as humans we don’t have a good track record of learning from our mistakes. We may be aware of the potential risks, but they’re easy to dismiss. We’re so accustomed to accepting trade-offs that, with AI dangling all these “look what I can do” features, it’s easy to gloss over the risks.
It’s perhaps a good thing, then, that a few remain undeterred in creating a regulatory framework for AI. The EU was the first to introduce one, with the initial provisions of the EU AI Act taking effect in February 2025. The stated purpose of the Act is: “To make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
A key part of this framework has been establishing a tech-neutral, internationally accepted definition of AI, as well as identifying risks, especially those deemed unacceptable. Even a cursory read of the regulation gives the impression that the primary objectives are transparency and safety, rather than merely blocking further AI development.
The problem is that some of the specific AI applications banned in the EU are already in use. Biometric identification and classification of people, real-time remote biometric identification systems in public spaces, and cognitive behavioral manipulation, especially of minors and vulnerable people, are all deemed unacceptable risks.
Add to these the high-risk applications, which include AI technologies applied to the operation of critical infrastructure, medical devices, aviation, public services, law enforcement, or education, and you can start to see what all the fuss is about.
These are exactly the kinds of use cases AI developers are targeting because of the opportunities to create efficiencies. Regulation may well slow down development, and if there’s one thing those in the AI race definitely do not want to do, it’s slow down.
So where does that leave businesses, or the average user? Nobody wants to be left behind, but we don’t want to be in the dark either. If AI regulation calls for greater transparency, will developers play ball? Do we identify with the unacceptable or high-risk applications defined in the AI Act? Are they side effects of development that simply have to be accepted? And do we even get a choice about whether to use AI technologies?