It’s the latest buzzword in AI advancement. A breakthrough in AI applications that is likely to have a major impact. The questions are: as exciting as Agentic AI is, are we ready to let it make decisions for us… and should we?
We’ve become accustomed to GenAI, and its adoption is becoming mainstream in business. Agentic AI takes GenAI a step further: rather than just generating ideas and drafting concepts, it is designed to make decisions and take actions autonomously. It’s being sold on the promise of not requiring any human guidance or input at all.
For most companies looking to gain efficiencies, the problem is not a lack of data; it’s making better use of the data they already have. AI has always been good at processing structured data, less so unstructured data, because the learning parameters were difficult to pin down: it required a broader understanding of context, which was challenging. Hence the continued need for human intervention. AI would do the slog work, and humans would still handle the more complex analysis or give the final OK.
The only problem is that, by comparison, human processing is slow and inconsistent, making it less efficient. Given AI’s ability to process vast quantities of data quickly, enabling it to make decisions as well was a natural progression.
The idea is that Agentic AI will be able to operate independently. How efficient! Less work for humans to do. It will be able to learn from its environment, adapting its decision-making parameters in the process. This can benefit complex workflows where automation requires decisions based on what is happening at the time: think manufacturing processes, supply chains, online marketing campaigns that adapt to user behavior, or even making those annoying automated chatbots more useful.
All of those seem like beneficial use cases, but there is a small voice that questions: will it stop there?
There are guidelines and discussions on ethics regarding the development of AI. There are considerations of risks and acknowledgement that some of these could be very detrimental. But none of this is stopping or even slowing down the development and deployment of AI.
If we enable AI to make decisions for us, who gets to decide which decisions it makes? More importantly, what defines how those decisions are made, and whether the data used to make them is even accurate?
Most people believe in the benefits of tech – the efficiencies it offers, not to mention the business opportunities – and nobody wants to be a late adopter for fear of trailing behind competitors. Has this clouded our perception when it comes to embracing new technologies?
With the never-ending race to be part of the next big thing, tech companies are always on the lookout for new ways to apply their technologies. Are these always wholly beneficial, and how many people are stopping to ask if handing over control to make decisions for us is entirely a good thing?