Advancing AI and Cybersecurity in 2025 – How will success be defined?

ChannelBytes

There are a number of trends predicted for AI and cybersecurity in 2025. The problem is that if you’re looking to trends to try to determine the risks or opportunities for your business, there’s little that provides a clear direction.

If anything, the trends indicate a rocky road ahead. With so many factors pulling in opposite directions, and so many unknowns, it’s questionable whether companies even have a real chance of keeping up.

As much as AI is advancing and augmenting cybersecurity capabilities, it’s placing equally powerful capabilities in the hands of threat actors. It’s an inconvenient truth that whatever is developed for good can invariably be used for bad. More concerning is that what’s developed in the name of progress can also complicate things to the point where they become a nightmare to solve and manage.

Take, for example, agentic AI. Designed to make decisions independently and adapt to changing circumstances, it promises to improve automation and efficiency. But what if these agents go rogue? What if they’re hijacked and it takes days to discover that they’re operating maliciously? This is not a far-fetched scenario. AI is already being used to create metamorphic code that constantly changes to avoid detection; combined with agentic AI, that could be rather dangerous.

Additionally, there’s still much work to be done on ethics in AI, reducing bias and defining the parameters that should govern AI deployment. Everyone agrees it’s an important discussion to have, yet the focus seems to be more on the race to show what could be done with AI rather than what should be.

Will AI in cybersecurity change that?

For companies, AI will continue to be a major part of both strategy and compliance. As governance and regulation eventually catch up, companies will be subject to more rigid data protection and reporting requirements. Better to lay the groundwork for that now and build on it as the AI landscape evolves than to be caught playing catch-up.

As for strategy, conversations are likely to focus on which AI to invest in to improve the company’s security posture. But the need for skills in this area should not be forgotten. AI can do a lot, very efficiently, but humans are still needed. The gap in expertise capable of programming and managing AI integration, and of advancing cybersecurity, is widening.

If companies are to have any hope of keeping up with cybersecurity risks and AI advancement, skills investment is an important part of that. We operate in an advanced, interconnected world, and it only takes one vulnerability to cause chaos. Last year’s CrowdStrike incident is a prime example.

As much as AI is advancing capabilities, humans are still needed in the driver’s seat. Perhaps the success of 2025 will be determined not by how much more AI can do, but by how skills evolve so that we can make the best use of AI while making more progress in both governance and responsible development.
