Spend five minutes on LinkedIn and half your feed will be posts about how a connection has used AI to become more efficient. There are prompt cheat sheets, reviews of the latest AI hacks, and endless commentary on how to make AI work for you.
The proof is there: AI does create efficiencies. So why are people still hesitant to adopt it? Isn’t any tool that helps people do their jobs more effectively a good thing? As strange as it may sound, many remain wary of AI.
Is it doubt, or fear that their jobs will be taken over by AI? Is there genuinely any substance to their concerns, or are they simply naysayers who will be left behind because they didn’t want to adapt?
Why not AI?
There’s a general belief that those not adopting AI aren’t the progressives or forward thinkers, and that they’re doomed to become dinosaurs. But could it be that they’re actually looking much further ahead, beyond the immediate advantages, and evaluating the bigger picture of what a future with AI could look like?
One of the major concerns that keeps resurfacing about AI is the lack of transparency. Being told it’s good, that it gives you an advantage, is not enough to convince the sceptics. They want to know how data is being used. How are the models being trained? What is happening to all of the inputs and outputs? How broadly are they being shared? In truth… we don’t know.
It’s one thing to seek out the benefits and advantages and ride the wave, but others see it as getting on a raft and heading off, not knowing that flood waters could be on the way. And it’s not just the unknown that keeps people wary. It’s that AI developers seem to pay lip service to regulation, governance and transparency rather than taking real action.
And as history tells us, the experts aren’t always right. Back in the 1930s and ’40s, cigarettes were the new thing. Bizarrely, they were physician-approved; doctors even recommended smoking for its supposed calming effect on the body. It took almost half a century for the real facts about smoking’s harm to health to be revealed, and for medical opinion to be reversed. The point is, just because supposed authorities rave about a new thing doesn’t mean it is a good thing for people.
It may not be a good thing for the planet either, and this is another major concern. The vast resources required to build and run AI data centres will push us further beyond planetary boundaries at a time when we desperately need to be pulling back. That’s not to say data centres can’t be sustainably built and operated, but doing so takes deliberate effort, something that’s been noticeably missing from the AI development agenda.
What’ll change opinion?
It’s doubtful whether more AI features, or even fear-mongering that they’ll be left behind, will change people’s opinion. Rather, a robust, enforceable framework for AI governance, ethics and regulation might make people less wary. That, together with more sustainable development of data centres where circular economy principles are deployed, stands a better chance of success. The naysayers aren’t blind to the value of AI technology; they just want to see it developed in a responsible and sustainable way.