Is the AI Honeymoon over or is the good stuff just beginning?

ChannelBytes

Life without technology is unimaginable, and it’s not just teenagers who panic when there’s no connectivity. Admit it! When was the last time you went more than 10 minutes without checking your phone… or raged when the internet was slow?

It’s become so normalized to believe that the latest tech will be better, offer more benefits, and make our lives easier. How often do we stop to think things through and evaluate whether that’s genuinely the case? We get so enamoured by the next shiny thing that we blindly accept that the promises are true.

Last year, when ChatGPT put AI into the hands of everyday people, they pounced on the opportunity to see what generative AI could do. Suddenly everyone was pumping out content and marvelling at how quick and easy it was to produce.

Then it was almost as if companies suddenly got FOMO, thinking that if they couldn’t offer an AI element in their products, they’d be considered old school. Now you can’t find a platform where AI isn’t offering to help.

Currently there are 14 comparable alternatives to ChatGPT on the market – and that’s an AI-generated answer. Given how quickly these tools have evolved, it’s natural to assume that AI is the future of tech. But do we really have a holistic view of what AI can do? For better or worse?

AI isn’t new but it’s evolving faster than most of us can keep up with. That on its own makes some cautious, and with good reason. Is AI safe? Is it accurate? Is it biased? How do we even define what safe or biased looks like, or find the answers to those questions?

Being ChannelBytes, we’re not ones to shy away from a challenge or a debate. As much as we love the tech advantage, we’re well aware that it’s an industry rife with risks and vulnerabilities. Salespeople are quick to punt the advantages, but we’re taking a considered look at the dark side that no one wants to talk about.

 

The AI advantage – what are you really gaining for your business?

 

Speed

In the quest for greater efficiency, the ability to do more, much faster, is a distinct advantage. There’s little doubt that this is the primary advantage of AI. Humans don’t come close to the processing ability of AI, especially when it comes to numbers. What would take an analyst days to work through manually, AI can process in minutes. An advantage for sure – at least for companies that could afford to invest in developing AI.

Now, with generative AI becoming more integrated into solutions, the benefit of speed extends far beyond numbers. For example, when teams are tasked with a project but don’t know where to start, a prompt to generative AI can deliver ideas and help give direction. This saves hours of procrastination and can help focus brainstorming sessions on the specific ideas deemed most workable.

 

Scale

Added to the speed at which AI can operate, its forte is dealing with massive data sets. Where large volumes of data quickly become overwhelming for humans, for AI, the more data the better. For decades companies have collected data, most of which was filed away and forgotten because it was too difficult to access and too cumbersome to process. This is now changing.

There’s a growing trend for companies to digitize their data. Once this is achieved, AI can be applied in ways that extract more value from that data. Historic data can be leveraged for forecasting and budgeting. Insights can be used to train AI to look for trends and identify opportunities to create efficiencies.

 

Broadening use cases

Building on the advantages of both speed and scale is the broadening of use cases for AI. It’s not just about numbers and massive data sets, but about how to use the data available more effectively. Acceptance and adoption of AI are rapidly increasing, fuelling further growth in how AI is being applied across different industry sectors.

In manufacturing, for example, AI is being used to optimize operational times of machinery, identify maintenance issues, and even manage materials inventory. In e-commerce, AI is learning from customer behavior to improve the customer experience. In regulated industries, AI can be used to support compliance. Of specific interest to many companies is how AI can also help advance security by detecting potential threats.

Is it advantageous to use AI? The popular notion is: “If you don’t, you’ll be left behind.” The problem is that those advantages aren’t only leveraged for the good of businesses and society. Not everyone wants to play nice, and believing that AI is entirely without risk can leave a business vulnerable.

 

What are the unanswered AI questions?

We’re so attuned to believing that tech makes life easier that it’s natural to focus on the advantages, most likely because we can see those at work. But the risks? They’re harder to define.

AI has evolved so rapidly that we’re not even sure what questions to ask, never mind finding the right answers. Voices suggesting that AI development should pause until some form of governance or ethics is in place have largely been ignored. Business is business, after all, and AI is big business.

Dollar signs aside, what are some of the concerns, and how worried should we be about them?

 

Copyright

With generative AI you can, without any skill at all, create graphics and images. It’s fantastic to be on the receiving end of that output, but how was the AI trained to produce that imagery? AI can’t create from nothing. It needs data, and where better to find that data than what’s already in the public domain?

Artists and creatives around the world started to raise their concerns, while businesses, in their quest to become early adopters, fell into the trap of exposing themselves. Samsung found this out the hard way when its engineers used ChatGPT to help write code. Without realizing it, they fed proprietary data into the AI, putting it outside the company’s control.

Since then, companies have started to implement policies governing the use of generative AI solutions, but is it enough to keep data safe? And what about creators – people whose livelihoods are based on creating something original of value? With AI learning from every available source, how is originality to be defined? How do people and companies protect what’s unique while trying to market it commercially?

 

Ethics

This is another really tricky one to debate. It’s one of those “don’t start that conversation” topics that can get heated really quickly. We might like to think that it’s a clear-cut case of what’s right and what’s wrong. Using tech for good should be simple. Should be, but it’s not.

Defining good and acceptable uses of AI is subjective. Different people with different motives can have entirely opposite perspectives, so how can we hope to come to any form of consensus? There are global organizations working towards an ethics framework, and major tech companies have made token commitments to be part of the process. But are we making any progress?

 

Bias

AI can learn just about anything, but what it produces is very telling about how it’s being programmed and trained. Bias is a very real problem with AI, especially when you consider the potential to influence opinion. Hear a single perspective for long enough and it becomes truth. When it’s no longer recognized as bias, it becomes an even bigger problem.

There’s an interesting story told by Dr Joy Buolamwini – an advocate for equitable and accountable AI policy. She was working on facial recognition software, but when she tested it on herself, a Black woman, it didn’t detect her face. Curious, she put on a white Halloween mask, and the software then picked up her face, but identified her as a man. Bias? No question about it, but digging down into the code to find where that bias was rooted was another challenge entirely.

 

Security and governance

There are few industry sectors where AI isn’t already having an impact. Should there be? Given the potential for bias and exposure, and with no clear ethics framework in place, should there be protection for vulnerable populations – like children?

As companies roll out their AI integrations, how is governance factoring into it? There are currently no formal global or national guidelines relating to AI, only debate. Using AI to help improve security and identify threats can be a noble cause, but just as many ill-intentioned people and organizations are using the same AI to identify vulnerabilities to exploit. Use tech for good and gain the advantages, but don’t forget that it can also be used against you.

One of the most challenging aspects of security is trying to mitigate unknowns. You can use AI to help develop code quickly, but do you know how that code got there? Can you identify biases or vulnerabilities when the AI was left to figure it out on its own?

 

All in or out with AI?

The development of AI is not stopping or slowing; if anything, it’s continuing to accelerate and integrate into everyday systems. Does this mean greater advancement? The answer may depend on your perspective. Can you ignore it? Not likely. AI is already learning from your clicks, shares, and searches.

The futuristic AI concept is a reality that most individuals are ill-equipped for. We want the advantages, but we don’t understand all the risks. It’s hard to when you don’t really know what they are or how they could evolve. It’s a tricky place to be in, but also a great opportunity.

There’s a collective need to create governance and to identify bias and risks. Being informed and staying aware is the best way to do that. Here at ChannelBytes, we love tech and how it can be leveraged to gain an advantage. AI is the next chapter, and one we all have a stake in.

Curious to see how it evolves?

Stay tuned and subscribe to ChannelBytes for more tech news and opinions.

Want to be featured on ChannelBytes?