Who is Responsible for Responsible AI? Part 2

Written by Chris Lee

August 24, 2023

AI

How Tech Giants are adapting to the disruption of AI

No tech company wants to be seen to be lagging behind. Finding the balance between being innovative and being prudently cautious can be challenging, especially when you see competitors forging ahead. Most will recall the embarrassing demo when Google launched Bard, its answer to OpenAI's ChatGPT. It showed how rushing to stay ahead and remain competitive can have a downside, both for the business and for its many users.

The buzz around generative AI may have died down a bit since the start of the year, but its development is still progressing at breakneck speed. What responsibility are companies taking for the way they develop AI and for its impact? And who will hold them to account?

Is there such a thing as responsible AI?

Even industry leaders have cautioned about the potential risks of AI and the need for some form of governance or code of ethics. Tech giants such as Microsoft, IBM, Samsung and Google have all engaged in research to try to determine the impact on individual users, businesses and society as a whole. Globally, organizations and communities are being formed to understand the potential harms and benefits of AI.

The International Research Centre for AI and Governance is one such body, formed with the objective of ensuring AI is developed for good, both for humanity and for the environment. It's positioning itself as a transdisciplinary and cross-cultural organization. All admirable goals. But will these kinds of mutual interests bring experts together, or will they merely result in more bias?

Getting around bias in AI

As much as generative AI applications are geared towards creating efficiencies, it's widely acknowledged that one of their biggest challenges is bias. AI learns from what it knows and reinforces that learning by adding more of the same. The world is already struggling with inequalities in which the "haves" easily gain advantage while the "have-nots" struggle. Even when given a so-called equal opportunity, they're not as equipped to take advantage of it.

Overcoming bias while exploring the potential of generative AI comes down to priorities. Is development being approached with security and user benefit in mind, or is it all about efficiency, brand reputation and profit? Is there an awareness that goes beyond the application itself to the broader impact on society as a whole? Most business leaders are well aware of the "right" answer, but will they do the "right" thing?

 

 
