Who is Responsible for Responsible AI? Part 1

Written by Chris Lee

August 15, 2023


Less than a year after ChatGPT and Bard put generative AI into the hands of ordinary people, there are few in commerce who haven't experimented with it. Yet people are divided on whether it's a good thing. Sci-fi movie horror scenarios aside, there are some legitimate concerns.

The pace at which generative AI is being developed and deployed is causing major disruption, and because there's no precedent governing the technology, it is largely unregulated.

Despite many claiming generative AI will create exponential efficiencies, there are concerns that if development continues unchecked, there's potential for disaster. But who is responsible for managing risk and regulating how we as humans interface with AI?

The challenge is that any form of regulation is a slow process. Even though the EU and China already have draft policies, they're focused on existing technologies. Those analysing the policies have two major concerns:

  • The first is that the pace of AI development is accelerating and continually evolving. By the time regulations are passed, they may no longer be relevant because the technology will have already evolved.
  • The second concern relates to enforcement. Even if the regulations are robust, what mechanisms will be used to monitor and enforce the application of AI?

If AI development won’t wait, what’s the alternative?

Currently, the tech industry knows significantly more about the capabilities and risks of AI than most governing agencies. This highlights the need for collaboration and the creation of industry-wide associations to facilitate it. It will also enable government to hear a unified voice from the industry, where concerns can be addressed collectively.

The tech industry is significantly better positioned to conduct research, provide guidelines for best practice, and propose regulatory frameworks, all of which can help speed up the policy-making process.

It can act as a sounding board for policy makers, able to provide insight into the complexities and impacts involved when attempting to regulate AI. The reality is that for every benefit AI has to offer, there will be threat actors looking for ways to leverage AI technologies for their own nefarious means.

Why tech industry participation is vital

The industry has a better understanding of the form these risks could take, as well as their potential impacts. As AI applications continue to develop, increased transparency and open communication can help build a broader understanding of, and response to, potential risks.

A more proactive approach could be to everyone's benefit. But will the industry be willing to collaborate, or will the desire for competitive advantage keep humanity in the dark? The answer to that question may determine whether responsible development of AI becomes a reality.

Stay tuned for Part 2.
