Where will the billions go?

ChannelBytes

It’s not often that a founder says: “We need investment, billions of it!” and everyone immediately jumps on board, promising to open their wallets. Yet this was one of the headlines that accompanied the new government taking office: billions would be made available for AI.

Whether this was spurred on by the launch of DeepSeek, the Chinese answer to ChatGPT, or by companies looking to pounce on an opportunity to accelerate their own AI development, there’s been little clarity on the specifics of the investment.

What has been talked about is economic growth and the desire to be at the forefront of AI development. This is the main reason given for the plan to develop bigger, better data centers capable of powering AI advancement and innovation.

It’s interesting that one of the concerns cited around the development of the AI data centers is the amount of energy needed to power them, and that this could exceed current grid capacity. They’ll be built in Texas, but how they’ll be powered isn’t clear.

As with any new build there’s an opportunity to draw sustainability into the design: elements such as heat capture or solar (there’s certainly enough sunshine in Texas). These design considerations have the potential to improve energy efficiency, help power the data centers, and possibly even supply surrounding communities, without placing additional strain on local grids.

Will this even come into consideration, or is it all about being seen as an AI leader, one that looks only at the tech gains and nothing else, least of all environmental or resource risks? Reading some of the responses from these AI innovators, one might get the impression that risk is a four-letter word.

As much as the need for innovation remains the primary focus of AI development, there are still very real concerns about privacy and risk, especially as there are so many unknowns. The innovators see regulation as a hindrance, claiming it slows down innovation; besides, innovation will always carry risk, so it should simply be accepted. Perhaps, but not everyone agrees.

The challenge is that the risk factors widespread AI deployment poses to companies, communities and individuals have yet to be quantified or qualified. If the innovators want to continue to accelerate innovation, shouldn’t that be accompanied by a level of responsibility for what they’re building? Those pushing for regulation and governance are not against innovation; they’re aiming to understand and build greater awareness of potential risks.

Talking about economic growth and innovation gets people excited and seemingly brings in the money. Will any of that money be spent on analysing risks and building strategies to reduce them? Or is that going to be left for someone else to figure out?

In all the news reports expounding on the benefits of these new data centers, risk doesn’t seem to be a priority. But if the project is worth $500 billion, surely some capital can be spared to ensure this mega advancement in AI is handled responsibly?
