3 Big Generative AI Problems Yet To Be Addressed

The adoption of generative AI is potentially more significant than the introduction of the internet. It is already disrupting most creative efforts, and it isn't nearly as capable as it will be by the end of the decade.

Gen AI will force us to rethink how we communicate, how we collaborate, how we create, how we solve problems, how we govern, and even how and whether we travel — and that is far from an exhaustive list. I expect that once this technology reaches maturity, the list of things that have not changed will be far shorter than the list of things that were.

This week, I’d like to focus on three things we should begin discussing that represent some of the bigger risks of generative AI. I’m not against the technology, nor am I foolish enough to suggest it be paused, because pausing it now would be impossible.

What I suggest is that we begin to consider mitigating these problems before they do substantial damage. The three problems are data center loading, security, and relationship damage.

We’ll close with my Product of the Week, which may be the best electric SUV coming to the market. I’m suddenly in the market for a new electric car, but more on that later.

Data Center Loading

Regardless of all the hype, few people are using generative AI yet, let alone using it to its full potential. The technology is processor- and data-intensive, yet it is also very personally focused, so having it reside only in the cloud will not be feasible, mainly because the scale, cost, and resulting latency would be unsustainable.

Much as we have done with other data- and performance-focused applications, the best approach will likely be a hybrid in which the processing power is kept close to the user. Still, the massive data sets, which will need aggressive updating, will have to be loaded and accessed more centrally to protect the limited storage capacities of client devices, smartphones, and PCs.

But we are talking about an increasingly intelligent system that will, at times, require very low latency, such as when it is used for gaming, translation, or conversation. How the load is divided without damaging performance will likely determine whether a particular implementation succeeds.

Achieving low latency won’t be easy because, while wireless technology has improved, it can still be unreliable due to weather, tower or user placement, maintenance outages, man-made or natural disasters, and less-than-complete global coverage. The AI must work both online and offline while limiting data traffic and avoiding catastrophic outages.
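The hybrid split described above can be pictured as a simple routing decision: run inference on the device when the network is unavailable or too slow for the task, and send it to the data center otherwise. The following is a minimal, hypothetical sketch of that logic; the function name, thresholds, and backend labels are illustrative assumptions, not any vendor's actual API.

```python
def choose_backend(task_latency_budget_ms: float, cloud_rtt_ms):
    """Pick where to run a generative-AI request (illustrative only).

    task_latency_budget_ms: how much delay the use case tolerates
        (e.g., tens of milliseconds for gaming or live translation,
        much more for batch content generation).
    cloud_rtt_ms: measured round-trip time to the data center,
        or None when the device is offline.
    """
    if cloud_rtt_ms is None:
        return "local"   # offline: fall back to the on-device model
    if cloud_rtt_ms > task_latency_budget_ms:
        return "local"   # the network is too slow for this task
    return "cloud"       # use the larger, fresher centralized model


# Example decisions under assumed conditions:
print(choose_backend(50, None))     # no connectivity -> on-device
print(choose_backend(50, 120.0))    # slow link, tight budget -> on-device
print(choose_backend(5000, 120.0))  # batch task tolerates the latency -> cloud
```

A real implementation would also weigh battery, model freshness, and privacy, but the core trade-off is the same: the latency budget of the use case decides how much of the workload can live in the data center.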

Even if we could centralize all of this, the cost would be excessive, though we do have underused performance in our personal devices that could offset much of that expense. Qualcomm is one of the first firms to flag this as a problem and is putting a lot of effort into fixing it. Still, I expect it will be too little, too late, given how fast generative AI is advancing and how relatively slowly technology like this is developed and brought to market.