Evan Schuman has covered IT issues for a lot longer than he'll ever admit. The founding editor of retail technology site StorefrontBacktalk, he's been a columnist for CBSNews.com, RetailWeek, Computerworld, and eWeek, and his byline has appeared in titles ranging from BusinessWeek, VentureBeat and Fortune to The New York Times, USA Today, Reuters, The Philadelphia Inquirer, The Baltimore Sun, The Detroit News and The Atlanta Journal-Constitution. Evan can be reached at eschuman@thecontentfirm.com and followed at twitter.com/eschuman. Look for his blog twice a week.
The opinions expressed in this blog are those of Evan Schuman and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
As US representatives try to negotiate with Japan and the Netherlands to deny China the tools to make faster chips for AI work, some observers doubt they will succeed.
Controlling genAI is critical for IT leaders. But is there any effective way to do it?
It’s bad enough when an employee goes rogue and does an end-run around IT, but when a vendor does something similar, the problems can be far worse.
Given the plethora of privacy rules already in place in Europe, how are companies with shiny, new, not-understood genAI tools supposed to comply? (Hint: they can’t.)
From unfettered control over enterprise systems to glitches that go unnoticed, LLM deployments can go wrong in subtle but serious ways.
When McDonald's suffered a global outage in March that prevented it from accepting payments, it issued a lengthy statement about the incident that was vague and misleading, yet still allowed many of the technical details to be figured out.
Wall Street’s obsession with quarterly earnings has made it extraordinarily difficult for most enterprises to spend on long-term investments, or even mid-term investments.