Welcome to The Long View—where we peruse the news of the week and strip it to the essentials. Let’s work out what really matters.
This week: Nvidia’s CEO grabs headlines by saying your career is toast, and Intel is still fighting.
1. ‘It’s so easy to use’
First up this week, “Jensen” Huáng Rénxūn’s Computex keynote: He says AI is coming for many jobs, including software development. According to the brave billionaire, anyone can now tell computers what to do just by talking to them.
Analysis: Utter garbage, obvs.
We’ve been here countless times before: Magical new technology makes programming obsolete. But no-code nirvana never happened—mainly because those making the predictions were supremely ignorant of the software development arts. Huang is no exception—and he’s simply trying to sell more GPUs.
Eleanor Olcott and Madhumita Murgia: Nvidia chief Jensen Huang says AI is creating a ‘new computing era’
“Demand has soared”
Nvidia’s chief executive hailed a new era of computing in which “everyone is a programmer,” [saying AI] had dramatically lowered the barrier to entry to computer coding: “We have reached the tipping point of a new computing era,” Huang said … arguing that AI now enabled individuals to create programs simply by plugging in commands.
…
ChatGPT can generate code, cutting the human labour required to create software, a development set to revolutionise programming. Huang’s speech to the Computex conference in Taipei came days after Nvidia revealed forecasts of rapid sales growth.
…
Demand has soared for Nvidia’s data centre chips, including the H100, an advanced graphics processor unit (GPU) that substantially cuts the time required to train … models such as ChatGPT. … Huang also announced a new AI supercomputer platform called DGX GH200 to assist tech companies in building generative AI models.
Ben Blanchard: Everyone can now be a programmer
“You just have to say something”
Artificial intelligence means everyone can now be a computer programmer as all they need to do is speak to the computer … Jensen Huang said on Monday, hailing the end of the “digital divide.” … Speaking to thousands of people at the Computex forum … Huang, who was born in southern Taiwan before his family emigrated to the United States when he was a child, said AI was leading a computing revolution. “There’s no question,” … he said in a speech, occasionally dropping in words of Mandarin or Taiwanese to the delight of the crowd.
…
“The programming barrier is incredibly low. We have closed the digital divide. Everyone is a programmer now—you just have to say something to the computer,” he said. “The rate of progress, because it’s so easy to use, is the reason why it’s growing so fast. This is going to touch literally every single industry.”
Catherine Shu: The elephants in the room
“Drought”
Several topics were barely hinted at. … The fact of the matter is that, amid issues such as geopolitical tensions and AI-induced chip shortages, the semiconductor industry is in a lot of turmoil: …
- As relationships between the U.S. and Chinese governments continue to get frostier, things are getting messy in the semiconductor industry. …
- Employee attrition and lack of talent in general has the potential to be a big headache for semiconductor companies. …
- Generative AI computing runs on chips, mostly GPUs made by Nvidia, but those are getting increasingly scarce. …
- Meanwhile, startups and large companies like Intel and NTT are working on alternatives like photonic chips [but] it may be years before photonic tech becomes mainstream. …
- Taiwan is undergoing yet another drought. The previous one in 2021 had a negative impact on the country’s semiconductor manufacturing because producing chips takes a huge amount of water.
But why are we “suddenly” so interested in GPUs? John Burek explains: Big Takeaway
Not long ago … $10 million … would buy you 960 CPU-based servers, burning 11 gigawatt hours … to train one … LLM. The same money would now buy you 48 GPU-based servers using about a third of the energy … with the capability of training a whopping 44 LLMs. Conversely … training one LLM would now run you about $400,000 instead of the original $10 million.
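For the arithmetically inclined, here’s that comparison restated as a quick back-of-the-envelope. Every input is a keynote figure quoted above (vendor claims, not independent measurements):

```python
# Nvidia's quoted CPU-vs-GPU training comparison, restated as ratios.
cpu = {"cost_usd": 10_000_000, "servers": 960, "energy_gwh": 11.0,     "llms_trained": 1}
gpu = {"cost_usd": 10_000_000, "servers": 48,  "energy_gwh": 11.0 / 3, "llms_trained": 44}
gpu_cost_for_one_llm = 400_000  # the "conversely" figure quoted above

print(f"Energy per LLM: {cpu['energy_gwh'] / cpu['llms_trained']:.2f} GWh on CPUs "
      f"vs {gpu['energy_gwh'] / gpu['llms_trained']:.2f} GWh on GPUs")
print(f"Cost to train one LLM: ${cpu['cost_usd']:,} on CPUs vs "
      f"${gpu_cost_for_one_llm:,} on GPUs, "
      f"roughly {cpu['cost_usd'] / gpu_cost_for_one_llm:.0f}x cheaper (per the claim)")
```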
It’s snake oil, thinks SilverBirch:
This is really interesting. Because if … Huang believes what he says, then Nvidia stock should be worthless shortly. [If] he genuinely believes this then we’re about 6 weeks away from Nvidia firing all their software engineers, hiring in a bunch of cheap idiots to talk to their magic AI, and as a result completely tank[ing] the company.
…
All the real running in the AI story has been done by incredibly well-qualified engineers with cumulative millennia of experience. … This trend actually makes engineers more valuable.
…
This is the oldest trick in the book: “You don’t need to do X, we can automate it!” … It turns out they’re just doing X with a markup.
And XXongo doesn’t mince words:
Nvidia is clueless. … “Do that thing I want” turns out not to be an actual command.
…
Programming means being able to clearly envision and sequence the logical flow of a process. If you don’t know how to turn your requirements into a clear sequence of operations, you can’t “program” even if you have an AI.
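To make XXongo’s point concrete, here’s a trivial sketch of the gap between “say something to the computer” and an actual sequence of operations. The task, field names, and tie-breaking rule are all hypothetical, purely for illustration:

```python
# "Remove the duplicate customers" sounds like a single command, but turning
# it into code forces decisions the English request never made.
import csv

def dedupe_customers(path: str) -> list[dict]:
    seen: dict[str, dict] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = row["email"].strip().lower()  # decision 1: "duplicate" means same email
            if key not in seen:                 # decision 2: keep the first record we see
                seen[key] = row
    return list(seen.values())                  # decision 3: survivors keep file order

# Three decisions, none of which were in the original request; that is the
# "clear sequence of operations" part that doesn't go away.
```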
With a more nuanced reaction, here’s dtagames:
Programming has always been about “telling a computer what to do.” That aspect isn’t changing with LLMs. What’s changing is the type of language we use to do that telling.
…
The hard parts are architecture, logic, UI development, and debugging. While LLMs can be a great help to programmers who are already doing that work, they’re not going to turn someone into a programmer who does not already have an aptitude for these.
This colorful metaphor comes courtesy of enriquevagu:
Typewriters mean now everyone can be a writer.
2. Intel VPUs Everywhere
Not everything AI at Computex was about Nvidia. For example, Intel unveiled plans to add an AI coprocessor core to all its 14th-gen SoCs.
Analysis: It’s a vision thing
A VPU is similar to an NPU or TPU. Dedicated silicon like this appears far better suited to AI workloads than Nvidia’s legacy GPU hardware.
Simon Sharwood says: All Meteor Lakes get a VPU
“Offloaded to VPUs”
Intel will add the VPU tech it acquired along with Movidius in 2016 to all models of its forthcoming Meteor Lake. … Curiously, Intel didn’t elucidate the acronym, but has previously said it stands for Vision Processing Unit. [It is] dedicated AI silicon.
…
The VPU gets to handle “sustained AI and AI offload.” CPUs will still be asked to do simple inference jobs with low latency. … GPUs will get to do jobs involving performance parallelism and throughput. Other AI-related work will be offloaded to VPUs.
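In code, that division of labor looks like picking a device target per workload. Here’s a minimal sketch using OpenVINO’s runtime; the “NPU” device name for the Meteor Lake VPU and the model.xml file are assumptions for illustration, not anything Intel has confirmed:

```python
# Route the same model to different Intel silicon depending on the workload.
from openvino.runtime import Core

core = Core()
print("Devices OpenVINO can see:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

# Simple, low-latency inference stays on the CPU...
cpu_model = core.compile_model("model.xml", "CPU")

# ...parallel, throughput-heavy jobs go to the GPU...
gpu_model = core.compile_model("model.xml", "GPU")

# ...and sustained background AI gets offloaded to the VPU (assumed exposed as "NPU").
if "NPU" in core.available_devices:
    vpu_model = core.compile_model("model.xml", "NPU")
```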
But what is it? DamnOregonian explains:
Vision Processing Unit [is a] stupid name—and one I haven’t heard used since they were popular on old TI OMAP parts. What it really is, is a chunk of dedicated MMA [matrix multiply-accumulate] hardware that makes running [neural network] inference engines really efficient and snappy.
These days, people like to call them NPUs (Apple) [or] TPUs (NV, Google). They’re obscenely faster than a normal GPU shader core at this particular line of work, and much more efficient.
It’s an existential question, thinks mevets:
Chipzilla is name-challenged. … Recently we had Raptor Lake, which poses great imagery: one of the last line of dinosaurs before they transformed into a modern species. Now Meteor Lake, which summons an idyllic lake, where dinosaurs play in the water as a world-changing meteor descends on the planet.
The Moral of the Story:
Too many of us are not living our dreams because we are living our fears
—Les Brown
You have been reading The Long View by Richi Jennings. You can contact him at @RiCHi or [email protected].
Image: Christian Cueni (via Unsplash; leveled and cropped)