Supercomputers Google uses to train its artificial intelligence models are “faster and more power-efficient” than similar tech developed by Nvidia, the company has claimed.
A recent scientific paper authored by Google engineers David Patterson and Norm Jouppi said its fourth-generation Tensor Processing Unit (TPU) – a custom AI accelerator chip which Google developed for use with its TensorFlow software – is up to 1.7 times faster and 1.9 times more power-efficient than Nvidia’s comparable A100 chip.
"Circuit switching makes it easy to route around failed components," the authors said. "This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of a machine learning model."
The paper also said that Google’s fourth-generation TPU outperforms the third-generation TPU by 2.1 times in speed and by 2.7 times in performance per watt.
Google recently launched its own chatbot named Bard, which is intended to rival OpenAI’s ChatGPT.
As competition ramps up in the AI space, a number of practitioners have called for a pause on training AI systems.
Over 1,000 people recently signed an open letter issued by the Elon Musk-funded Future of Life Institute, which warned of the risk of economic and political disruption caused by “human-competitive” AI systems.
Apple co-founder Steve Wozniak and Emma Bluemke, an Engineering PhD at the University of Oxford affiliated with the Centre for the Governance of AI, were among the letter’s signatories.