Nvidia CEO Jensen Huang said the future of artificial intelligence (AI) will be services that can “reason,” but reaching that stage will first require reducing computing costs.
Next-generation tools will be able to answer queries by working through hundreds or thousands of steps and reflecting on their own conclusions, Huang said during a podcast hosted by Arm Holdings Plc CEO Rene Haas. That capacity to reason will set future software apart from current systems such as OpenAI’s ChatGPT, which Huang says he uses every day.
Nvidia will lay the groundwork for this progress by increasing the performance of its chips two to three times each year while keeping cost and power consumption at the same level, Huang said. That pace, he said, will change how AI systems handle inference — the stage at which a trained model spots patterns and draws conclusions.
“We are able to achieve incredible cost reductions for intelligence,” he said. “We are all aware of the value of this. If we can get the cost down significantly, we could do things at the time of inference like reasoning.”
The company, based in Santa Clara, California, holds more than 90 percent of the market for so-called accelerator chips — processors that speed up AI workloads. It has also branched out into selling computers, software, AI models, networking and other services, part of an effort to get more companies to embrace artificial intelligence.
Nvidia faces attempts to loosen its grip on the market. Data center operators such as Amazon.com Inc.’s AWS and Microsoft are developing in-house alternatives, and Advanced Micro Devices, already Nvidia’s rival in gaming chips, has emerged as an AI competitor. AMD plans to share the latest information about its AI products at an event on Thursday.
© 2024 Bloomberg LP
(This story was not edited by NDTV staff and was automatically generated from a syndicated feed.)