U.S. entrepreneur Elon Musk recently told investors that his artificial intelligence startup xAI plans to build a supercomputer to power the next version of its AI chatbot, Grok, according to a report by The Information, which cited a presentation Musk made to investors.
Musk said he wants the supercomputer running by the fall of 2025. The report added that xAI may partner with Oracle to develop the massive computing system.
xAI could not be reached for comment, and Oracle did not respond to Reuters' request for comment. According to The Information, Musk said the planned cluster of interconnected Nvidia H100 graphics processing units (GPUs) would be at least four times the size of the largest GPU clusters in existence today.
Nvidia's H100 GPUs dominate the data center chip market for AI applications, but high demand makes them difficult to obtain.
Musk founded xAI last year as a competitor to Microsoft-backed OpenAI and Alphabet's Google. Musk is himself a co-founder of OpenAI. Earlier this year, he said that training the Grok 2 model took about 20,000 Nvidia H100 GPUs, and that the upcoming Grok 3 model will require around 100,000 of the chips.