OpenAI Explores AI Chip Development Strategy
In a strategic move to address the chip shortage hampering AI model training, OpenAI, a well-funded leader in the AI industry, is weighing whether to develop its own AI chips. Internal discussions have been under way since last year, reflecting the growing urgency of the scarcity holding back AI model development. OpenAI is exploring several paths, including acquiring an established AI chip maker or designing chips in-house.
CEO Sam Altman's Emphasis on AI Chip Acquisition
OpenAI CEO Sam Altman has made acquiring more AI chips a top priority for the company, according to reports from Reuters.
Currently, OpenAI, like its competitors, relies heavily on GPU-based hardware to develop advanced AI models such as ChatGPT, GPT-4, and DALL-E 3. GPUs' parallel processing capabilities make them exceptionally well suited to training today's most sophisticated AI systems.
However, surging demand for generative AI models has strained the GPU supply chain. Microsoft, among others, has warned of potential service disruptions stemming from a severe shortage of the server hardware needed to run AI workloads, and reports suggest that Nvidia's highest-performing AI chips are effectively sold out until 2024.
OpenAI also depends on GPUs to run and serve its models, which requires clusters of GPUs in cloud computing environments. The cost of those GPUs, however, remains notably high.
An analysis by Bernstein analyst Stacy Rasgon suggests that if ChatGPT queries grew to one-tenth the scale of Google Search, OpenAI would need an initial investment of roughly $48.1 billion worth of GPUs, plus about $16 billion in chips annually to sustain operations.
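Estimates like this are back-of-envelope calculations: pick a target query volume, divide by per-GPU throughput, and multiply by unit cost. A minimal sketch of that style of estimate is below; every input (daily query volume, queries served per GPU per day, price per GPU) is a purely illustrative assumption, not Rasgon's actual figures.

```python
def chip_capex(queries_per_day: float,
               queries_per_gpu_per_day: float,
               cost_per_gpu: float) -> float:
    """Upfront GPU spend needed to serve a given daily query volume."""
    gpus_needed = queries_per_day / queries_per_gpu_per_day
    return gpus_needed * cost_per_gpu

# Hypothetical inputs: ~850M queries/day (one-tenth of an assumed
# 8.5B daily Google searches), each GPU serving ~500 queries/day,
# at ~$25,000 per data-center GPU.
capex = chip_capex(850e6, 500, 25_000)
print(f"Upfront GPU spend: ${capex / 1e9:.1f}B")  # ~1.7M GPUs, ~$42.5B
```

With these assumed inputs the model lands in the same tens-of-billions range as the Bernstein figure; the real analysis would account for model size, response length, and hardware mix, all of which move the answer substantially.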
In contemplating its own AI chips, OpenAI would follow a precedent set by other tech giants. Google trains its large generative AI systems on its Tensor Processing Units (TPUs); Amazon offers AWS customers its proprietary Trainium and Inferentia chips; and Microsoft is reportedly collaborating with AMD on an in-house AI chip named Athena, which OpenAI is currently testing.
OpenAI's robust financial position, buoyed by more than $11 billion in venture capital funding and nearly $1 billion in annual revenue, leaves it well placed to invest substantially in research and development. The company is also contemplating a share sale that could boost its secondary-market valuation to a staggering $90 billion, according to a recent Wall Street Journal report.
Nonetheless, the hardware sector, particularly AI chip development, poses formidable challenges and risks.
Last year, AI chip maker Graphcore saw its valuation fall by $1 billion after a prospective deal with Microsoft collapsed, prompting the company to consider job cuts amid an "extremely challenging" macroeconomic climate. Habana Labs, Intel's AI chip subsidiary, cut roughly 10% of its workforce. Meta's custom AI chip efforts have also run into trouble, leading it to abandon some experimental hardware projects.
Even if OpenAI commits to the arduous task of bringing a custom chip to market, the effort could take several years and cost hundreds of millions of dollars annually. Whether OpenAI's investors, Microsoft among them, have the appetite for such a daring and uncertain venture remains an open question.