- ASICs are far more efficient than GPUs for inference, much as they are for cryptocurrency mining
- The market for AI inference chips is expected to grow sharply through the end of this decade
- Hyperscalers like Google have already jumped on the bandwagon
Nvidia, already a leader in AI and GPU technologies, is entering the Application-Specific Integrated Circuit (ASIC) market to address growing competition and changing trends in AI semiconductor design.
The global rise of generative AI and large language models (LLMs) has significantly increased demand for GPUs. Nvidia CEO Jensen Huang confirmed in 2024 that the company will recruit 1,000 engineers in Taiwan.
Now, as reported by Taiwan's Commercial Times (originally published in Chinese), the company has established a new ASIC department and is actively recruiting talent.
The rise of inference chips
Nvidia’s H-series GPUs are widely adopted for training AI models. However, the AI semiconductor market is undergoing a shift toward inference chips, typically built as ASICs.
This shift is driven by demand for chips optimized for running AI applications in production, such as large language models and generative AI. Unlike general-purpose GPUs, ASICs offer superior efficiency for inference tasks, much as they do for cryptocurrency mining.
According to Verified Market Research, the AI inference chip market is expected to grow from a valuation of $15.8 billion in 2023 to $90.6 billion by 2030.
Major tech players, including Google, have already embraced custom ASIC designs; Google’s “Trillium” AI chip was made generally available in December 2024.
The shift to custom AI chips has intensified competition among semiconductor giants. Companies like Broadcom and Marvell have soared in relevance and stock value as they partner with cloud service providers to develop specialized chips for data centers.
To stay ahead, Nvidia’s new ASIC division is tapping local expertise, recruiting engineers from leading Taiwanese companies such as MediaTek.