According to industry news, NVIDIA has established an Application-Specific Integrated Circuit (ASIC) department to strengthen its custom chip capabilities, a significant move that highlights the rapid development of the AI semiconductor market. The establishment of the ASIC department coincides with the booming growth of generative AI and Large Language Models (LLMs) worldwide. Data from Omdia indicates that the AI inference chip market is expected to grow from $6 billion in 2023 to $143 billion by 2030, and this exponential growth is prompting major technology companies to accelerate the development of custom ASIC chips.
Currently, Amazon, Microsoft, Google, Meta, and Apple are all actively pursuing AI semiconductor projects. The soaring stock prices of chip designers such as Broadcom and Marvell further underscore the competitive dynamics of the semiconductor industry. Broadcom has confirmed collaborations with three major cloud companies to develop custom AI chips, reflecting the growing demand for dedicated semiconductor solutions in data centers.
In addition to advances in custom chip design, significant changes are also under way in the memory market. SK Hynix will begin mass production of sixth-generation High Bandwidth Memory (HBM4) in the second half of this year. Samsung Electronics' HBM4 tape-out is believed to have taken place in the fourth quarter of 2024, with test products expected around the beginning of this year, as the company aims to be ready for mass production of 12-layer HBM4 before June.
HBM4 is expected to feature 2,048 input/output (I/O) channels, double the 1,024 of its predecessor. This wider interface is crucial for handling the large datasets required by AI applications. Furthermore, Samsung Electronics and SK Hynix will expand custom services starting with HBM4, meaning that detailed HBM designs will vary from customer to customer.