Samsung Electronics Co. has officially launched a team dedicated to developing advanced high-bandwidth memory (HBM), a core chip that powers artificial intelligence (AI) devices.

The Suwon, South Korea-based tech giant also launched a separate team handling advanced chip packaging, which combines HBM chips with graphics processing units (GPUs) to produce AI accelerators.

The launch of the two dedicated teams is part of the organizational revamp Samsung announced on Thursday, its first reshuffle since Vice Chairman Jun Young-hyun took the helm of the Device Solutions (DS) division, which oversees the company’s chip business, in May.

Samsung had previously set up two HBM task force teams; with this reshuffle, it merged them and placed the combined team under the DRAM development division of its memory business.

The new HBM development team will be led by DRAM Vice President Sohn Young-soo.

HBM has become an essential part of the AI boom because it offers far higher data transfer speeds than traditional memory chips.

The new HBM team will concentrate its resources on developing the fifth-generation HBM3E and sixth-generation HBM4 chips, sources said.

Samsung, which has vowed to triple its HBM output this year, is eager to pass Nvidia’s ongoing quality tests.

In April, the company began mass production of its 8-layer HBM3E chips for generative AI chipsets.

In February, Samsung said it had developed HBM3E 12H, the industry's first 12-stack HBM3E DRAM and the highest-capacity HBM product to date.

HBM3E is expected to power Nvidia’s upcoming AI chips such as the B100 and GB200, as well as AMD’s MI350 and MI375, which are set for launch later this year.

The separate chip packaging team, meanwhile, was placed under chip chief Jun’s direct control.

Advanced chip packaging is an integral part of Samsung’s ambitious turnkey chip manufacturing services.