SK Hynix announced on Monday that it has started mass production of its 192GB small outline compression attached memory module 2 (SOCAMM2). This next-generation chip, designed specifically for artificial intelligence (AI) servers, is produced using the company’s advanced sixth-generation 10-nanometer-class (1c) process.
SOCAMM2 is a server memory module that uses low-power double data rate (LPDDR) memory chips, commonly found in smartphones. Its main goal is to significantly cut power consumption, reducing it to roughly one-third of what conventional server memory modules use.
This new memory solution features a thin, high-density design, making it ideal for AI servers. This form factor also improves signal integrity and simplifies future upgrades or replacements. With power efficiency becoming increasingly critical for managing the total cost of ownership in AI data centers, SOCAMM2 is quickly drawing attention from data center operators.
Unlike high-bandwidth memory (HBM), which is typically integrated directly within the package of logic chips like GPUs or CPUs, SOCAMM2 sits next to these logic chips on the system board. In this setup, HBM handles computing acceleration, while SOCAMM2 enhances overall system-level power efficiency by working alongside traditional DDR-based memory modules.
Notably, SK Hynix uses its advanced 1c process for manufacturing the LPDDR5X memory within SOCAMM2. The 1c process is one of the most advanced nodes available today, offering both performance gains and improved power efficiency. Industry experts say DDR5 built on the 1c process delivers about 11% faster speeds and over 9% better power efficiency compared to DDR5 based on the 1b process.
SK Hynix stated that SOCAMM2 provides more than twice the bandwidth of conventional RDIMMs (registered dual in-line memory modules) while improving energy efficiency by over 75%. This makes it a highly effective solution for demanding, high-performance AI workloads. The company also confirmed that the product has been optimized for Nvidia’s Vera Rubin, a next-generation AI computing platform.
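To see how the two headline figures combine, the short sketch below treats "energy efficiency" as bandwidth per watt and works out the implied module power. The RDIMM baseline numbers are hypothetical placeholders chosen for illustration, not SK Hynix or JEDEC specifications.

```python
# Illustrative sketch: combining "more than twice the bandwidth" with
# "over 75% better energy efficiency" (read here as bandwidth per watt).
# Baseline figures are hypothetical, not vendor specifications.

RDIMM_BW_GBPS = 64.0   # hypothetical RDIMM module bandwidth, GB/s
RDIMM_POWER_W = 10.0   # hypothetical RDIMM module power, W

rdimm_eff = RDIMM_BW_GBPS / RDIMM_POWER_W   # GB/s per watt

socamm2_bw = 2.0 * RDIMM_BW_GBPS            # "more than twice the bandwidth"
socamm2_eff = 1.75 * rdimm_eff              # "over 75%" better efficiency
socamm2_power = socamm2_bw / socamm2_eff    # implied module power

print(f"RDIMM:   {RDIMM_BW_GBPS:.0f} GB/s at {RDIMM_POWER_W:.1f} W "
      f"({rdimm_eff:.2f} GB/s per W)")
print(f"SOCAMM2: {socamm2_bw:.0f} GB/s at {socamm2_power:.1f} W "
      f"({socamm2_eff:.2f} GB/s per W)")
```

Under these placeholder numbers, doubling bandwidth at 1.75x the efficiency still lands near the baseline power budget, which is the trade-off the quoted figures describe at the module level.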
The company expects this new module to significantly reduce memory bottlenecks, where data delivery struggles to keep up with GPU processing speeds during the training and inference of massive AI models. These models often have hundreds of billions of parameters, the learned values whose count indicates a model's scale.
With the introduction of SOCAMM2, the memory structure in AI servers will evolve into a multi-tier system. This hierarchy will include HBM, SOCAMM, DDR5 memory modules, and Compute Express Link (CXL) memory for expanded capacity.
SK Hynix predicts that as the AI market shifts from pure training toward inference, SOCAMM2 will become a key next-generation memory solution for running large language models with high energy efficiency. The company has rapidly scaled up mass production to meet demand from global cloud service providers.
Kim Ju-seon, chief marketing officer of SK Hynix, stated, “With the launch of the 192GB SOCAMM2, the company has set a new standard for AI memory performance.” He added, “Through close collaboration with global AI customers, we will strengthen our position as a trusted AI memory solutions provider.”