Qualcomm And AMD Look To SOCAMM2 Memory To Supercharge AI Hardware
The AI hardware race keeps accelerating, and memory design now sits at the center of innovation. After Microsoft entered the field with its Maia 200 chip and secured supply through a deal with SK Hynix, other chipmakers have started to explore alternative ways to scale AI performance.
SOCAMM2 gains attention as AI memory demands grow
According to TechPowerUp, Qualcomm and AMD are reportedly evaluating the use of SOCAMM2 memory in future AI-focused products. The approach targets higher memory capacity and bandwidth, two critical factors for modern AI workloads.
The renewed interest follows NVIDIA’s “Vera” CPU, which uses LPDDR5X memory on the SOCAMM form factor. This design delivers up to 1.2 TB/s of memory bandwidth and supports as much as 1.5 TB of LPDDR5X memory, allowing large AI models to remain fully resident in fast system memory.
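A quick back-of-envelope calculation, using only the figures quoted above, shows why those two numbers matter together: at 1.2 TB/s, a full sequential sweep over the entire 1.5 TB pool takes just over a second, which is what keeps large resident models practical to serve from system memory. This is a rough sketch, not a measured benchmark:

```python
# Back-of-envelope estimate using the article's quoted figures
# (peak bandwidth; real sustained throughput will be lower).
capacity_tb = 1.5      # total LPDDR5X capacity on the SOCAMM form factor
bandwidth_tbps = 1.2   # quoted peak memory bandwidth

full_pass_seconds = capacity_tb / bandwidth_tbps
print(f"Full sweep of resident memory: {full_pass_seconds:.2f} s")  # 1.25 s
```

By contrast, pulling the same 1.5 TB from flash at typical SSD speeds would take minutes rather than seconds, which is the gap the large in-memory pool is meant to close.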
How Qualcomm and AMD could adopt SOCAMM
Qualcomm and AMD aim to use SOCAMM as a complement to HBM memory already deployed on AI accelerators. SOCAMM would act as a large, high-speed memory pool, reducing reliance on SSD or flash storage transfers and improving overall system efficiency.
For AMD, potential integration paths include pairing Instinct MI accelerators with EPYC CPUs using SOCAMM or developing a new AI system architecture built around the memory format. Qualcomm could expand its existing AI200 and AI250 accelerators, which already support up to 768 GB of LPDDR5 per card, by adding SOCAMM-based memory expansion instead of soldered designs.
Beyond performance gains, SOCAMM enables modular system configurations. Manufacturers can scale memory capacity by adding or removing modules without major PCB redesigns or complex soldering.
This flexibility becomes increasingly valuable as memory prices continue to rise, with even DDR4 experiencing notable price spikes.