DeepSeek and Peking University have just found a clever way to make artificial intelligence more powerful without increasing its cost. They’ve introduced a new technique called “Engram” that fundamentally changes how AI models store and retrieve information.
Right now, building large AI models creates a massive headache for the tech industry. These models require large amounts of specialized, high-speed memory to operate. This intense demand recently caused certain memory chip prices to jump fivefold in just 10 weeks.
DeepSeek’s researchers realized that current models waste a lot of their “brainpower” on simple data retrieval tasks—work that often crowds out the space needed for actual thinking.
Engram solves this by separating memory from computation. Think of it like giving the AI a reference library. Instead of the model recalculating every piece of basic information using its main processor, it can quickly “look up” facts from a separate, static memory module. This frees up the AI’s core to focus on much harder tasks, like complex reasoning and logic.
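The lookup-versus-compute split described above can be illustrated with a toy sketch. This is a hypothetical illustration, not DeepSeek’s actual implementation: all function names here are invented, and the “memory module” is just a precomputed dictionary standing in for a static embedding table.

```python
EMBED_DIM = 4

def expensive_compute(phrase: str) -> list[float]:
    # Stand-in for the model's main compute path (e.g., transformer layers).
    return [len(phrase) * 0.1] * EMBED_DIM

def build_memory(phrases: list[str]) -> dict[str, list[float]]:
    # Precompute representations once and park them in cheap, static memory.
    return {p: expensive_compute(p) for p in phrases}

def lookup_or_compute(phrase: str, memory: dict, stats: dict) -> list[float]:
    if phrase in memory:
        # Fast path: fetch the stored result instead of recomputing it.
        stats["hits"] += 1
        return memory[phrase]
    # Slow path: only novel inputs consume the main processor's effort.
    stats["misses"] += 1
    return expensive_compute(phrase)

memory = build_memory(["capital of France", "speed of light"])
stats = {"hits": 0, "misses": 0}
for query in ["capital of France", "novel reasoning task", "speed of light"]:
    lookup_or_compute(query, memory, stats)
print(stats)  # → {'hits': 2, 'misses': 1}
```

The point of the sketch is the asymmetry: known patterns are served from static storage at near-zero compute cost, so the expensive path runs only for the inputs that genuinely need reasoning.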
In tests on a 27-billion-parameter model, the team saw measurable improvements. They discovered that by shifting about 20% to 25% of the AI’s resources to this new memory system, the model performed significantly better than traditional designs. It also handled long, complicated prompts much more efficiently.
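As a back-of-the-envelope check, the reported split implies a sizable memory module. The calculation below assumes the 20–25% figure refers to the parameter budget (the article does not specify the exact unit of “resources”):

```python
# Hypothetical budget split for a 27-billion-parameter model,
# assuming 20-25% of parameters move into the static memory module.
total_params = 27e9

for frac in (0.20, 0.25):
    memory = total_params * frac          # parameters in the memory module
    compute = total_params - memory       # parameters left for the core model
    print(f"{frac:.0%}: memory {memory / 1e9:.2f}B, compute {compute / 1e9:.2f}B")
```

Under that assumption, roughly 5.4–6.75 billion parameters would live in the cheap memory tier, leaving about 20–21 billion for the compute-heavy core.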
This discovery is a big deal for the hardware world. Because Engram doesn’t rely as heavily on the most expensive, hard-to-find chips, companies can build better AI using more affordable, standard parts. This could be a huge advantage in regions like China, where trade restrictions limit access to high-end chips.
Engram allows AI to grow its knowledge and reasoning skills without requiring a massive, budget-breaking hardware upgrade. If this method catches on, it could help end the wild price hikes in the chip market and make advanced AI more accessible for everyone.