Method and Design to Reduce the Current Swings on a DIMM When the DIMM Traffic Moves From an Idle State to a Maximum Utilization State
Publication Date: 2010-Nov-23
The IP.com Prior Art Database
Described is a method and design to reduce the current swings on a DIMM when the DIMM traffic moves from an idle state to a maximum utilization state.
Memory subsystems have moved to high-speed, narrow interface channels that require periodic recalibration to synchronize data to clock edges. These recalibrations re-adjust timings to compensate for temperature and voltage drift. On a memory channel, all memory traffic must be halted while the channel is calibrated; traffic can then resume. The problem is that, even though memory traffic has halted, the processor requests do not stop, so by the time the channel is released, the memory queues are full of pending commands to memory. The result is a transition from a DIMM at idle power to one at maximum power in a very short time frame (possibly less than 40 ns). On large DIMMs this power delta can exceed 100% and will cause dips on the power rails, because the power subsystem cannot respond that quickly. One way to fix this is to add a large amount of high-frequency capacitance to the DIMM to supply this short-term burst of current, but that adds cost to the DIMM, lowers reliability, and creates issues with physical real estate.
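To give a feel for why the capacitive fix is costly, the magnitudes involved can be sketched with the standard capacitor relation C = I·Δt/ΔV. The current step, rail voltage, and allowed ripple below are purely hypothetical illustrative numbers (only the ~40 ns transition time comes from the text above):

```python
# Hypothetical numbers, purely illustrative (not from the disclosure):
delta_i = 5.0          # A, assumed idle-to-max current step on the DIMM rail
delta_t = 40e-9        # s, transition time cited in the text
allowed_ripple = 0.05  # V, assumed tolerable dip on the supply rail

# Current slew rate the power subsystem would have to follow:
slew = delta_i / delta_t  # A/s

# Local capacitance needed to source the step while holding the dip,
# from C = I * dt / dV:
c_needed = delta_i * delta_t / allowed_ripple  # farads

print(f"slew rate: {slew:.3g} A/s")       # 1.25e+08 A/s
print(f"capacitance: {c_needed:.3g} F")   # 4e-06 F, i.e. ~4 uF of low-ESL
                                          # capacitance near the DIMM
```

A slew on the order of 10^8 A/s is far faster than a voltage regulator loop can track, which is why the burst must be covered by local decoupling capacitance, with the cost, reliability, and board-space penalties noted above.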
Current memory controller designs include a memory power management feature that restricts total bandwidth to a programmable number of operations within a given time window. Prior to this invention, this logic limited only the maximum sustained bandwidth; it did not restrict...