The article doesn't properly explain how DRAM is different from SRAM. DRAM has to constantly refresh itself in order not to 'forget' its contents.
Indeed - the 'dynamic' comes from 'dynamic logic'. Wikipedia: "It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances." What Dennard realised was that you don't actually need a separate capacitor to hold the bit value - the bit is just held on the stray and gate capacitance of the transistor that switches on when that bit's row and column are selected, letting that stray capacitance discharge through the output line.
Because of that, the act of reading a bit destroys its value. So one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external signal level - is to recharge the bit after reading it.
But that stray capacitance is so small that it also discharges on its own, through the high but not infinite resistance of the transistor when it's 'off'. Hence you have to refresh DRAM, by regularly reading every bit frequently enough that it hasn't discharged before you get to it. In practice you only need to read every row frequently enough, because there's actually a sense amplifier for each column, reading all the bit values in that row at once, with the column address strobe just selecting which column's bit gets output.
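To make that row/column split concrete, here's a tiny sketch in C. The geometry (8192 rows x 1024 columns) and the address mapping are made up for illustration; real parts differ, but the idea is the same: the row address strobes a whole row onto the sense amplifiers, and the column address just picks which sensed bit goes out.

    /* Illustrative only: split a linear bit address into a row (RAS)
       and a column (CAS). Geometry is invented for the example. */
    #include <stdio.h>
    #include <stdint.h>

    #define COLS 1024u   /* one sense amplifier per column           */
    #define ROWS 8192u   /* every row must be refreshed periodically */

    int main(void) {
        uint32_t addr = 5000000u;             /* some bit address         */
        uint32_t row  = (addr / COLS) % ROWS; /* RAS opens this whole row */
        uint32_t col  = addr % COLS;          /* CAS picks one column     */
        printf("bit %u -> row %u, column %u\n", addr, row, col);
        /* Opening the row reads (and destroys) all 1024 bits in it; the
           sense amps write them back, so a read doubles as a refresh of
           that entire row. */
        return 0;
    }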
Yes, it totally misses the crucial and non-obvious trade-off which unlocked the benefits: the rest of the system has to take care of periodically rewriting every memory cell so that the charge doesn't dissipate.
In fact it took a while for CPUs or memory controllers to do it automatically, i.e. without the programmer having to explicitly code the refresh.
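For a rough feel of that burden, here's a sketch in C; the 8192 rows and 64 ms retention window are illustrative assumptions rather than figures from any particular part, and dram_read_row() is just a stand-in for whatever access actually opens a row.

    #include <stdio.h>

    #define ROWS         8192
    #define RETENTION_MS 64.0

    /* Stand-in for whatever access actually opens (and so rewrites) a row. */
    static void dram_read_row(int row) { (void)row; }

    /* Round-robin: each call touches the next row, so after ROWS calls
       every row has been read, and therefore rewritten, once. */
    static void refresh_tick(void) {
        static int next_row = 0;
        dram_read_row(next_row);
        next_row = (next_row + 1) % ROWS;
    }

    int main(void) {
        printf("one row every %.1f us to cover %d rows in %.0f ms\n",
               RETENTION_MS * 1000.0 / ROWS, ROWS, RETENTION_MS);
        refresh_tick();  /* in a real system this runs on a timer */
        return 0;
    }

Whether that tick comes from a dedicated refresh counter, the memory controller, or interleaved program accesses is exactly the difference being pointed at here.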
It isn't the point of the article, but this is true of every storage medium. It's just a question of milliseconds or years.
Why would we use DRAM, then? It seems better not to have to refresh it all the time.
(I think I more or less know, but I’d rather talk about it than look it up this morning.)
Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5 nm.
What’s the likely ETA for DRAM?
Years ago.
DRAM uses a capacitor. With our traditional materials, those capacitors essentially hit a hard limit at around 400MHz a very long time ago. This means that if you need to read a chain of random locations from RAM one after another, you can't do it faster than roughly 400MHz. Our only answer here is better AI prefetchers and less-random memory patterns in our software (the penalty for a missed prefetch is so great that theoretically less efficient algorithms can suddenly become more efficient in practice simply by being more predictable).
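As a toy illustration of that predictability point (not a rigorous benchmark, and the timings will vary wildly by machine): summing the same array with a linear walk versus chasing a random permutation through it. The linear walk is trivially prefetchable; the chase makes every load wait on the previous one.

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <time.h>

    #define N (1u << 24)   /* 16M entries, well past typical cache sizes */

    static uint64_t rng = 0x9E3779B97F4A7C15ull;
    static uint64_t xs(void) {                 /* tiny xorshift PRNG */
        rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
        return rng;
    }

    int main(void) {
        size_t *next = malloc(N * sizeof *next);
        if (!next) return 1;

        /* Sattolo's shuffle builds one big cycle, so the pointer chase
           below really does visit all N entries in random order. */
        for (size_t i = 0; i < N; i++) next[i] = i;
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)(xs() % i);
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        clock_t t0 = clock();
        volatile size_t sum = 0;
        for (size_t i = 0; i < N; i++) sum += next[i];  /* streaming, prefetchable */
        clock_t t1 = clock();

        size_t p = 0;
        for (size_t i = 0; i < N; i++) p = next[p];     /* each load waits on the last */
        clock_t t2 = clock();

        printf("sequential %.2fs, random chase %.2fs (p=%zu, sum=%zu)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, p, (size_t)sum);
        free(next);
        return 0;
    }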
As to capacitor sizes, we've been at the volume limit for quite a while. When a cell is read, its charge has to be sensed and amplified, and that gets harder as the charge gets smaller; there's a fundamental limit to how small you can go. Right now, each capacitor holds its charge with somewhere in the range of a mere 40,000 electrons. Going lower dramatically increases the difficulty of telling the signal from the noise and of dealing with ever-increasing quantum effects.
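For a sense of where a number like 40,000 electrons comes from, the stored charge is just Q = C x V; the capacitance and voltage below are illustrative guesses, not datasheet values.

    /* Back-of-envelope: electrons stored on one DRAM cell capacitor. */
    #include <stdio.h>

    int main(void) {
        double C = 10e-15;     /* ~10 fF cell capacitance (assumed)  */
        double V = 0.65;       /* ~0.65 V stored level (assumed)     */
        double e = 1.602e-19;  /* elementary charge in coulombs      */
        printf("~%.0f electrons\n", C * V / e);   /* roughly 40,000  */
        return 0;
    }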
Packing more capacitors closer together means a smaller diameter, but keeping the same volume then means making the cylinder taller. You quickly reach a point where even dramatic increases in height (something very complicated to do in silicon) buy only minuscule decreases in diameter.
5nm can hold roughly a gigabyte of SRAM on a CPU-sized die; that's around $130/GB, I believe. At some point 5nm will be cheap enough that we can start considering replacing DRAM with SRAM directly on the chip (i.e. an L4 cache). I wonder how big of a latency and bandwidth bonus that would be. You could even go for a larger node without losing much capacity, for half the price.
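A quick sanity check of that claim, using the commonly cited ~0.021 um^2 high-density SRAM bitcell for TSMC N5 and a guessed overhead factor for sense amps, decoders, and routing (both numbers are assumptions):

    #include <stdio.h>

    int main(void) {
        double bits     = 8.0 * 1024 * 1024 * 1024;  /* 1 GiB                  */
        double cell_um2 = 0.021;                     /* N5 HD bitcell (cited)  */
        double overhead = 1.4;                       /* peripherals (guessed)  */
        double mm2 = bits * cell_um2 * overhead / 1e6;
        printf("~%.0f mm^2\n", mm2);                 /* ~250 mm^2              */
        return 0;
    }

That lands in large-CPU-die territory before spending any area on actual cores, consistent with the "CPU-sized die" framing.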
Now? Prices have been flat for 15 years and DRAM has been stuck on 10 nm for a while.
> Dennard scaling for SRAM has certainly halted, as demonstrated by TSMC’s 3nm process vs 5 nm.
I don't think the latter (SRAM cell area no longer shrinking from node to node?) has anything to do with Dennard scaling, which was about voltage and power density scaling along with feature size.
Not soon, as DRAM is mostly on older nodes. But the overall cost reduction of DRAM is moving very, very slowly.
I have a recollection of a design where microprocessor reads were used to refresh DRAM contents. Late 1970s. I thought it was in an early Motorola 6800 book. I can't find it now - or at least no mention of the technique in it now. It would slow down program operation for sure. Maybe my recollection is wrong, not sure.
updated June 2024
> Update: Today, marking the 56th anniversary...1966
Please forgive my pedantry but 58th. It was a busy year.
I miss RAM. I feel like if you lived through that 90s RAM frenzy, you probably miss RAM too. It was crazy how quickly we moved through SDRAM/DDR; prices dropped, and you could get real increases in performance year over year for not much money. I'm sure some of it was the software being able to capture the hardware improvements, but that was certainly my favourite period in tech so far.
I am confused by this comment. You said "RAM" (in contrast to "DRAM" in the article title), but I think you are talking about DRAM sticks? Those haven't gone away (other than in some laptops where the RAM is soldered on and not upgradable).
Going from 8MB to 32MB in the 90s is still comparable to going from 8GB to 32GB today.
One difference is just that the price isn't dropping at the same rate anymore [1], so it doesn't make as much sense to buy small and re-buy next year when bigger chips are cheaper (they won't be much cheaper).
Another is that DRAM speed is at the top of an S-curve [2], so there's not that same increase in speed year-over-year, though arguably the early 2000's were when speeds most dramatically increased.
[1] https://aiimpacts.org/trends-in-dram-price-per-gigabyte/
[2] http://blog.logicalincrements.com/2016/03/ultimate-guide-com...
Getting a new stick of RAM was so damn exciting in the 90s.
Sad indeed. All that was taken away once it became possible to download more ram[0].
0. https://downloadmoreram.com/
I started late, but I remember when I upgraded my system with an additional 64MB stick: I was able to reduce the GTA 3 load time between one island and another from 20 seconds to 1.
And at that time I also learned how critical it was to check your RAM for errors. I reinstalled Win98 and Windows 2000 so many times before I figured this out.
Nah, the biggest jump in performance by far was SSDs. It was such a huge step that software had no chance to "catch up" initially.
RAM speeds are still improving pretty fast. I'm running DDR5-6000, and DDR5-8300 is available. GDDR7 uses PAM3 to get to 40Gbps.
can relate
Though I guess the 90s were _the_ best tech era by far, and will be for some time to come, because that's when capable and modular computing machines became a real commodity.
"8K video recording" - does anyone really need this? Seems like for negligible gain in quality people are pushed to sacrifice their storage & battery, and so upgrade their hardware sooner...
Yes, they record at higher resolutions so that the director and the camera operator have greater flexibility later when they realize they need a different framing - or just to fix the camera operator's mistakes by cropping parts of the picture out. They need the extra pixels/captured area to be able to do this.
I think the studios and anyone doing video production would probably use an 8K toolchain if possible. As others have pointed out, this lets you crop and modify video while still being able to output 4K without having to upscale.
Well, for starters, 8K video lets you zoom in and crop and still get 4K in the end.
You are thinking from a consumer point of view - consumer as in Jane taking videos of her cats, for which 8K, or even 4K, would be overkill. You can set your recording device to record in 720p or 1080p and so on to suit the purpose.
For commercial purposes it's another story, and it makes sense to consider shooting in 8K if possible, so the option should exist.
Yes why not?
Different use cases exist:
Record 8K text and you can zoom in and read things. Record in 8K and crop without quality loss, or 'zoom' in.
Does everyone need this? Probably not, but we are on HN, not at a coffee party.
I need more than 8K. I'm working at microscopic levels when I study minerals, and I need as much resolution as I can possibly get, up to the limit of optical diffraction.
8K is important for VR video; otherwise, not so much. There's a really noticeable step up from 4K in that area.
On a large TV though, it's probably an improvement over 4K for sports, where you need to track a small item moving fast.
Yes, it makes post-production SO MUCH EASIER