> While working on one DARPA-funded project, Solomon stumbled upon a page in a century-old optics textbook that caught his eye. It described a method developed by noted physicist Gabriel Lippmann for producing color photographs. Instead of using film or dyes, Lippmann created photos by using a glass plate coated with a specially formulated silver halide emulsion.
This method of color photography is absolutely fascinating and resulted in some of the best color photographs of the early 20th century.
The Library of Congress has a collection [1] of plates by Prokudin-Gorskii, who was hired by the Czar to ride around Russia on a train and photograph the country in the years before WWI and the Revolution. In the last couple of decades someone restored and digitally aligned each color plate, so now we have nearly 1,500 relatively high-resolution color photographs of imperial Russia. He took photos of everything from Emirs to peasant girls to Tolstoy, and all the architecture and scenery in between.
[1] https://www.loc.gov/collections/prokudin-gorskii/about-this-...
Prokudin-Gorskii's images are fascinating, but he didn't use Lippmann plates. Gorskii took three images using red, green and blue filters. That was also much more practical, because I don't think you can reproduce Lippmann plates, while you can print a positive RGB image with CMY(K) dyes. That's why they're CMY after all (cyan absorbs red, magenta absorbs green, yellow absorbs blue).
This is the first I'm hearing about this lack of reproducibility, and I can't make sense of it. You could always just take a picture of the resulting plate, no? Except color photos weren't a thing yet, so there just wasn't the technology at the time to make multiple copies?
Sounds plausible.
At the start of the photo era, the state of the art for illustrations was to have them drawn by an artist and then manually engraved on a wood block, which was then used as a printing plate. There was a period when no method was available to convert photos to printing plates, so from that period you find prints of photos where someone has manually copied them to a wood engraving for publication.
Thank you for the correction! I didn’t realize that there were multiple different plate emulsion methods.
Wow, reading the wiki article on Lippmann plates, it sounds almost like a hologram: baking a diffraction pattern into glass that is then 'replayed' by white light. It puzzled me that it lists, as one of the disadvantages, that the resulting plate could not be copied. Like the optical effect doesn't work on film? I don't understand. Another citation regards this as a feature, not a bug, pointing to its use in security documents: apparently used on UK passports (identical hologram on all passports) and individuated holograms on new German passports. "Lippmann OVD" = optically variable device.
https://holowiki.org/wiki/Lippmann_Security
Lippmann photographs are like holograms, specifically reflection holograms. The image is exposed using self-interference, which works in what we would typically consider incoherent light due to a very short distance between the light and its reflection.
Development of the plate produces a superposition of many different volume diffraction gratings mostly parallel to the surface of the plate. If the plate is bleached, these diffraction gratings become high efficiency phase gratings.
For playback/viewing, light from the illumination source is both diffracted and filtered in wavelength by the volume diffraction gratings. In a hologram, the diffraction gives the multiple perspectives that make the medium so cool. For Lippmann photographs, the camera has removed most of the perspective information and the dichroic or interference filtering of the gratings is the primary effect.
In either case, the final image is a layer-free, image-bearing volume in the emulsion.
That's why it can't be effectively copied using 2D techniques. Since the image is 2D (maybe slightly 3D? I'd have to think about that), it can be copied using a standard photographic technique. But the interference gratings in the emulsion have some angular dependence on the light source and viewer's angle that wouldn't be present in a 2D copy. In this way they also look a bit like a reflection hologram.
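The scale of those gratings follows from the standing wave: inside the emulsion, the fringes formed by the light interfering with its own reflection are spaced λ/(2n) apart. A quick back-of-the-envelope sketch, where the refractive index and emulsion thickness are assumed, ballpark values:

```python
# Fringe spacing of the standing wave recorded in a Lippmann emulsion.
# Assumed values: n ~ 1.5 for gelatin, ~3 um emulsion thickness.
lam = 550e-9          # green light, wavelength in vacuum (m)
n = 1.5               # refractive index of the emulsion (assumed)
spacing = lam / (2 * n)          # fringe spacing inside the medium
thickness = 3e-6                 # emulsion thickness (assumed)
num_fringes = thickness / spacing

print(f"fringe spacing: {spacing * 1e9:.0f} nm")   # ~183 nm
print(f"fringes in emulsion: {num_fringes:.0f}")   # ~16
```

Fringes a couple of hundred nanometers apart, stacked a dozen or so deep, is exactly the kind of structure an ordinary 2D photographic copy cannot reproduce.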
Thanks for the keywords, there's a lot I need to learn to grok holography
Regarding perspective through a lens, I'm imagining looking through my stereo microscope vs my SLR... Does the fact that a lens has a single focal point get in the way of keeping depth information? Or could I split the image landing on the mirror into two and have a 3D stereo viewfinder for my Nikon camera, such that the view is stereo and it's only the film that's throwing out what direction the light is coming from? I'm reminded of the Lytro Illum light-field cameras; they only leaned on "focus after the fact" gimmicks. Maybe if they'd tried it during a VR boom, to share "spatial photographs", they would have had access to a new market.
> Does the fact that a lens has a single focal point get in the way of keeping depth information ?
No. A single lens actually has a separate focal point for each wavelength, but the achromatic optical system in an SLR has at least two elements, so there is a range of wavelengths whose focal points sit very close to one designed point.

Mirror optical systems have no chromatic aberration, but in the SLR case lenses are simply cheaper to produce.
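As a rough illustration of that per-wavelength focal shift, here's a sketch using the thin-lens lensmaker's equation with a Cauchy dispersion model; the BK7-like Cauchy coefficients and the lens radii are assumed, illustrative values:

```python
# Chromatic focal shift of a single thin lens, via the lensmaker's equation
# 1/f = (n - 1) * (1/R1 - 1/R2), with n(lambda) from a Cauchy model.
# Assumed: BK7-like Cauchy coefficients, symmetric biconvex lens.
def n_cauchy(lam_um, A=1.5046, B=0.00420):
    return A + B / lam_um**2

def focal_length_mm(lam_um, R1=100.0, R2=-100.0):
    n = n_cauchy(lam_um)
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

f_blue = focal_length_mm(0.45)   # ~95.2 mm
f_red = focal_length_mm(0.65)    # ~97.2 mm
print(f"blue focus {f_blue:.1f} mm, red focus {f_red:.1f} mm")
```

A roughly 2 mm spread between blue and red focus; an achromatic doublet pairs this element with a weaker element of opposite power and different dispersion so the two foci coincide at the two design wavelengths.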
Lytro uses a different idea: the light field from a nearby source is curved rather than flat (the light field from distant stars is effectively flat because the distance, and therefore the radius of curvature, is too large for the curvature to be visible). In a Lytro, each point is seen by several pixels through a microlens (each angle on a different pixel), so the camera can estimate the distance to the light source and use that information to recover depth.

Unfortunately, the Lytro approach means that with a few-megapixel CCD you get only a few hundred kilopixels of reconstructed image, so in practice you need a hybrid approach: a classic high-resolution 2D sensor plus some sort of depth sensor, along with extra processing power to compute the reconstructed image. And indeed, the latest Lytro camera used a huge CCD and a very powerful processor, all too expensive for consumer products (but acceptable in some cinema niches).

Anybody could build a Lytro-like setup from an ordinary lens and CCD (and run the calculations on, say, a Raspberry Pi), but Lytro's patents prevent making money on it.
My read is that you can't make "prints" of it like you can with regular negatives. You probably could use a copy camera to make a copy of the plate but you're going to lose fidelity that way.
very cool thanks for the link - do we know of any other photographers with similar styles?
Adolf Miethe (one of Prokudin-Gorskii's mentors) made a lot of photographs but I think most were black and white. Hans Hildenbrand took color photos in the German trenches during WWI [1] and 1920/30s Germany [2], Poland [3], Hungary [4], and possibly some other countries. The Auguste and Louis Lumière autochrome collections are also worth mentioning.
Unfortunately none of them are as well restored and presented as the Library of Congress collection. A lot of their photos are in books like Endzeit Europa [5] and other commercial media instead of in the public domain.
[1] https://www.telegraph.co.uk/news/picturegalleries/worldnews/...
[2] https://www.nationalgeographic.com/history/article/autochrom...
[3] https://www.vintag.es/2013/03/color-photographs-of-life-in-p...
[4] https://www.vintag.es/2012/12/beautiful-color-photos-of-hung...
[5] https://www.amazon.de/Endzeit-Europa-kollektives-deutschspra...
It's difficult to understand the math on the storage density. Four colors out of a possible 32 colors is about 15 bits of information, not 40,000 bits of information.[1] If it's 15 bits per pixel and 115M pixels, then the capacity is 1.7Gb, not 4.6Tb.
Maybe I've misunderstood the coding. Corrections are welcome.
[1] Crude overestimate, assume 5 bits per color for 20 bits per pixel. More accurate is log2(32 choose 4), which you can type into Google to get 15 bits.
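The arithmetic can be checked directly. A quick sketch, using the 115M-pixel figure above:

```python
import math

# Bits conveyed by choosing 4 distinct wavelengths out of a 32-wavelength palette.
states = math.comb(32, 4)            # 35,960 distinct combinations
bits_per_pixel = math.log2(states)   # ~15.1 bits, not ~36 kilobits

pixels = 115e6                       # 115M locations (figure from the comment above)
capacity_gb = pixels * bits_per_pixel / 1e9
print(f"{states} states -> {bits_per_pixel:.1f} bits/pixel, ~{capacity_gb:.1f} Gb total")
```

Which lands on the ~1.7 Gb figure, three orders of magnitude below the claimed 4.6 Tb.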
Here's the math in the cited paper... I feel like they're treating 35,960 states as 36 kilobits instead of, yeah, 15 bits?
https://ieeexplore.ieee.org/document/9438269
> If, in our previous example, the 4 wavelengths were to be selected from a palette of 32 different wavelengths, a single worfel location could store ~36 kilobits of data. Thus, a 1 cm² media with 10 μ² data locations (8 μ² worfels with 2 μ spacing on all sides) = 1,000,000 worfels/cm². For example, (32!/((32-4)! · 4!)) = 35,960 distinct states. (An analogous use of formula (1) is drawing a hand of 5 playing cards from a 52-card deck yields 2,598,960 distinct hands.)

> Applying the 35,960-state permutation table for k=4 (i.e., superimposing 4 wavelengths per worfel), and drawing from a palette, N, of 32 different wavelengths, yields 35,960,000,000 bits (≈35.9 gigabits) per cm²; or 35.9 x (6.42 cm² per square inch) ≈ 230.4 gigabits/in². And so for an example of a 4″x5″ media (20 in²), 20 x 230.4 ≈ 4.6 terabits per 4x5 inch media.
Thanks. I believe choosing from 35,960 possible states only takes 15 bits, not 35,960 bits. But it's late on a Friday.
Yeah, there might be one more combinatorial explosion, so they can choose one of 16 combinations with each of those 35960 combinations... breaks one's brain.
I hope so. That would be cool.
Looks like a typo or a "noisy phone line".
Current fiber-optic solutions already use multiple wavelengths: at the fiber input, 8 or more laser wavelengths (strictly speaking, bands rather than single wavelengths) are mixed, and at the output the mix is separated with optical filters and each wavelength is processed independently.

But you don't have to use an exact wavelength here: you could use any wavelength within a band (yes, 4 colors in each of 40,000 bands looks possible), provided you have a light source with a tunable wavelength and a detector with enough precision, and the number of usable wavelengths could be much larger than in, say, an LCD or OLED.

Unfortunately, the authors of the article don't mention this important nuance for some reason, but it means that to actually build this technology you need precision light source(s) with adjustable wavelengths (classic RGB is a combination of just 3 wavelengths, and modern AMOLEDs usually use 4) and a precision compact spectrometer (the one I've seen myself gives 1024 lines for the visible spectrum, so just 10 bits). None of this is impossible, but it will not be easy to achieve.
Sorry, I made correction comment: https://news.ycombinator.com/item?id=41974605
This reminds me of the IBM 1360 photodigital storage system, designed for the CIA to store a terabit of data in the 1960s. https://en.wikipedia.org/wiki/IBM_1360
It was basically cathode ray tubes to expose photographic strips, an automatic chemical photo wet lab, robotic storage and retrieval of the developed film strips, and optical readout.
Absolutely bonkers.
Sorry, everyone, I was mistaken.
I have taken a more careful look: they write a huge number of interference patterns in each square, somewhere around 1M x 1M, and in the calculation they consider 1 billion points with 4 colors each, so even a plain boolean true/false per point gives close to 2^32 variants.

And there are 35,960 possible combinations if each point uses 4 light sources out of a possible 32. These are reasonable considerations about what current technology makes possible.

This is also not an easy path, and I would prefer to call it a micro-spectrogram.
> need precision light source(s) with adjustable wl's..
Or use a substitute technology: for example, calculate the diffraction fringes digitally and then write them with a modern femtosecond laser.

That could achieve a similar result, but (in the femtosecond-laser case) at the cost of much more time and much more energy to create the "forever" storage.
> While Rosenthal was visiting the International Space Station headquarters in Montgomery, Ala., in 2013, a top scientist said, “‘The data stored on the station gets erased every 24 hours by cosmic rays,’” Rosenthal recalls. “‘And we have to keep rewriting the data over and over and over again.’”
This doesn't seem right to me, considering the amount and age of COTS hardware with a variety of flash-storage in them (Thinkpads, Nikon DSLRs etc.)
Perhaps the more precise phrasing would be that the data is corrupted within a short enough period of time that they need to rewrite every 24 hours to ensure validity.
IIRC the shuttle’s magnetic-core memory was hardened explicitly to defend against this sort of corruption, with additional windings to maintain a stronger magnetization state than would be needed within the shield of the atmosphere.
It all depends on the technology used.

DRAM/SRAM really do have problems with cosmic rays, but in LEO it is enough to use ECC, since DRAM refreshes its contents every few milliseconds. On deep-space missions (Mars and beyond), even hardened electronics hangs roughly once a year, as I recall.
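As a concrete illustration of why ECC suffices against occasional single-bit upsets: a Hamming(7,4) code (a toy version of the SECDED codes used in real ECC DRAM) locates and corrects any single flipped bit. A minimal sketch:

```python
# Hamming(7,4): encodes 4 data bits into 7, correcting any single bit flip.
# Codeword layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4   # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

word = encode(1, 0, 1, 1)
word[4] ^= 1                          # simulate a cosmic-ray bit flip
print(decode(word))                   # -> [1, 0, 1, 1], data recovered
```

Real ECC DIMMs use a wider SECDED variant (correct one bit, detect two) over 64-bit words, plus periodic scrubbing, so isolated flips are repaired before a second one can accumulate in the same word.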
Magnetic-core memory is not affected by cosmic rays (only the support circuits are), but unfortunately it is not dense enough for current storage demands (a microelectronic magnetic-core technology exists, but even it cannot compete with CMOS).

Unfortunately, ECC notebook (mobile) platforms are no longer produced (yes, decades ago there were Sun notebooks built on a server platform, but those haven't been made in a long time), and the ISS uses off-the-shelf technology; that's why they have trouble with cosmic rays.

Because of this, the Shuttle's flight-control computers were specially radiation hardened, but still used DRAM.

Magnetic-core storage is simply not used in modern spacecraft; instead they use radiation-hardened CMOS, and for storage, magnetic bubble memory (cylindrical magnetic domains; you can buy it easily on the open market, but it is still orders of magnitude less dense than even single-layer flash).

PS: To be strict, the Shuttle flight computers were upgraded at least once, but they used hardened semiconductors from the beginning.
There still are mobile workstation laptops with ECC memory, e.g. the Dell Precision 5000/7000 series and some similar series at HP and Lenovo.
My own laptop is such a Dell Precision model.
EDIT: Looking now at the Dell site, I see that buying a laptop with ECC memory has become much more difficult than a few years ago. For many of the "mobile workstations" ECC memory is not offered at this time, while for those where you can customize the laptop and choose ECC, the price is absolutely outrageous, e.g. $850 for 64 GB of ECC memory.
Of course, anyone sensible would buy the "mobile workstation" with the smallest and cheapest memory option, then they would buy separately 64 GB of ECC SODIMM memory at a price 4 times lower than demanded by Dell.
Thank you for the info!

I mostly avoid reading about top ultrabooks, because that info is mostly useless to me, but this is the first time I've heard of a notebook with a server processor (Xeon 5xxx).
Maybe a bit gets flipped every 24 hours but yeah cosmic rays don't just erase a whole drive... bit of a case of telephone here tho, just relaying a moment of inspiration.
DRAM/SRAM/flash bits and CMOS flip-flops do get flipped (not all at once; at random). Magnetic media is much more resistant, but almost all HDDs use a conventional CPU (and DRAM cache), which are affected by cosmic rays, so the data can be corrupted on its way from the HDD surface to the computer's motherboard.
Free version of the research paper:
https://www.researchgate.net/publication/350499602_WORF_Writ...
Here is a much more technical paper analyzing Lippmann's photography:
https://www.pnas.org/doi/10.1073/pnas.2008819118
>19th-century photography technique employed in novel data storage method
TECHNIQUE and METHOD are synonymous terms (don't quibble). Does anybody else find it irksome to build a sentence this way?
Technique seems to me specific and technical; method is more general and abstract. Hence my brain's wiring and decoder seem OK with it. As the other post said, I wonder how you would have written it, even assuming these are the same word, since you can't repeat a word without sounding odd?
Depends on the context, I guess. In some, a method can involve multiple techniques; some of these techniques can be borrowed from other, unrelated, methods. (You could say that a photograph is a kind of data storage, but still.)
I find the sentence easy to parse. So I’m curious, how would you have phrased it?
Would order a ton of these immediately. I have a hoarding prob^H^H^H^H feature.
Bait And switch title....
We've put the subtitle up there now