This post greatly oversimplifies how many issues come up when optimizing images. Also, the GitHub workflow doesn't commit the optimized images back to Git, so it would have to run before packaging and would have to run on every image, not just newly added ones.
Ideally, images are compressed _before_ getting committed to Git. The other issue is that compression can leave images looking broken, so any compressed image should be verified before deploying. Using lossless encoders is safer. However, even then, many optimizers will strip ICC profile data, which will make colors look off or washed out (especially if the source is HDR).
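For the verification step, here's a minimal sketch of what I mean, assuming Node with the sharp library (the file names are placeholders): check that the ICC profile survived, and that a lossless re-encode really decodes to identical pixels.

```ts
// verify.ts - sanity-check a compressed image against its original (sketch).
import sharp from "sharp";

async function verify(originalPath: string, compressedPath: string) {
  const [origMeta, compMeta] = await Promise.all([
    sharp(originalPath).metadata(),
    sharp(compressedPath).metadata(),
  ]);

  // Did the optimizer strip the ICC profile?
  if (origMeta.icc && !compMeta.icc) {
    throw new Error("ICC profile was stripped during compression");
  }

  // For a lossless re-encode, the decoded pixels should be byte-identical.
  const [origPixels, compPixels] = await Promise.all([
    sharp(originalPath).ensureAlpha().raw().toBuffer(),
    sharp(compressedPath).ensureAlpha().raw().toBuffer(),
  ]);
  if (!origPixels.equals(compPixels)) {
    throw new Error("Decoded pixels differ - the re-encode was not lossless");
  }
}

verify("photo-original.png", "photo-optimized.webp").then(
  () => console.log("ok"),
  (err) => { console.error(err); process.exit(1); }
);
```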
Finally, use WebP. It's supported everywhere and doesn't have all the downsides of PNG and JPEG. It's not worth deploying those older formats anymore. JPEG XL is even better, but support will take a while.
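As a rough sketch of that conversion (again assuming sharp; the input/output names are made up), a lossless WebP re-encode that carries the input's metadata over:

```ts
// to-webp.ts - lossless WebP re-encode (sketch).
import sharp from "sharp";

sharp("input.png")
  .withMetadata()            // keep metadata incl. ICC profile; exact behaviour varies by sharp version
  .webp({ lossless: true })  // no pixel loss, so there's nothing visual to re-verify
  .toFile("input.webp")
  .then((info) => console.log(`wrote ${info.size} bytes`));
```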
Anyway, I made an ImageOptim clone that supports WebP encoding a while ago[0]. I usually just chuck any images in there first, then commit them.
[0]: https://github.com/blopker/alic
Where do you store high quality original images in case future edits or recompression with better codecs are needed? Generation loss is a thing.
I view the high-quality originals as “source” and resized+optimized images as “compiled” binaries. You generally want source in Git, not your compiled binaries.
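For concreteness, a minimal sketch of that "compile" step, assuming a layout with originals committed under images/src/ and generated files under images/dist/ (which would be gitignored); the directory names, sizes, and quality setting are arbitrary:

```ts
// build-images.ts - originals are source; optimized copies are build output (sketch).
import { mkdir, readdir } from "node:fs/promises";
import path from "node:path";
import sharp from "sharp";

const SRC = "images/src";   // high-quality originals, committed to Git
const DIST = "images/dist"; // generated output, gitignored

await mkdir(DIST, { recursive: true });

for (const file of await readdir(SRC)) {
  if (!/\.(png|jpe?g|tiff?)$/i.test(file)) continue;
  const out = path.join(DIST, file.replace(/\.[^.]+$/, ".webp"));
  await sharp(path.join(SRC, file))
    .resize({ width: 1600, withoutEnlargement: true }) // cap the delivery size
    .withMetadata()
    .webp({ quality: 82 }) // arbitrary quality; the untouched original stays in SRC
    .toFile(out);
  console.log(`${file} -> ${out}`);
}
```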
I like this take. I tend to agree.
There's one thing JPEG has the edge on: true progressive loading.
If you're clever, you can use fetch requests to render a thumbnail based off the actual image by manually parsing the JPEG and stopping after some amount of detail. I'm more than a little surprised that no self-hosted photo solution uses this in any capacity (at least when I last checked).
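A rough browser-side sketch of the idea. It cheats compared to what's described above: instead of parsing the JPEG's scan markers, it just cuts off at a fixed byte count. It assumes the server honors HTTP Range requests, the JPEG is progressively encoded, and that the browser will render whatever scans made it into the truncated blob (browsers are generally tolerant of truncated JPEGs); the URL and the 20 KB cutoff are placeholders.

```ts
// partial-preview.ts - coarse preview from only the first bytes of a progressive JPEG (sketch).
async function showPartialJpeg(url: string, imgEl: HTMLImageElement, byteLimit = 20_000) {
  // The early scans of a progressive JPEG already describe the whole picture
  // at low frequency, so the first ~20 KB is enough for a blurry preview.
  const res = await fetch(url, { headers: { Range: `bytes=0-${byteLimit - 1}` } });
  if (!res.ok) throw new Error(`fetch failed: ${res.status}`);

  const partial = await res.blob();
  imgEl.src = URL.createObjectURL(partial); // the decoder shows the scans it has
  // A real implementation would later swap in the full-resolution URL and
  // revoke the object URL once it's no longer needed.
}

showPartialJpeg("/photos/IMG_1234.jpg", document.querySelector("img")!);
```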
Not trying to diss, but Figma literally has a dropdown to change the image quality, and thus the file size, for PNG.
https://help.figma.com/hc/en-us/articles/13402894554519-Expo...
You're looking at JPGs. PNGs do not have that dropdown.
TFA, for me, gives the same link for the "Compressed using pngquant - 59,449 bytes" image as for the first one ("test1.png"), and it's about 191 KiB.
I think it's a copy/paste error: replacing test1.png in the link with test3.png gives the correct file.
For the curious, here are the results of compressing both losslessly with WebP:
P.S.: that picture is called a "test card": https://en.wikipedia.org/wiki/Test_card