Yeah, you’d really only say it on the theoretical side of things. I’ve definitely heard it in research and academia, but even then people usually point to the particulars of their work first.
It depends on the model, but I’ve seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that’s not nothing, especially given that most ML image generation workflows involve batch generation of 9 or 10 images. It’s several orders of magnitude less energy intensive than training and fine tuning, but it is not nothing by any means.
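To make the batch point concrete, here’s a quick back-of-envelope sketch. The per-image figures (8.6–100 Wh) are the ones quoted above, and the batch size of 10 is the typical workflow mentioned; none of this is a measured benchmark, just arithmetic:

```python
# Rough per-batch inference energy, using the per-image figures
# quoted above (8.6-100 Wh) as assumed inputs, not measurements.
per_image_wh = {"low": 8.6, "mid": 10.0, "high": 100.0}
batch_size = 10  # typical workflows generate ~9-10 candidates per prompt

for label, wh in per_image_wh.items():
    batch_wh = wh * batch_size
    # Convert to kWh to compare against familiar household numbers.
    print(f"{label}: {wh} Wh/image -> {batch_wh:.0f} Wh per batch "
          f"({batch_wh / 1000:.2f} kWh)")
```

Even the low end works out to roughly 0.09 kWh per batch of prompts, which is why “it’s just inference” doesn’t make the cost disappear.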
The training is a huge power sink, but so is inference (i.e. generating the images). You are absolutely spinning up a bunch of silicon that’s drawing hundreds of watts for each image that’s output, on top of the impacts of training the model.
NewOldGuard@lemmy.ml to Privacy@lemmy.ml • Advice For The Unfortunate Windows User? (English, 2 months ago)
When I’m forced to use Windows, it’s the LTSC IoT version with telemetry disabled via group policy and a local account. I run O&O ShutUp10 after that, then install Portmaster. I don’t run it as a daily OS, but I think that’s private enough for my limited use case. My only other random recommendations are using either scoop or winget for package management, and komorebi with whkd for tiling window management.
Haskell mentioned λ 💪 λ 💪 λ 💪 λ 💪 λ
NewOldGuard@lemmy.ml to Privacy@lemmy.ml • What are you going to do when the internet starts asking for ID for everything? (English, 2 months ago)
Gopher and I2P as well
They are adding “AI” features in a collaboration with Intel, but luckily they’re minor additions like ML-based noise reduction.