So why, then, are the power inputs written in the minimum-medium-maximum way? That's a bit confusing; I thought the medium value marks the point at which overclocking begins.
If I've got it right, overclocking is when a recipe that takes, say, 16 GU/t is processed in a machine that is supplied with more than that? And if the supply goes far beyond 16 GU/t (how far, exactly?), the excess energy gets wasted? Except in lossless-overclocking machines. Is that it?
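To make sure I'm asking the right question, here's a sketch of my current mental model in Java. None of this is actual mod code: the method names, the waste rule, and especially `SOME_OVERCLOCK_LIMIT` are placeholders for the part I don't understand.

```java
public class OverclockGuess {
    // Placeholder: the multiple of recipe power beyond which energy is wasted.
    // This is exactly the "how far?" I'm asking about, not a real value.
    static final double SOME_OVERCLOCK_LIMIT = 4.0;

    /**
     * My guess at how much supplied power a recipe can actually use per tick.
     *
     * @param recipePowerGUt        the recipe's listed draw, e.g. 16 GU/t
     * @param suppliedGUt           what the machine is actually fed
     * @param losslessOverclocking  whether the machine overclocks without loss
     */
    static double usablePowerPerTick(double recipePowerGUt, double suppliedGUt,
                                     boolean losslessOverclocking) {
        if (suppliedGUt <= recipePowerGUt) {
            // No overclock: the recipe just runs at its listed rate.
            return Math.min(suppliedGUt, recipePowerGUt);
        }
        if (losslessOverclocking) {
            // My understanding: these machines turn the whole surplus
            // into speed, so nothing is wasted.
            return suppliedGUt;
        }
        // Regular machine: surplus beyond some cap is wasted as loss.
        double usableCap = recipePowerGUt * SOME_OVERCLOCK_LIMIT;
        return Math.min(suppliedGUt, usableCap);
    }
}
```

Is that roughly the shape of it, and if so, what is the real cutoff?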