Recently I picked up a Dell PowerEdge 2950, and on initial power testing it didn't seem too bad, with a peak of 350 W under stress.
However, now with all 6 hard drive bays and all 8 RAM slots full, it's measuring around 260 W at idle and 390 W at peak.
The peak wattage can't be dropped much unless I downgraded the CPUs or switched to SSDs; the SSD upgrade would cost way too much, and the CPU downgrade might not even be possible.
So I started looking at other methods to save power!
To begin with, after a bit of searching I found out about powertop. I installed and ran it, and it seems to configure itself automatically. One blog post I saw recommended adding it as a service, so I did.
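For reference, a minimal sketch of what that service setup can look like; the unit name, file path, and powertop binary location here are my choices, not from the post, so adjust them for your distro:

```shell
# Apply powertop's recommended tunables once, right now
powertop --auto-tune

# Install a oneshot systemd unit so the tunables are re-applied at every boot
# (run as root; /usr/sbin/powertop is where Debian-family distros put it)
tee /etc/systemd/system/powertop.service > /dev/null <<'EOF'
[Unit]
Description=Apply powertop auto-tune settings

[Service]
Type=oneshot
ExecStart=/usr/sbin/powertop --auto-tune

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable powertop.service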
The next step was to look at CPU frequency scaling. If you run lscpu on a Linux box you should see the CPU MHz fluctuate; on the computer I'm typing this on it goes up to 4.5 GHz, and a few checks showed values between 1.8 GHz and 3 GHz. Perfect.
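A quick way to watch this for yourself (a sketch; reading the live clocks out of /proc/cpuinfo, which reports a "cpu MHz" line per core on x86):

```shell
# Take a few samples of the live per-core clock speed; on a lightly loaded
# machine the values should drop well below the rated maximum as the
# frequency governor scales the cores down
for i in 1 2 3; do
    grep "cpu MHz" /proc/cpuinfo | head -n 2
    sleep 1
done
```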
However, on the server the lowest I saw was 2 GHz, which is only a 1 GHz decrease from the 3 GHz it normally runs at. As the load average is usually under 2-3, there's room to scale it down further.
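The knob for that is the cpufreq scaling governor. A sketch, assuming the kernel exposes the cpufreq sysfs interface (it may not on older kernels, or if the BIOS has power management disabled):

```shell
# Show which governor the first core is using and what's available
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# Ask every core to prefer the lowest frequency (run as root); with the
# acpi-cpufreq driver on CPUs of this era, "powersave" pins the minimum clock
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo powersave > "$gov"
done
```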
Finally, the CPUs. Interestingly, Intel released two CPUs with virtually identical specifications, the X5450 and the E5450, but the E5450 is rated 40 W lower per CPU! With two sockets, that's a total saving of 80 W on paper!
Furthermore, I hadn't measured the impact of the upgrade from the 4 GB the server came with to the 28 GB I upgraded it to (it was going to be 32 GB, but the seller got two sticks wrong).
(Power usage measured with my Metrel DeltaPAT, with only one power supply active.)
The E5450s arrived, and after updating the BIOS (no thanks to the lack of decent Linux tools for servers) the draw dropped by a grand total of...
around only 10 W at idle. It seems the 40 W difference per CPU mostly shows up under load, which the server isn't most of the time, but it has still made about a 10 to 20p difference per day.
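As a sanity check on that figure, a back-of-the-envelope calculation; the ~15 W average saving and 30p/kWh unit price here are my assumptions for illustration, not measured values:

```shell
# Daily saving in pence: watts saved -> kWh per day -> pence at 30p/kWh
awk 'BEGIN {
    watts = 15; price_per_kwh = 0.30
    kwh_per_day = watts * 24 / 1000
    printf "%.1fp/day\n", kwh_per_day * price_per_kwh * 100
}'
# -> 10.8p/day, which sits inside the 10 to 20p range above
```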
That's it for now!