load average value from uptime command and r value from vmstat command
Hi,
I'm examining the performance of one of our production servers and have found that the load average from the uptime command and the kthr/r column from the vmstat command report very different values:
# uptime
1:27pm up 117 day(s), 2 min(s), 15 users, load average: 8.04, 8.27, 8.28
# vmstat 1 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr m0 m1 m3 m4 in sy cs us sy id
0 0 17 54000456 24558672 47 588 37 4 9 0 166 1 1 1 0 3972 20938 3329 22 2 76
0 0 32 54514264 24972728 7 19 0 0 0 0 0 0 0 0 0 13423 31970 8258 55 3 42
0 0 32 54514208 24972728 0 3 0 0 0 0 0 0 0 0 0 11483 29839 7322 49 3 48
0 0 32 54514192 24972712 0 0 0 0 0 0 0 0 0 0 0 11755 27792 7234 47 3 50
0 0 32 54514192 24972712 0 0 0 0 0 0 0 0 0 1 0 12987 44258 7758 49 5 46
According to the man pages, the load average reported by uptime is "the average number of jobs in the run queue over the last 1, 5 and 15 minutes", while kthr/r in vmstat is "the number of kernel threads in run queue". Do these two refer to the same thing? If yes, why do they report such different values? If not, what exactly is the difference between them?
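For reference, I also tried reading the raw counters with kstat, to rule out rounding in uptime's output. My assumption (which I have not confirmed against the source) is that the load averages live in the unix:0:system_misc kstat as avenrun_1min, avenrun_5min and avenrun_15min, stored as fixed-point values scaled by 256, so dividing by 256 should reproduce the uptime figures:

# kstat -p unix:0:system_misc:avenrun_1min | awk '{ printf "%.2f\n", $2 / 256 }'
# kstat -p unix:0:system_misc:avenrun_5min | awk '{ printf "%.2f\n", $2 / 256 }'
# kstat -p unix:0:system_misc:avenrun_15min | awk '{ printf "%.2f\n", $2 / 256 }'

If those kstat names are right, seeing the scaled averages stay around 8 while vmstat samples r as 0 at each interval would at least confirm that the two tools are genuinely counting different things rather than one of them misreporting.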
Many thanks!
Alex