Channel: Windows Performance Toolkit (WPT) v5 forum

Differences in CPU usage measured via Perfmon vs. xperf/ETW CPU sampling


Hoping someone out there can help me with this. I've been looking around but can't find a specific answer on how the \Processor(_Total)\% Processor Time counter is calculated, and why it might differ from what I observe in CPU samples from Event Tracing for Windows (ETW) logs.

Here's my scenario: I recently ran a load test against a service and collected data about it in two ways:

1) I collected various performance counters throughout the perf run, including \Processor(_Total)\% Processor Time, to measure the relative load on the CPU (a rough sketch of this polling follows the list).

2) I collected an ETW CPU stack trace on the box for about 60 seconds of the run using PerfView (again, a rough sketch of the collection is below).
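For #1, here's a minimal sketch of the kind of counter polling I mean. It isn't my actual harness: it assumes pywin32's PDH bindings, and the 1-second interval and loop count are just illustrative, but the counter path is the real one from above.

```python
# Minimal sketch: polling \Processor(_Total)\% Processor Time via PDH.
# Assumes pywin32 (pip install pywin32); the interval and loop count are
# illustrative, not what my real collection used.
import time
import win32pdh

query = win32pdh.OpenQuery()
counter = win32pdh.AddCounter(query, r"\Processor(_Total)\% Processor Time")

win32pdh.CollectQueryData(query)  # prime the query; rate counters need two snapshots
for _ in range(10):
    time.sleep(1)
    win32pdh.CollectQueryData(query)
    _, value = win32pdh.GetFormattedCounterValue(counter, win32pdh.PDH_FMT_DOUBLE)
    print(f"% Processor Time: {value:.1f}")

win32pdh.CloseQuery(query)
```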
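And for #2, the collection was along these lines. I'm wrapping PerfView's command line in Python just to keep these snippets in one language; the /MaxCollectSec, /NoGui, and /AcceptEULA switches reflect my understanding of PerfView's CLI, and the data-file name is arbitrary.

```python
# Sketch: a ~60-second CPU sampling collection driven through PerfView's CLI.
# Run from an elevated prompt; switch names are my best understanding of
# PerfView's options, and the output file name is just an example.
import subprocess

subprocess.run([
    "PerfView.exe", "collect",
    "/MaxCollectSec:60",        # stop collecting after roughly 60 seconds
    "/NoGui",                   # don't open the UI
    "/AcceptEULA",
    "/DataFile:cpu_trace.etl",  # example output name
], check=True)
```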

I noticed something interesting when I compared the two datasets.

In the data from #1 that coincided with the window where I collected #2, the processor looked very lightly loaded: the average value of the % Processor Time counter was around 7%. However, in the ETW CPU stack trace, the % Weight of the CPU samples landing in the Idle process was only around 60%, which seems to imply that actual CPU usage was around 40% (100% minus the Idle weight).

I thought the % Processor Time performance counter was calculated by taking the percentage of CPU time attributed to the Idle process and subtracting it from 100%. But if that's the case, why does applying the same algorithm to the ETW CPU samples lead to such a different result?
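To make the comparison concrete, here's how I'd express the two calculations I assumed were equivalent. The numbers are illustrative stand-ins for my run, and the function names are mine, not anything from Perfmon or WPT.

```python
# The two calculations I assumed were equivalent, side by side.
# Inputs are illustrative; for Perfmon, idle time would come from successive
# snapshots of the Idle thread's accumulated CPU time (with _Total, averaged
# across processors, so treat these as per-processor-normalized numbers).

def counter_style(idle_time_s: float, elapsed_s: float) -> float:
    """% Processor Time as I understand it: 100% minus Idle's share of time."""
    return 100.0 * (1.0 - idle_time_s / elapsed_s)

def sample_style(idle_weight_pct: float) -> float:
    """The same idea applied to ETW samples: 100% minus Idle's % Weight."""
    return 100.0 - idle_weight_pct

print(counter_style(idle_time_s=55.8, elapsed_s=60.0))  # ~7%, what the counter showed
print(sample_style(idle_weight_pct=60.0))               # ~40%, what ETW implies
```

If both really reduce to "100% minus Idle", those two prints should roughly agree over the same window, and in my data they clearly don't.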

