From: dongsheng

The current implementation mistakenly limits the width of fixed counters
to the width of GP counters. Correct the logic to ensure fixed counters
are properly masked according to their own width. Opportunistically
refine the GP counter bitwidth processing code.

Signed-off-by: dongsheng
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
Tested-by: Yi Lai
[sean: keep measure_for_overflow() for fixed counter (see commit 7ec3b67a)]
Signed-off-by: Sean Christopherson
---
 x86/pmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index bd16211d..96b76d04 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -547,19 +547,19 @@ static void check_counter_overflow(void)
 		uint64_t status;
 		int idx;
 
-		cnt.count = overflow_preset;
-		if (pmu_use_full_writes())
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
-
 		if (i == pmu.nr_gp_counters) {
 			if (!pmu.is_intel)
 				break;
 
 			cnt.ctr = fixed_events[0].unit_sel;
 			cnt.count = measure_for_overflow(&cnt);
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
+			cnt.count &= (1ull << pmu.fixed_counter_width) - 1;
 		} else {
 			cnt.ctr = MSR_GP_COUNTERx(i);
+
+			cnt.count = overflow_preset;
+			if (pmu_use_full_writes())
+				cnt.count &= (1ull << pmu.gp_counter_width) - 1;
 		}
 
 		if (i % 2)
-- 
2.52.0.rc2.455.g230fcf2819-goog