From: dongsheng

The current implementation mistakenly limits the width of fixed counters
to the width of GP counters. Correct the logic to ensure fixed counters
are properly masked according to their own width.

Opportunistically refine the GP counter bit-width processing code.

Signed-off-by: dongsheng
Co-developed-by: Dapeng Mi
Signed-off-by: Dapeng Mi
Tested-by: Yi Lai
---
 x86/pmu.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/x86/pmu.c b/x86/pmu.c
index 04946d10..44c728a5 100644
--- a/x86/pmu.c
+++ b/x86/pmu.c
@@ -556,18 +556,16 @@ static void check_counter_overflow(void)
 		int idx;
 
 		cnt.count = overflow_preset;
-		if (pmu_use_full_writes())
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
-
 		if (i == pmu.nr_gp_counters) {
 			if (!pmu.is_intel)
 				break;
 			cnt.ctr = fixed_events[0].unit_sel;
-			cnt.count = measure_for_overflow(&cnt);
-			cnt.count &= (1ull << pmu.gp_counter_width) - 1;
+			cnt.count &= (1ull << pmu.fixed_counter_width) - 1;
 		} else {
 			cnt.ctr = MSR_GP_COUNTERx(i);
+			if (pmu_use_full_writes())
+				cnt.count &= (1ull << pmu.gp_counter_width) - 1;
 		}
 
 		if (i % 2)
-- 
2.34.1