From: Bobby Eshleman

binding->dev is protected against concurrent writes on the write side in
mp_dmabuf_devmem_uninstall(), but net_devmem_get_binding() performs a
concurrent bare read of it. Wrap the accesses in a READ_ONCE()/WRITE_ONCE()
pair so that compiler optimizations cannot transform the racy load and
store (tearing, refetching, invented accesses) in unforeseen ways.

Fixes: bd61848900bf ("net: devmem: Implement TX path")
Signed-off-by: Bobby Eshleman
---
Note1: This didn't crop up as a concrete error; it is just something that
didn't seem to follow my understanding of memory-barriers.txt, as frail
and feeble as that understanding may be.

Note2: the "Fixes" commit referenced above is the first one to introduce
bare accesses to binding->dev, but the later commit 6a2108c78069 ("net:
devmem: refresh devmem TX dst in case of route invalidation") carried
them forward. I wasn't sure which was the better target for the "Fixes"
tag.
---
 net/core/devmem.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/net/core/devmem.c b/net/core/devmem.c
index 63f093f7d2b2..cb989949d43c 100644
--- a/net/core/devmem.c
+++ b/net/core/devmem.c
@@ -398,7 +398,8 @@ struct net_devmem_dmabuf_binding *net_devmem_get_binding(struct sock *sk,
 	 * net_device.
 	 */
 	dst_dev = dst_dev_rcu(dst);
-	if (unlikely(!dst_dev) || unlikely(dst_dev != binding->dev)) {
+	if (unlikely(!dst_dev) ||
+	    unlikely(dst_dev != READ_ONCE(binding->dev))) {
 		err = -ENODEV;
 		goto out_unlock;
 	}
@@ -515,7 +516,7 @@ static void mp_dmabuf_devmem_uninstall(void *mp_priv,
 		xa_erase(&binding->bound_rxqs, xa_idx);
 		if (xa_empty(&binding->bound_rxqs)) {
 			mutex_lock(&binding->lock);
-			binding->dev = NULL;
+			WRITE_ONCE(binding->dev, NULL);
 			mutex_unlock(&binding->lock);
 		}
 		break;

---
base-commit: d4f687fbbce45b5e88438e89b5e26c0c15847992
change-id: 20260223-devmem-membar-fix-3a5cd9618f8a

Best regards,
--
Bobby Eshleman