sock_ops_convert_ctx_access() reads rtt_min without the is_locked_tcp_sock
guard used for every other tcp_sock field. On request_sock-backed sock_ops
callbacks, sk points at a tcp_request_sock and the converted load reads
past the end of the allocation. Use SOCK_OPS_LOAD_TCP_SOCK_FIELD() so the
load is guarded, and compute the offset via offsetof(struct minmax_sample, v).

Found via AST-based call-graph analysis using sqry.

Fixes: 44f0e43037d3 ("bpf: Add support for reading sk_state and more")
Cc: stable@vger.kernel.org
Signed-off-by: Werner Kasselman
---
 net/core/filter.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index 385fc3e9eb4a..88fa290caeaa 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -10836,14 +10836,12 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 			      sizeof(struct minmax));
 		BUILD_BUG_ON(sizeof(struct minmax) <
 			     sizeof(struct minmax_sample));
+		BUILD_BUG_ON(offsetof(struct tcp_sock, rtt_min) +
+			     offsetof(struct minmax_sample, v) > S16_MAX);
 
-		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(
-						struct bpf_sock_ops_kern, sk),
-				      si->dst_reg, si->src_reg,
-				      offsetof(struct bpf_sock_ops_kern, sk));
-		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
-				      offsetof(struct tcp_sock, rtt_min) +
-				      sizeof_field(struct minmax_sample, t));
+		off = offsetof(struct tcp_sock, rtt_min) +
+		      offsetof(struct minmax_sample, v);
+		SOCK_OPS_LOAD_TCP_SOCK_FIELD(BPF_W, off);
 		break;
 
 	case offsetof(struct bpf_sock_ops, bpf_sock_ops_cb_flags):
-- 
2.43.0