When iwlwifi firmware crashes (e.g., NMI_INTERRUPT_UNKNOWN on Intel
BE201/Wi-Fi 7), iwl_mld_nic_error() sets mld->fw_status.in_hw_restart
to true. However, iwl_mld_tx_from_txq() does not check this flag before
dequeuing frames from mac80211 and pushing them to the transport layer.
Since the firmware is dead, iwl_trans_tx() returns -EIO for each frame,
which is then freed immediately. Under high-throughput conditions
(e.g., Tailscale UDP traffic or active SSH sessions), this creates a
tight dequeue-send-fail-free loop that wastes CPU cycles and generates
rapid skb allocation churn, leading to memory pressure from slab
fragmentation.

The RX path already has this guard (iwl_mld_rx_mpdu() checks
in_hw_restart at rx.c:1906), and so does the TXQ allocation worker
(iwl_mld_add_txqs_wk() at tx.c:156). Add the same guard to
iwl_mld_tx_from_txq() to stop all TX during firmware restart. Frames
left in mac80211's TXQs are naturally drained after restart completes,
when queue reallocation triggers iwl_mld_tx_from_txq() via
iwl_mld_add_txq_list(), or when new upper-layer traffic invokes
wake_tx_queue.

Tested on an ASUS Zenbook 14 UX3405CA with Intel BE201 (Wi-Fi 7) on
kernel 6.19.5, where the firmware crashes approximately every 10-15
minutes under Tailscale traffic.

Fixes: d1e879ec600f ("wifi: iwlwifi: add iwlmld sub-driver")
Cc: stable@vger.kernel.org
Signed-off-by: Sheroz Juraev
---
 drivers/net/wireless/intel/iwlwifi/mld/tx.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/wireless/intel/iwlwifi/mld/tx.c b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
index 3b4b575aa..c5eb36652 100644
--- a/drivers/net/wireless/intel/iwlwifi/mld/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mld/tx.c
@@ -959,6 +959,16 @@ void iwl_mld_tx_from_txq(struct iwl_mld *mld, struct ieee80211_txq *txq)
 	struct sk_buff *skb = NULL;
 	u8 zero_addr[ETH_ALEN] = {};
 
+	/*
+	 * Don't transmit during firmware restart. The firmware is dead,
+	 * so iwl_trans_tx() would return -EIO for each frame. Avoid the
+	 * overhead of dequeuing from mac80211 only to immediately free
+	 * the skbs, and the potential memory pressure from rapid skb
+	 * allocation churn during high-throughput restart scenarios.
+	 */
+	if (unlikely(mld->fw_status.in_hw_restart))
+		return;
+
 	/*
 	 * No need for threads to be pending here, they can leave the first
 	 * taker all the work.
-- 
2.47.2