From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 1A01FA04C8;
	Fri, 18 Sep 2020 13:32:38 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 0F7061D986;
	Fri, 18 Sep 2020 13:32:37 +0200 (CEST)
Received: from tc-sys-mailedm02.tc.baidu.com (mx135-tc.baidu.com [61.135.168.135])
	by dpdk.org (Postfix) with ESMTP id 69F431D953;
	Fri, 18 Sep 2020 13:32:35 +0200 (CEST)
Received: from localhost (cp01-cos-dev01.cp01.baidu.com [10.92.119.46])
	by tc-sys-mailedm02.tc.baidu.com (Postfix) with ESMTP id 129E711C0039;
	Fri, 18 Sep 2020 19:32:32 +0800 (CST)
From: Li RongQing
To: dev@dpdk.org, stable@dpdk.org, ciara.loftus@intel.com
Date: Fri, 18 Sep 2020 19:32:31 +0800
Message-Id: <1600428751-30105-1-git-send-email-lirongqing@baidu.com>
X-Mailer: git-send-email 1.7.1
Subject: [dpdk-dev] [PATCH][V2] net/af_xdp: avoid deadlock due to empty fill queue
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

When receiving packets, reserving the fill queue can fail because the
buffer ring is shared between Tx and Rx and may be temporarily
unavailable. Eventually both the fill queue and the Rx queue end up
empty: the kernel side cannot receive packets because the fill queue is
empty, and DPDK cannot reserve the fill queue because it has no packets
to receive, so the two sides deadlock.

Move the fill queue reservation before xsk_ring_cons__peek to fix it.

Acked-by: Ciara Loftus
Signed-off-by: Li RongQing
Signed-off-by: Dongsheng Rong
Cc: stable@dpdk.org
---
diff with v1:
 - cc stable@dpdk.org
 - change subject prefix "af_xdp:" to "net/af_xdp:"

 drivers/net/af_xdp/rte_eth_af_xdp.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 7ce4ad04a..2dc9cab27 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -304,6 +304,10 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint32_t free_thresh = fq->size >> 1;
 	struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
 
+	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
+		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
+
+
 	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
 		return 0;
 
@@ -317,9 +321,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		goto out;
 	}
 
-	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
-		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
-
 	for (i = 0; i < rcvd; i++) {
 		const struct xdp_desc *desc;
 		uint64_t addr;
-- 
2.16.2
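
For readers following the fix outside the driver source, a minimal self-contained
C sketch of the reordering is below. The types and helpers (fill_queue, rx_ring,
fill_queue_has_room, refill, peek_rx, rx_burst) are illustrative stand-ins rather
than the driver's or libbpf's API; only the ordering itself (hand buffers back to
the kernel's fill queue before peeking the Rx ring) reflects the patch.

#include <stdint.h>
#include <stdbool.h>

#define RX_BATCH_SIZE 32

/* hypothetical stand-ins for the AF_XDP fill queue and Rx ring */
struct fill_queue { uint32_t free_slots; };
struct rx_ring    { uint32_t ready; };

static bool fill_queue_has_room(const struct fill_queue *fq, uint32_t thresh)
{
	return fq->free_slots >= thresh;
}

static void refill(struct fill_queue *fq, uint32_t n)
{
	/* hand n empty buffers back to the kernel so it can keep receiving */
	if (fq->free_slots >= n)
		fq->free_slots -= n;
}

static uint32_t peek_rx(struct rx_ring *rx, uint32_t max)
{
	uint32_t n = rx->ready < max ? rx->ready : max;

	rx->ready -= n;
	return n;
}

static uint16_t rx_burst(struct fill_queue *fq, struct rx_ring *rx,
			 uint16_t nb_pkts)
{
	uint32_t free_thresh = 64;	/* half the fill queue in the driver */

	/*
	 * Refill first: even if the Rx ring turns out to be empty, the
	 * kernel now has buffers to receive into, so progress remains
	 * possible and the empty-fill-queue/empty-Rx-queue deadlock
	 * cannot occur.
	 */
	if (fill_queue_has_room(fq, free_thresh))
		refill(fq, RX_BATCH_SIZE);

	/* only then look for completed packets */
	return (uint16_t)peek_rx(rx, nb_pkts);
}

int main(void)
{
	struct fill_queue fq = { .free_slots = 128 };
	struct rx_ring rx = { .ready = 0 };

	/* Rx ring is empty, but buffers were still handed to the kernel,
	 * so a later call can find packets instead of deadlocking. */
	return rx_burst(&fq, &rx, RX_BATCH_SIZE) == 0 ? 0 : 1;
}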