From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Loftus, Ciara"
To: Li RongQing, "dev@dpdk.org"
CC: "stable@dpdk.org"
Date: Fri, 18 Sep 2020 09:27:17 +0000
Message-ID:
 <3450466f3f104aa29cdfc3a6c7828050@intel.com>
In-Reply-To: <1600330014-22019-1-git-send-email-lirongqing@baidu.com>
Subject: Re: [dpdk-dev] [PATCH] af_xdp: avoid deadlock due to empty fill queue

> When receiving packets, it is possible to fail to reserve the
> fill queue, since the buffer ring is shared between tx and rx
> and may be temporarily unavailable. Eventually both the fill
> queue and the rx queue become empty.
>
> The kernel side will then be unable to receive packets due to
> the empty fill queue, and DPDK will be unable to replenish the
> fill queue because DPDK has no packets to receive, so a
> deadlock occurs.
>
> Fix this by moving the fill queue reservation before
> xsk_ring_cons__peek.
>
> Signed-off-by: Li RongQing

Thanks for the fix. I tested and saw no significant performance drop.

Minor: the first line of the commit should read "net/af_xdp: ....".

Acked-by: Ciara Loftus

CC-ing stable as I think this fix should be considered for inclusion.
Thanks,
Ciara

> ---
>  drivers/net/af_xdp/rte_eth_af_xdp.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
> index 7ce4ad04a..2dc9cab27 100644
> --- a/drivers/net/af_xdp/rte_eth_af_xdp.c
> +++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
> @@ -304,6 +304,10 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> 	uint32_t free_thresh = fq->size >> 1;
> 	struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
>
> +	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
> +		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
> +
> +
> 	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
> 		return 0;
>
> @@ -317,9 +321,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
> 		goto out;
> 	}
>
> -	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
> -		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
> -
> 	for (i = 0; i < rcvd; i++) {
> 		const struct xdp_desc *desc;
> 		uint64_t addr;
> --
> 2.16.2