From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ferruh Yigit
To: wangyunjian, dev@dpdk.org
Cc: keith.wiles@intel.com, jerry.lilijun@huawei.com, xudingke@huawei.com,
 stable@dpdk.org
References: <1586233383-1084-1-git-send-email-wangyunjian@huawei.com>
 <7da1388e-c4c6-27d5-f038-0526a39d9a8c@intel.com>
Date: Tue, 7 Apr 2020 16:58:02 +0100
In-Reply-To: <7da1388e-c4c6-27d5-f038-0526a39d9a8c@intel.com>
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH v3 3/5] net/tap: fix check for mbuf's nb_segs failure

On 4/7/2020 4:15 PM, Ferruh Yigit wrote:
> On 4/7/2020 5:23 AM, wangyunjian wrote:
>> From: Yunjian Wang
>>
>> Currently rxq->pool is a chain of concatenated mbufs, but its nb_segs
>> is 1. When sanity checks are done on that mbuf, they fail.
>
> +1, 'rxq->pool' seems to be the Rx ring representation as linked mbufs,
> and the empty ones have 'nb_segs' set to 1.
>
>>
>> Fixes: 0781f5762cfe ("net/tap: support segmented mbufs")
>> CC: stable@dpdk.org
>>
>> Signed-off-by: Yunjian Wang
>> ---
>>  drivers/net/tap/rte_eth_tap.c | 27 ++++++++++++++++++++++-----
>>  1 file changed, 22 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
>> index a9ba0ca68..703fcceb9 100644
>> --- a/drivers/net/tap/rte_eth_tap.c
>> +++ b/drivers/net/tap/rte_eth_tap.c
>> @@ -339,6 +339,23 @@ tap_rx_offload_get_queue_capa(void)
>>  	       DEV_RX_OFFLOAD_TCP_CKSUM;
>>  }
>>
>> +static void
>> +tap_rxq_pool_free(struct rte_mbuf *pool)
>> +{
>> +	struct rte_mbuf *mbuf = pool;
>> +	uint16_t nb_segs = 1;
>> +
>> +	if (mbuf == NULL)
>> +		return;
>> +
>> +	while (mbuf->next) {
>> +		mbuf = mbuf->next;
>> +		nb_segs++;
>> +	}
>> +	pool->nb_segs = nb_segs;
>> +	rte_pktmbuf_free(pool);
>> +}
>
> Since you are already iterating the chain, why not free each segment
> immediately instead of calculating 'nb_segs' and making the API walk the
> chain again? What about the following:
>
> tap_rxq_pool_free(struct rte_mbuf *pool)
> {
> 	struct rte_mbuf *next;
> 	while (pool) {
> 		next = pool->next;
> 		rte_pktmbuf_free(pool);
> 		pool = next;
> 	}
> }

Ignore this please, this may still trip the mbuf sanity check, so OK to
your usage.
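To spell out the sanity check part: if I remember the mbuf code right, with
mbuf debug (RTE_LIBRTE_MBUF_DEBUG) enabled, rte_pktmbuf_free() checks the
head mbuf before walking the chain, and that check compares 'nb_segs'
against the number of linked segments. A simplified sketch of the failing
condition, not the exact mbuf library code ('sketch_mbuf_header_check' is
a made-up name for illustration):

#include <rte_mbuf.h>
#include <rte_debug.h>

/* Illustrative only: roughly what the header-level sanity check does.
 * For rxq->pool the head still reports nb_segs == 1 while ->next holds a
 * whole chain, so the count never balances and the check panics. */
static void
sketch_mbuf_header_check(const struct rte_mbuf *m)
{
	uint16_t nb_segs = m->nb_segs;		/* == 1 for rxq->pool */

	while (m != NULL) {
		nb_segs--;			/* one per real segment */
		m = m->next;
	}
	if (nb_segs != 0)
		rte_panic("bad nb_segs\n");	/* the reported failure */
}

This is also why the per-segment loop above can still complain: each call
sees a head whose 'nb_segs' does not match what '->next' points to.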
>
>> +
>>
>>  /* Callback to handle the rx burst of packets to the correct interface and
>>   * file descriptor(s) in a multi-queue setup.
>>   */
>> @@ -389,7 +406,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>>  			goto end;
>>
>>  		seg->next = NULL;
>> -		rte_pktmbuf_free(mbuf);
>> +		tap_rxq_pool_free(mbuf);
>
> As far as I can see 'mbuf' should already have the correct 'nb_segs' value
> here, so this path can continue to use 'rte_pktmbuf_free()'. If you can
> observe the problem here, can you please try that?
>
>>
>>  		goto end;
>>  	}
>> @@ -1033,7 +1050,7 @@ tap_dev_close(struct rte_eth_dev *dev)
>>  		rxq = &internals->rxq[i];
>>  		close(process_private->rxq_fds[i]);
>>  		process_private->rxq_fds[i] = -1;
>> -		rte_pktmbuf_free(rxq->pool);
>> +		tap_rxq_pool_free(rxq->pool);
>>  		rte_free(rxq->iovecs);
>>  		rxq->pool = NULL;
>>  		rxq->iovecs = NULL;
>> @@ -1072,7 +1089,7 @@ tap_rx_queue_release(void *queue)
>>  	if (process_private->rxq_fds[rxq->queue_id] > 0) {
>>  		close(process_private->rxq_fds[rxq->queue_id]);
>>  		process_private->rxq_fds[rxq->queue_id] = -1;
>> -		rte_pktmbuf_free(rxq->pool);
>> +		tap_rxq_pool_free(rxq->pool);
>>  		rte_free(rxq->iovecs);
>>  		rxq->pool = NULL;
>>  		rxq->iovecs = NULL;
>> @@ -1480,7 +1497,7 @@ tap_rx_queue_setup(struct rte_eth_dev *dev,
>>  	return 0;
>>
>>  error:
>> -	rte_pktmbuf_free(rxq->pool);
>> +	tap_rxq_pool_free(rxq->pool);
>>  	rxq->pool = NULL;
>>  	rte_free(rxq->iovecs);
>>  	rxq->iovecs = NULL;
>> @@ -2435,7 +2452,7 @@ rte_pmd_tap_remove(struct rte_vdev_device *dev)
>>  		rxq = &internals->rxq[i];
>>  		close(process_private->rxq_fds[i]);
>>  		process_private->rxq_fds[i] = -1;
>> -		rte_pktmbuf_free(rxq->pool);
>> +		tap_rxq_pool_free(rxq->pool);
>>  		rte_free(rxq->iovecs);
>>  		rxq->pool = NULL;
>>  		rxq->iovecs = NULL;
>>
>
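For anyone who wants to reproduce this outside the tap PMD: a minimal,
hypothetical sketch (the 'repro()' helper is made up; it assumes a mempool
created with rte_pktmbuf_pool_create()) that builds a chain the same way
the tap Rx ring does, linked via '->next' with the head's 'nb_segs' left
at 1:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical reproduction sketch: chain two mbufs the way the tap Rx
 * ring does and free them. With mbuf debug enabled, the plain
 * rte_pktmbuf_free() call should hit the nb_segs sanity failure, while
 * tap_rxq_pool_free() from this patch fixes nb_segs first so the same
 * free goes through. */
static void
repro(struct rte_mempool *mp)
{
	struct rte_mbuf *head = rte_pktmbuf_alloc(mp);
	struct rte_mbuf *seg = rte_pktmbuf_alloc(mp);

	if (head == NULL || seg == NULL) {
		rte_pktmbuf_free(head);		/* free(NULL) is a no-op */
		rte_pktmbuf_free(seg);
		return;
	}

	head->next = seg;		/* chained like rxq->pool ... */
	/* ... but head->nb_segs is still 1 */

	rte_pktmbuf_free(head);		/* debug build: sanity check fails here */
}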