From: Yuan Wang <yuanx.wang@intel.com>
To: luca.boccassi@gmail.com, stable@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, jiayu.hu@intel.com,
 cheng1.jiang@intel.com, weix.ling@intel.com, Yuan Wang <yuanx.wang@intel.com>
Subject: [PATCH 21.11] examples/vhost: fix retry logic on Rx path
Date: Sat, 9 Jul 2022 01:14:35 +0800
Message-Id: <20220708171435.60845-1-yuanx.wang@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: patches for DPDK stable branches <stable.dpdk.org>

[ upstream commit 1907ce4baec392a750fbeba5e946920b2f00ae73 ]

drain_eth_rx() uses rte_vhost_avail_entries() to calculate the number
of available entries and decide whether a retry is required. However,
that function only works with split rings; on packed rings it returns
a wrong value, causing unnecessary retries and a significant
performance penalty.

This patch fixes that by using the difference between the Rx burst
size and the enqueued count as the retry condition.
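To illustrate the shape of the fix: the sketch below is a standalone
simulation of the new retry condition, not code from the patch.
mock_enqueue_burst() is an invented stand-in for
rte_vhost_enqueue_burst(), and all slot counts are made up; the point
is that the loop keys off the enqueue shortfall, which is valid for
any ring layout, instead of rte_vhost_avail_entries(), which is only
meaningful for split rings.

/*
 * Minimal standalone sketch of the retry-on-shortfall pattern
 * (illustration only; mock_enqueue_burst() and all numbers here
 * are hypothetical stand-ins, not DPDK API).
 */
#include <stdint.h>
#include <stdio.h>

#define BURST_RX_RETRY_NUM 4	/* plays the role of burst_rx_retry_num */

static uint16_t ring_free = 20;	/* pretend the ring has 20 free slots */

/* Stand-in for rte_vhost_enqueue_burst(): takes what fits, no more. */
static uint16_t
mock_enqueue_burst(uint16_t count)
{
	uint16_t done = count < ring_free ? count : ring_free;

	ring_free -= done;
	ring_free += 8;		/* pretend the guest frees some descriptors */
	return done;
}

int
main(void)
{
	uint16_t rx_count = 32;	/* burst received from the NIC */
	uint16_t enqueue_count;
	uint32_t retry = 0;

	/*
	 * The retry condition is the Rx/enqueue difference itself, so it
	 * holds for split and packed rings alike; no need to ask the ring
	 * how many free entries it has.
	 */
	enqueue_count = mock_enqueue_burst(rx_count);
	while (enqueue_count < rx_count && retry++ < BURST_RX_RETRY_NUM)
		enqueue_count += mock_enqueue_burst(rx_count - enqueue_count);

	printf("enqueued %u of %u packets after %u retries\n",
	       enqueue_count, rx_count, retry);
	return 0;
}

Built with any C compiler, this reports all 32 packets enqueued after
two retries; the real patch additionally sleeps burst_rx_delay_time
microseconds between attempts.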
Fixes: be800696c26e ("examples/vhost: use burst enqueue and dequeue from lib")

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 examples/vhost/main.c | 79 ++++++++++++++++++-------------------------
 1 file changed, 33 insertions(+), 46 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 84844da68f..f9e932061f 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -900,31 +900,43 @@ sync_virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,
 	}
 }
 
-static __rte_always_inline void
-drain_vhost(struct vhost_dev *vdev)
+static __rte_always_inline uint16_t
+enqueue_pkts(struct vhost_dev *vdev, struct rte_mbuf **pkts, uint16_t rx_count)
 {
-	uint16_t ret;
-	uint32_t buff_idx = rte_lcore_id() * MAX_VHOST_DEVICE + vdev->vid;
-	uint16_t nr_xmit = vhost_txbuff[buff_idx]->len;
-	struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table;
+	uint16_t enqueue_count;
 
 	if (builtin_net_driver) {
-		ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
+		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ, pkts, rx_count);
 	} else if (async_vhost_driver) {
 		uint16_t enqueue_fail = 0;
 
 		complete_async_pkts(vdev);
-		ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit);
-		__atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
+		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
+					VIRTIO_RXQ, pkts, rx_count);
+		__atomic_add_fetch(&vdev->pkts_inflight, enqueue_count, __ATOMIC_SEQ_CST);
 
-		enqueue_fail = nr_xmit - ret;
+		enqueue_fail = rx_count - enqueue_count;
 		if (enqueue_fail)
-			free_pkts(&m[ret], nr_xmit - ret);
+			free_pkts(&pkts[enqueue_count], enqueue_fail);
+
 	} else {
-		ret = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
-						m, nr_xmit);
+		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
+						pkts, rx_count);
 	}
 
+	return enqueue_count;
+}
+
+static __rte_always_inline void
+drain_vhost(struct vhost_dev *vdev)
+{
+	uint16_t ret;
+	uint32_t buff_idx = rte_lcore_id() * MAX_VHOST_DEVICE + vdev->vid;
+	uint16_t nr_xmit = vhost_txbuff[buff_idx]->len;
+	struct rte_mbuf **m = vhost_txbuff[buff_idx]->m_table;
+
+	ret = enqueue_pkts(vdev, m, nr_xmit);
+
 	if (enable_stats) {
 		__atomic_add_fetch(&vdev->stats.rx_total_atomic, nr_xmit,
 				__ATOMIC_SEQ_CST);
@@ -1217,44 +1229,19 @@ drain_eth_rx(struct vhost_dev *vdev)
 	if (!rx_count)
 		return;
 
-	/*
-	 * When "enable_retry" is set, here we wait and retry when there
-	 * is no enough free slots in the queue to hold @rx_count packets,
-	 * to diminish packet loss.
-	 */
-	if (enable_retry &&
-	    unlikely(rx_count > rte_vhost_avail_entries(vdev->vid,
-			VIRTIO_RXQ))) {
-		uint32_t retry;
+	enqueue_count = enqueue_pkts(vdev, pkts, rx_count);
 
-		for (retry = 0; retry < burst_rx_retry_num; retry++) {
+	/* Retry if necessary */
+	if (enable_retry && unlikely(enqueue_count < rx_count)) {
+		uint32_t retry = 0;
+
+		while (enqueue_count < rx_count && retry++ < burst_rx_retry_num) {
 			rte_delay_us(burst_rx_delay_time);
-			if (rx_count <= rte_vhost_avail_entries(vdev->vid,
-					VIRTIO_RXQ))
-				break;
+			enqueue_count += enqueue_pkts(vdev, &pkts[enqueue_count],
+							rx_count - enqueue_count);
 		}
 	}
 
-	if (builtin_net_driver) {
-		enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
-						pkts, rx_count);
-	} else if (async_vhost_driver) {
-		uint16_t enqueue_fail = 0;
-
-		complete_async_pkts(vdev);
-		enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
-					VIRTIO_RXQ, pkts, rx_count);
-		__atomic_add_fetch(&vdev->pkts_inflight, enqueue_count, __ATOMIC_SEQ_CST);
-
-		enqueue_fail = rx_count - enqueue_count;
-		if (enqueue_fail)
-			free_pkts(&pkts[enqueue_count], enqueue_fail);
-
-	} else {
-		enqueue_count = rte_vhost_enqueue_burst(vdev->vid, VIRTIO_RXQ,
-						pkts, rx_count);
-	}
-
 	if (enable_stats) {
 		__atomic_add_fetch(&vdev->stats.rx_total_atomic, rx_count,
 				__ATOMIC_SEQ_CST);
-- 
2.25.1