From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from dpdk.org (dpdk.org [92.243.14.124])
 by inbox.dpdk.org (Postfix) with ESMTP id 4B615A00C5;
 Fri,  8 May 2020 13:19:26 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
 by dpdk.org (Postfix) with ESMTP id C0EF31DA08;
 Fri,  8 May 2020 13:19:25 +0200 (CEST)
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by dpdk.org (Postfix) with ESMTP id 729101DA07;
 Fri,  8 May 2020 13:19:24 +0200 (CEST)
IronPort-SDR: fgkMuK13PyAtAQ7a2KA4ggBwqXFkOrt6hxWOAFjg53DSNT/ormHbQc+9jhQ63/SHH++pmSAMoH
 NP1ZtHyiTQhA==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 08 May 2020 04:19:23 -0700
IronPort-SDR: iUT5QsHoyV43fCyztG67CEoqSSLoCk4m4PPRGVB5eyQIEgZ9ZAiQPE7BeFu8NVLA2JmiNlnhJY
 ZyqLEouDFC+g==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,367,1583222400"; d="scan'208";a="249615017"
Received: from unknown (HELO localhost.localdomain) ([10.190.212.182])
 by orsmga007.jf.intel.com with ESMTP; 08 May 2020 04:19:21 -0700
From: Sivaprasad Tummala
To: Maxime Coquelin , Zhihong Wang , Xiaolong Ye
Cc: dev@dpdk.org, fbl@sysclose.org, stable@dpdk.org
Date: Fri, 8 May 2020 16:47:51 +0530
Message-Id: <20200508111751.82341-1-Sivaprasad.Tummala@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200504171118.93782-1-Sivaprasad.Tummala@intel.com>
References: <20200504171118.93782-1-Sivaprasad.Tummala@intel.com>
Subject: [dpdk-dev] [PATCH v3] vhost: fix mbuf allocation failures
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

vhost buffer allocation succeeds only for packets that fit into a linear
buffer. If it fails, the vhost library is expected to drop the current
packet and skip to the next one.

This patch fixes the error scenario by skipping to the next packet.
Note: drop counters are not currently supported.

Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
Cc: fbl@sysclose.org
Cc: stable@dpdk.org

v3:
* fixed review comments - Flavio Leitner

v2:
* fixed review comments - Maxime Coquelin
* fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
* fixed mbuf copy errors - Flavio Leitner

Signed-off-by: Sivaprasad Tummala
---
 lib/librte_vhost/virtio_net.c | 70 +++++++++++++++++++++++++++--------
 1 file changed, 55 insertions(+), 15 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1fc30c681..a85d77897 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1674,6 +1674,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
+	static bool allocerr_warned;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1737,13 +1739,35 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets when external
+			 * buffer allocation is not allowed and linear buffer
+			 * is required. Drop this packet.
+			 */
+			if (!allocerr_warned) {
+				VHOST_LOG_DATA(ERR,
+					"Failed mbuf alloc of size %d from %s on %s.\n",
+					buf_len, mbuf_pool->name, dev->ifname);
+				allocerr_warned = true;
+			}
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
+			if (!allocerr_warned) {
+				VHOST_LOG_DATA(ERR,
+					"Failed to copy desc to mbuf on %s.\n",
+					dev->ifname);
+				allocerr_warned = true;
+			}
+			dropped += 1;
+			i++;
 			break;
 		}
 
@@ -1753,6 +1777,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			zmbuf = get_zmbuf(vq);
 			if (!zmbuf) {
 				rte_pktmbuf_free(pkts[i]);
+				dropped += 1;
+				i++;
 				break;
 			}
 			zmbuf->mbuf = pkts[i];
@@ -1782,7 +1808,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int
@@ -1914,6 +1940,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 	uint32_t buf_len;
 	uint16_t nr_vec = 0;
 	int err;
+	static bool allocerr_warned;
 
 	if (unlikely(fill_vec_buf_packed(dev, vq,
 					 vq->last_avail_idx, desc_count,
@@ -1924,14 +1951,24 @@ vhost_dequeue_single_packed(struct virtio_net *dev,
 
 	*pkts = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
 	if (unlikely(*pkts == NULL)) {
-		VHOST_LOG_DATA(ERR,
-			"Failed to allocate memory for mbuf.\n");
+		if (!allocerr_warned) {
+			VHOST_LOG_DATA(ERR,
+				"Failed mbuf alloc of size %d from %s on %s.\n",
+				buf_len, mbuf_pool->name, dev->ifname);
+			allocerr_warned = true;
+		}
 		return -1;
 	}
 
 	err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, *pkts,
 				mbuf_pool);
 	if (unlikely(err)) {
+		if (!allocerr_warned) {
+			VHOST_LOG_DATA(ERR,
+				"Failed to copy desc to mbuf on %s.\n",
+				dev->ifname);
+			allocerr_warned = true;
+		}
 		rte_pktmbuf_free(*pkts);
 		return -1;
 	}
@@ -1946,21 +1983,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
 			    struct rte_mbuf **pkts)
 {
-	uint16_t buf_id, desc_count;
+	uint16_t buf_id, desc_count = 0;
+	int ret;
 
-	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
-					&desc_count))
-		return -1;
+	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
+					&desc_count);
 
-	if (virtio_net_is_inorder(dev))
-		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
-							   desc_count);
-	else
-		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+	if (likely(desc_count > 0)) {
+		if (virtio_net_is_inorder(dev))
+			vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
+								   desc_count);
+		else
+			vhost_shadow_dequeue_single_packed(vq, buf_id,
+							   desc_count);
 
-		vq_inc_last_avail_packed(vq, desc_count);
+		vq_inc_last_avail_packed(vq, desc_count);
+	}
 
-	return 0;
+	return ret;
}
 
 static __rte_always_inline int
-- 
2.17.1