From mboxrd@z Thu Jan 1 00:00:00 1970
From: Marvin Liu
To: stable@dpdk.org
Cc: ktraynor@redhat.com, Marvin Liu
Date: Mon, 16 Mar 2020 20:35:56 +0800
Message-Id: <20200316123556.33594-1-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-stable] [PATCH 18.11] net/virtio: fix mbuf data and packet length mismatch
List-Id: patches for DPDK stable branches

[ upstream commit 1ae55ad38e5e00b61704e4cb29037098b143688a ]

If the virtio-net header room is reserved with the function
rte_pktmbuf_prepend(), both the segment data length and the packet
length of the mbuf are increased.
The data length will then equal the descriptor length, while the packet
length should be decreased, since the virtio-net header is not counted
as part of the packet. This causes a mismatch in the mbuf structure.
Fix this issue by calculating the mbuf header address directly instead
of prepending.

Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")

Reported-by: Stephen Hemminger
Signed-off-by: Marvin Liu
Reviewed-by: Maxime Coquelin
---

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 306009d96..915674f31 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -413,8 +413,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 		dxp->cookie = (void *)cookies[i];
 		dxp->ndescs = 1;
 
-		hdr = (struct virtio_net_hdr *)
-			rte_pktmbuf_prepend(cookies[i], head_size);
+		hdr = rte_pktmbuf_mtod_offset(cookies[i],
+				struct virtio_net_hdr *, -head_size);
 		cookies[i]->pkt_len -= head_size;
 
 		/* if offload disabled, it is not zeroed below, do it now */
@@ -430,8 +430,9 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 
 		virtqueue_xmit_offload(hdr, cookies[i], vq->hw->has_tx_offload);
 
-		start_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
-		start_dp[idx].len   = cookies[i]->data_len;
+		start_dp[idx].addr =
+			VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq) - head_size;
+		start_dp[idx].len = cookies[i]->data_len + head_size;
 		start_dp[idx].flags = 0;
 
 		vq_update_avail_ring(vq, idx);
@@ -456,6 +457,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	uint16_t seg_num = cookie->nb_segs;
 	uint16_t head_idx, idx;
 	uint16_t head_size = vq->hw->vtnet_hdr_size;
+	bool prepend_header = false;
 	struct virtio_net_hdr *hdr;
 
 	head_idx = vq->vq_desc_head_idx;
@@ -471,13 +473,9 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 
 	if (can_push) {
 		/* prepend cannot fail, checked by caller */
-		hdr = (struct virtio_net_hdr *)
-			rte_pktmbuf_prepend(cookie, head_size);
-		/* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
-		 * which is wrong. Below subtract restores correct pkt size.
-		 */
-		cookie->pkt_len -= head_size;
-
+		hdr = rte_pktmbuf_mtod_offset(cookie, struct virtio_net_hdr *,
+			-head_size);
+		prepend_header = true;
 		/* if offload disabled, it is not zeroed below, do it now */
 		if (!vq->hw->has_tx_offload) {
 			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
@@ -521,6 +519,11 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	do {
 		start_dp[idx].addr  = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
 		start_dp[idx].len   = cookie->data_len;
+		if (prepend_header) {
+			start_dp[idx].addr -= head_size;
+			start_dp[idx].len += head_size;
+			prepend_header = false;
+		}
 		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
 		idx = start_dp[idx].next;
 	} while ((cookie = cookie->next) != NULL);
-- 
2.17.1