From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
To: Marvin Liu
CC: Kevin Traynor, Luca Boccassi
Date: Mon, 14 Oct 2019 18:15:42 +0300
In-Reply-To: <20190923140511.107939-1-yong.liu@intel.com>
References: <20190923140511.107939-1-yong.liu@intel.com>
Subject: Re: [dpdk-dev] [PATCH] net/virtio: fix mbuf data and pkt length mismatch
List-Id: DPDK patches and discussions <dev@dpdk.org>

Hi,

as far as I can see, the patch introduces regressions. CC Kevin and Luca
so that they can be careful with this patch on stable branches.
See details below.

Andrew.

On 9/23/19 5:05 PM, Marvin Liu wrote:
> If the virtio header room is reserved with rte_pktmbuf_prepend(), both
> the segment data length and the packet length of the mbuf are
> increased. The data length will be equal to the descriptor length,
> while the packet length should be decreased back, as the virtio-net
> header is not counted as packet data. This causes a mismatch in the
> mbuf structure. Fix this issue by accessing the mbuf data directly and
> increasing the descriptor length where needed.
>
> Fixes: 58169a9c8153 ("net/virtio: support Tx checksum offload")
> Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues")
> Fixes: 4905ed3a523f ("net/virtio: optimize Tx enqueue for packed ring")
> Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx")
> Cc: stable@dpdk.org
>
> Reported-by: Stephen Hemminger
> Signed-off-by: Marvin Liu
>
> diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> index 27ead19fb..822cce06d 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -597,9 +597,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
>  		dxp->cookie = (void *)cookies[i];
>  		dxp->ndescs = 1;
>
> -		hdr = (struct virtio_net_hdr *)
> -			rte_pktmbuf_prepend(cookies[i], head_size);
> -		cookies[i]->pkt_len -= head_size;
> +		hdr = (struct virtio_net_hdr *)(char *)cookies[i]->buf_addr +
> +			cookies[i]->data_off - head_size;
>
>  		/* if offload disabled, hdr is not zeroed yet, do it now */
>  		if (!vq->hw->has_tx_offload)
> @@ -608,9 +607,10 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
>  		virtqueue_xmit_offload(hdr, cookies[i], true);
>
>  		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);

As I understand it, the problem is here. Since data_off is no longer
changed above, this points to the start of the packet (the Ethernet
header), but it should point to the virtio_net_hdr before the packet.

I think the patch fixes the bug in the wrong direction. It looks better
to simply remove the

    cookies[i]->pkt_len -= head_size;

above and account for the real packet length difference in
virtio_update_packet_stats(), or where it is called from the Tx path.

If it is OK for the maintainers, I'm ready to send patches to roll this
one back and fix it as described above.
> -		start_dp[idx].len = cookies[i]->data_len;
> +		start_dp[idx].len = cookies[i]->data_len + head_size;
>  		start_dp[idx].flags = 0;
>
> +
>  		vq_update_avail_ring(vq, idx);
>
>  		idx++;
> @@ -644,9 +644,8 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
>  	flags = vq->vq_packed.cached_flags;
>
>  	/* prepend cannot fail, checked by caller */
> -	hdr = (struct virtio_net_hdr *)
> -		rte_pktmbuf_prepend(cookie, head_size);
> -	cookie->pkt_len -= head_size;
> +	hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> +		cookie->data_off - head_size;
>
>  	/* if offload disabled, hdr is not zeroed yet, do it now */
>  	if (!vq->hw->has_tx_offload)
> @@ -655,7 +654,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
>  	virtqueue_xmit_offload(hdr, cookie, true);
>
>  	dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> -	dp->len = cookie->data_len;
> +	dp->len = cookie->data_len + head_size;
>  	dp->id = id;
>
>  	if (++vq->vq_avail_idx >= vq->vq_nentries) {
> @@ -687,6 +686,7 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>  	uint16_t head_size = vq->hw->vtnet_hdr_size;
>  	struct virtio_net_hdr *hdr;
>  	uint16_t prev;
> +	bool prepend_header = false;
>
>  	id = in_order ? vq->vq_avail_idx : vq->vq_desc_head_idx;
>
> @@ -705,12 +705,9 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
>  	if (can_push) {
>  		/* prepend cannot fail, checked by caller */
> -		hdr = (struct virtio_net_hdr *)
> -			rte_pktmbuf_prepend(cookie, head_size);
> -		/* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> -		 * which is wrong. Below subtract restores correct pkt size.
> -		 */
> -		cookie->pkt_len -= head_size;
> +		hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> +			cookie->data_off - head_size;
> +		prepend_header = true;
>
>  		/* if offload disabled, it is not zeroed below, do it now */
>  		if (!vq->hw->has_tx_offload)
> @@ -738,6 +735,11 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
>  	start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
>  	start_dp[idx].len = cookie->data_len;
> +	if (prepend_header) {
> +		start_dp[idx].len += head_size;
> +		prepend_header = false;
> +	}
> +
>  	if (likely(idx != head_idx)) {
>  		flags = cookie->next ? VRING_DESC_F_NEXT : 0;
>  		flags |= vq->vq_packed.cached_flags;
> @@ -779,6 +781,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>  	uint16_t seg_num = cookie->nb_segs;
>  	uint16_t head_idx, idx;
>  	uint16_t head_size = vq->hw->vtnet_hdr_size;
> +	bool prepend_header = false;
>  	struct virtio_net_hdr *hdr;
>
>  	head_idx = vq->vq_desc_head_idx;
> @@ -794,12 +797,9 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>
>  	if (can_push) {
>  		/* prepend cannot fail, checked by caller */
> -		hdr = (struct virtio_net_hdr *)
> -			rte_pktmbuf_prepend(cookie, head_size);
> -		/* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
> -		 * which is wrong. Below subtract restores correct pkt size.
> -		 */
> -		cookie->pkt_len -= head_size;
> +		hdr = (struct virtio_net_hdr *)(char *)cookie->buf_addr +
> +			cookie->data_off - head_size;
> +		prepend_header = true;
>
>  		/* if offload disabled, it is not zeroed below, do it now */
>  		if (!vq->hw->has_tx_offload)
> @@ -838,6 +838,10 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
>  	do {
>  		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
>  		start_dp[idx].len = cookie->data_len;
> +		if (prepend_header) {
> +			start_dp[idx].len += head_size;
> +			prepend_header = false;
> +		}
>  		start_dp[idx].flags = cookie->next ?
>  			VRING_DESC_F_NEXT : 0;
>  		idx = start_dp[idx].next;
>  	} while ((cookie = cookie->next) != NULL);