From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tan, Jianfeng"
To: "Chen, Junjie J", "yliu@fridaylinux.org", "maxime.coquelin@redhat.com"
CC: "dev@dpdk.org", "Chen, Junjie J"
Subject: Re: [dpdk-dev] [PATCH] vhost: dequeue zero copy should restore mbuf before return to pool
Date: Wed, 17 Jan 2018 07:29:19 +0000
Message-ID:
In-Reply-To: <1516185726-31797-1-git-send-email-junjie.j.chen@intel.com>
References: <1516185726-31797-1-git-send-email-junjie.j.chen@intel.com>
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Junjie Chen
> Sent: Wednesday, January 17, 2018 6:42 PM
> To: yliu@fridaylinux.org; maxime.coquelin@redhat.com
> Cc: dev@dpdk.org; Chen, Junjie J
> Subject: [dpdk-dev] [PATCH] vhost: dequeue zero copy should restore mbuf
> before return to pool
>
> dequeue zero copy change buf_addr and buf_iova of mbuf, and return
> to mbuf pool without restore them, it breaks vm memory if others allocate
> mbuf from same pool since mbuf reset doesn't reset buf_addr and buf_iova.
>
> Signed-off-by: Junjie Chen
> ---
>  lib/librte_vhost/virtio_net.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 568ad0e..e9aaf6d 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1158,6 +1158,26 @@ mbuf_is_consumed(struct rte_mbuf *m)
>  	return true;
>  }
>
> +
> +static __rte_always_inline void
> +restore_mbuf(struct rte_mbuf *m)
> +{
> +	uint32_t mbuf_size, priv_size;
> +
> +	while (m) {
> +		priv_size = rte_pktmbuf_priv_size(m->pool);
> +		mbuf_size = sizeof(struct rte_mbuf) + priv_size;
> +		/* start of buffer is after mbuf structure and priv data */
> +		m->priv_size = priv_size;

I don't think we need to restore priv_size. Refer to its definition in
rte_mbuf: "Size of the application private data. In case of an indirect
mbuf, it stores the direct mbuf private data size."

Thanks,
Jianfeng

> +
> +		m->buf_addr = (char *)m + mbuf_size;
> +		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
> +		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
> +				(uint16_t)m->buf_len);
> +		m = m->next;
> +	}
> +}
> +
>  uint16_t
>  rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> @@ -1209,6 +1229,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  			nr_updated += 1;
>
>  			TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
> +			restore_mbuf(zmbuf->mbuf);
>  			rte_pktmbuf_free(zmbuf->mbuf);
>  			put_zmbuf(zmbuf);
>  			vq->nr_zmbuf -= 1;
> --
> 2.0.1
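
As a side note for anyone reading the thread: the recomputation in
restore_mbuf() is safe because, for a direct mbuf, the data room always
starts right after the mbuf header and the pool's private area, so
buf_addr, buf_iova and data_off can be rebuilt from the pool alone;
priv_size is a per-pool constant that the zero-copy dequeue path never
touches, which is why it does not need restoring. Below is a minimal
standalone sketch of that derivation; the helper name is made up for
illustration, it is not part of the patch, and it assumes direct mbufs
from an ordinary rte_pktmbuf_pool_create() pool:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Illustrative helper (not part of the patch or of librte_vhost):
 * rebuild the default buffer fields of a direct mbuf from its pool,
 * mirroring what restore_mbuf() above does for each segment.
 */
static void
rebuild_default_buf_fields(struct rte_mbuf *m)
{
	uint32_t priv_size = rte_pktmbuf_priv_size(m->pool);
	uint32_t mbuf_size = sizeof(struct rte_mbuf) + priv_size;

	/*
	 * priv_size is fixed when the pool is populated and is not
	 * modified by the zero-copy dequeue path, so it is left alone.
	 */
	m->buf_addr = (char *)m + mbuf_size;
	m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
	m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM, (uint16_t)m->buf_len);
}

This shortcut would not hold for indirect mbufs or pools with a
different layout, but the zero-copy dequeue path only hands out direct
mbufs allocated from the caller's pool, which is what the patch relies
on as well.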