From: "Chen, Junjie J" <junjie.j.chen@intel.com>
To: "Tan, Jianfeng" <jianfeng.tan@intel.com>, "yliu@fridaylinux.org"
 <yliu@fridaylinux.org>, "maxime.coquelin@redhat.com"
 <maxime.coquelin@redhat.com>
CC: "dev@dpdk.org" <dev@dpdk.org>
Date: Wed, 17 Jan 2018 08:10:36 +0000
Message-ID: <AA85A5A5E706C44BACB0BEFD5AC08BF63132CCF4@SHSMSX101.ccr.corp.intel.com>
References: <1516185726-31797-1-git-send-email-junjie.j.chen@intel.com>
 <ED26CBA2FAD1BF48A8719AEF02201E36513C18B1@SHSMSX103.ccr.corp.intel.com>
In-Reply-To: <ED26CBA2FAD1BF48A8719AEF02201E36513C18B1@SHSMSX103.ccr.corp.intel.com>
Subject: Re: [dpdk-dev] [PATCH] vhost: dequeue zero copy should restore
 mbuf before return to pool

> >
> > Dequeue zero copy changes the buf_addr and buf_iova of an mbuf, and
> > returns it to the mbuf pool without restoring them. This corrupts VM
> > memory if others allocate mbufs from the same pool, since the mbuf
> > reset does not reset buf_addr and buf_iova.
> >
> > Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
> > ---
> >  lib/librte_vhost/virtio_net.c | 21 +++++++++++++++++++++
> >  1 file changed, 21 insertions(+)
> >
> > diff --git a/lib/librte_vhost/virtio_net.c
> > b/lib/librte_vhost/virtio_net.c index 568ad0e..e9aaf6d 100644
> > --- a/lib/librte_vhost/virtio_net.c
> > +++ b/lib/librte_vhost/virtio_net.c
> > @@ -1158,6 +1158,26 @@ mbuf_is_consumed(struct rte_mbuf *m)
> >  	return true;
> >  }
> >
> > +
> > +static __rte_always_inline void
> > +restore_mbuf(struct rte_mbuf *m)
> > +{
> > +	uint32_t mbuf_size, priv_size;
> > +
> > +	while (m) {
> > +		priv_size = rte_pktmbuf_priv_size(m->pool);
> > +		mbuf_size = sizeof(struct rte_mbuf) + priv_size;
> > +		/* start of buffer is after mbuf structure and priv data */
> > +		m->priv_size = priv_size;
>
> I don't think we need to restore priv_size. Refer to its definition in
> rte_mbuf:
>     "Size of the application private data. In case of an indirect mbuf,
> it stores the direct mbuf private data size."
>
> Thanks,
> Jianfeng

You are right. I have also removed the restore for data_len, since it is
reset when allocating. Please see v2.
Thanks.
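
(For context: a rough, abridged sketch of what rte_pktmbuf_reset() does on
allocation, paraphrased from rte_mbuf.h rather than copied verbatim; the
field list is not exhaustive and the helper name below is made up. It
clears per-packet metadata such as data_len, but never touches the buffer
pointers, which is why buf_addr and buf_iova have to be restored by hand:)

	#include <rte_mbuf.h>

	/* Abridged paraphrase of rte_pktmbuf_reset(), not the exact code. */
	static inline void
	pktmbuf_reset_sketch(struct rte_mbuf *m)
	{
		m->next = NULL;
		m->pkt_len = 0;
		m->nb_segs = 1;
		m->ol_flags = 0;
		m->data_len = 0;	/* reset on every allocation */
		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
			(uint16_t)m->buf_len);
		/* m->buf_addr and m->buf_iova are left untouched here, so
		 * restore_mbuf() must fix them up before the free. */
	}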

>
> > +
> > +		m->buf_addr = (char *)m + mbuf_size;
> > +		m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
> > +		m->data_off = RTE_MIN(RTE_PKTMBUF_HEADROOM,
> > +			(uint16_t)m->buf_len);
> > +		m = m->next;
> > +	}
> > +}
> > +
> >  uint16_t
> >  rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
> >  	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t
> > count)
> > @@ -1209,6 +1229,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t
> > queue_id,
> >  				nr_updated += 1;
> >
> >  				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
> > +				restore_mbuf(zmbuf->mbuf);
> >  				rte_pktmbuf_free(zmbuf->mbuf);
> >  				put_zmbuf(zmbuf);
> > 				vq->nr_zmbuf -= 1;
> > --
> > 2.0.1
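
(To illustrate the failure mode described in the commit message above: in
the zero-copy dequeue path, vhost points the mbuf's buffer into guest
memory instead of its own mempool element. The sketch below is simplified,
not the exact virtio_net.c code; desc_addr and desc_iova are hypothetical
names for the translated guest descriptor addresses:)

	#include <rte_mbuf.h>

	/* Simplified sketch of the zero-copy attach, not verbatim code. */
	static void
	zero_copy_attach_sketch(struct rte_mbuf *m, void *desc_addr,
			rte_iova_t desc_iova, uint16_t len)
	{
		m->buf_addr = desc_addr;	/* now points into guest memory */
		m->buf_iova = desc_iova;
		m->data_off = 0;
		m->data_len = len;
		/* If this mbuf is freed without restore_mbuf(), the next user
		 * of the mempool element inherits these guest addresses and
		 * writes straight into VM memory. */
	}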