From: "Liu, Jijiang"
To: "Ouyang, Changchun", "dev@dpdk.org"
Date: Mon, 7 Sep 2015 06:16:06 +0000
Subject: Re: [dpdk-dev] [RFC PATCH 5/8] lib/librte_vhost:dequeue vhost TSO offload
Message-ID: <1ED644BD7E0A5F4091CF203DAFB8E4CC057D9881@SHSMSX101.ccr.corp.intel.com>
References: <1441014108-3125-1-git-send-email-jijiang.liu@intel.com>
 <1441014108-3125-6-git-send-email-jijiang.liu@intel.com>

> -----Original Message-----
> From: Ouyang, Changchun
> Sent: Monday, August 31, 2015 8:40 PM
> To: Liu, Jijiang; dev@dpdk.org
> Cc: Ouyang, Changchun
> Subject: RE: [dpdk-dev] [RFC PATCH 5/8] lib/librte_vhost:dequeue vhost TSO
> offload
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jijiang Liu
> > Sent: Monday, August 31, 2015 5:42 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [RFC PATCH 5/8] lib/librte_vhost:dequeue vhost TSO
> > offload
> >
> > Dequeue vhost TSO offload
> >
> > Signed-off-by: Jijiang Liu
> > ---
> >  lib/librte_vhost/vhost_rxtx.c |   29 ++++++++++++++++++++++++++++-
> >  1 files changed, 28 insertions(+), 1 deletions(-)
> >
> > diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> > index 0d07338..9adfdb1 100644
> > --- a/lib/librte_vhost/vhost_rxtx.c
> > +++ b/lib/librte_vhost/vhost_rxtx.c
> > @@ -545,6 +545,30 @@ rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id,
> >          return virtio_dev_rx(dev, queue_id, pkts, count);
> >  }
> >
> > +static inline void __attribute__((always_inline))
> > +vhost_dequeue_offload(uint64_t addr, struct rte_mbuf *m)
> > +{
> > +        struct virtio_net_hdr *hdr =
> > +                (struct virtio_net_hdr *)((uintptr_t)addr);
> > +
> > +        if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
> > +                switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
> > +                case VIRTIO_NET_HDR_GSO_TCPV4:
> > +                        m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_TCP_SEG);
> > +                        m->tso_segsz = hdr->gso_size;
> > +                        break;
> > +                case VIRTIO_NET_HDR_GSO_TCPV6:
> > +                        m->ol_flags |= (PKT_TX_IPV6 | PKT_TX_TCP_SEG);
> > +                        m->tso_segsz = hdr->gso_size;
> > +                        break;
> > +                default:
> > +                        RTE_LOG(ERR, VHOST_DATA,
> > +                                "bad gso type %u.\n", hdr->gso_type);
> > +                        break;
>
> Do we need special handling for the bad gso type?

Yes, we need to return an error, log it, and break out of this operation.
I will change it in the next version (see the sketch below the patch).

>
> > +                }
> > +        }
> > +}
> > +
> >  uint16_t
> >  rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >          struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> > @@ -553,6 +577,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >          struct vhost_virtqueue *vq;
> >          struct vring_desc *desc;
> >          uint64_t vb_addr = 0;
> > +        uint64_t vb_net_hdr_addr = 0;
> >          uint32_t head[MAX_PKT_BURST];
> >          uint32_t used_idx;
> >          uint32_t i;
> > @@ -604,6 +629,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >
> >                  desc = &vq->desc[head[entry_success]];
> >
> > +                vb_net_hdr_addr = gpa_to_vva(dev, desc->addr);
> > +
> >                  /* Discard first buffer as it is the virtio header */
> >                  if (desc->flags & VRING_DESC_F_NEXT) {
> >                          desc = &vq->desc[desc->next];
> > @@ -742,7 +769,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >                          break;
> >
> >                  m->nb_segs = seg_num;
> > -
> > +                vhost_dequeue_offload(vb_net_hdr_addr, m);
> >                  pkts[entry_success] = m;
> >                  vq->last_used_idx++;
> >                  entry_success++;
> > --
> > 1.7.7.6
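
To make the point about the bad GSO type concrete, here is a minimal sketch of what the reworked helper could look like, assuming vhost_dequeue_offload() is changed to return an int (0 on success, -1 on an unknown GSO type) so that rte_vhost_dequeue_burst() can drop the mbuf instead of delivering it with bogus offload metadata. The signature change and the return-value convention are assumptions for illustration only, not the final form of the next version; the surrounding definitions (struct virtio_net_hdr, struct rte_mbuf, RTE_LOG, the PKT_TX_* flags) are those already used in vhost_rxtx.c in the diff above.

static inline int __attribute__((always_inline))
vhost_dequeue_offload(uint64_t addr, struct rte_mbuf *m)
{
        struct virtio_net_hdr *hdr =
                (struct virtio_net_hdr *)((uintptr_t)addr);

        if (hdr->gso_type == VIRTIO_NET_HDR_GSO_NONE)
                return 0;

        switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
        case VIRTIO_NET_HDR_GSO_TCPV4:
                m->ol_flags |= (PKT_TX_IPV4 | PKT_TX_TCP_SEG);
                m->tso_segsz = hdr->gso_size;
                break;
        case VIRTIO_NET_HDR_GSO_TCPV6:
                m->ol_flags |= (PKT_TX_IPV6 | PKT_TX_TCP_SEG);
                m->tso_segsz = hdr->gso_size;
                break;
        default:
                /* Unknown GSO type: log it and let the caller drop the packet. */
                RTE_LOG(ERR, VHOST_DATA,
                        "bad gso type %u.\n", hdr->gso_type);
                return -1;
        }

        return 0;
}

Returning a status code keeps the policy decision in the caller: rte_vhost_dequeue_burst() could free the mbuf and skip it on -1, or deliver it without offload flags, while keeping its own used-ring accounting consistent. The exact caller-side handling is left open here, since it depends on how the loop bookkeeping is reworked in the next version.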