Date: Thu, 21 Feb 2019 20:25:57 +0800
From: Tiwei Bie
To: Maxime Coquelin
Cc: zhihong.wang@intel.com, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 5/5] net/virtio: optimize xmit enqueue for packed ring
Message-ID: <20190221122556.GA22000@dpdk-tbie.sh.intel.com>
References: <20190219105951.31046-1-tiwei.bie@intel.com>
 <20190219105951.31046-6-tiwei.bie@intel.com>

On Thu, Feb 21, 2019 at 12:22:29PM +0100, Maxime Coquelin wrote:
> On 2/19/19 11:59 AM, Tiwei Bie wrote:
> > This patch introduces an optimized enqueue function in packed
> > ring for the case that virtio net header can be prepended to
> > the unchained mbuf.
> > 
> > Signed-off-by: Tiwei Bie
> > ---
> >   drivers/net/virtio/virtio_rxtx.c | 63 +++++++++++++++++++++++++++++++-
> >   1 file changed, 61 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> > index 60fa3aa50..771d3c3f6 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -623,6 +623,62 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
> >   	vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
> >   }
> >   
> > +static inline void
> > +virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
> > +				   struct rte_mbuf *cookie,
> > +				   int in_order)
> > +{
> > +	struct virtqueue *vq = txvq->vq;
> > +	struct vring_packed_desc *dp;
> > +	struct vq_desc_extra *dxp;
> > +	uint16_t idx, id, flags;
> > +	uint16_t head_size = vq->hw->vtnet_hdr_size;
> > +	struct virtio_net_hdr *hdr;
> > +
> > +	id = in_order ?
> > +		vq->vq_avail_idx : vq->vq_desc_head_idx;
> > +	idx = vq->vq_avail_idx;
> > +	dp = &vq->ring_packed.desc_packed[idx];
> > +
> > +	dxp = &vq->vq_descx[id];
> > +	dxp->ndescs = 1;
> > +	dxp->cookie = cookie;
> > +
> > +	flags = vq->avail_used_flags;
> > +
> > +	/* prepend cannot fail, checked by caller */
> > +	hdr = (struct virtio_net_hdr *)
> > +		rte_pktmbuf_prepend(cookie, head_size);
> > +	cookie->pkt_len -= head_size;
> > +
> > +	/* if offload disabled, hdr is not zeroed yet, do it now */
> > +	if (!vq->hw->has_tx_offload)
> > +		virtqueue_clear_net_hdr(hdr);
> > +	else
> > +		virtqueue_xmit_offload(hdr, cookie, true);
> > +
> > +	dp->addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
> > +	dp->len = cookie->data_len;
> > +	dp->id = id;
> > +
> > +	if (++vq->vq_avail_idx >= vq->vq_nentries) {
> > +		vq->vq_avail_idx -= vq->vq_nentries;
> > +		vq->avail_wrap_counter ^= 1;
> > +		vq->avail_used_flags ^=
> > +			VRING_DESC_F_AVAIL(1) | VRING_DESC_F_USED(1);
> > +	}
> > +
> > +	vq->vq_free_cnt--;
> > +
> > +	if (!in_order) {
> > +		vq->vq_desc_head_idx = dxp->next;
> > +		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
> > +			vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
> > +	}
> > +
> > +	virtio_wmb(vq->hw->weak_barriers);
> > +	dp->flags = flags;
> > +}
> > +
> >   static inline void
> >   virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
> >   			      uint16_t needed, int can_push, int in_order)
> > @@ -1979,8 +2035,11 @@ virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
> >   		}
> >   
> >   		/* Enqueue Packet buffers */
> > -		virtqueue_enqueue_xmit_packed(txvq, txm, slots, can_push,
> > -					      in_order);
> > +		if (can_push)
> > +			virtqueue_enqueue_xmit_packed_fast(txvq, txm, in_order);
> > +		else
> > +			virtqueue_enqueue_xmit_packed(txvq, txm, slots, 0,
> > +						      in_order);
> >   
> >   		virtio_update_packet_stats(&txvq->stats, txm);
> >   	}
> > 
> 
> I like this patch, but shouldn't virtqueue_enqueue_xmit_packed() be
> simplified to get rid of "can_push" now that this case has a dedicated
> function?

Yeah, I had the same thought. But on second thought, I think we may
also want to push the net hdr to the mbuf even if its nb_segs isn't 1
in the future, so I left it untouched.

Thanks,
Tiwei
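
For readers skimming the thread, below is a minimal, self-contained sketch of
the dispatch decision being discussed. The struct, enum, and function names
(mbuf_model, pick_tx_path, the TX_PACKED_* labels) are hypothetical
illustrations, not DPDK code, and the driver's real can_push test in
virtio_xmit_pkts_packed() is stricter (it also considers the mbuf refcount and
whether the mbuf is direct). In the patch as posted, the caller only sets
can_push when nb_segs == 1, so can_push alone selects the fast path; the
sketch keeps the two conditions separate to show the future case Tiwei
mentions, where a chained mbuf could still have the net header pushed into its
first segment and the general path would need its can_push argument again.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for the fields the dispatch looks at. */
struct mbuf_model {
	uint16_t nb_segs;   /* number of segments in the mbuf chain */
	uint16_t headroom;  /* free bytes before the packet data */
};

enum tx_path {
	TX_PACKED_FAST,     /* one descriptor, net header prepended in headroom */
	TX_PACKED_PUSH,     /* general path with can_push (possible future: chained mbuf) */
	TX_PACKED_NO_PUSH   /* general path, net header in its own descriptor */
};

/* Illustrative dispatch only: can_push means the header fits in the headroom;
 * the single-segment check is what gates the dedicated fast function. */
static enum tx_path
pick_tx_path(const struct mbuf_model *m, uint16_t hdr_size)
{
	bool can_push = m->headroom >= hdr_size;

	if (can_push && m->nb_segs == 1)
		return TX_PACKED_FAST;
	if (can_push)
		return TX_PACKED_PUSH;
	return TX_PACKED_NO_PUSH;
}

int
main(void)
{
	struct mbuf_model single  = { .nb_segs = 1, .headroom = 128 };
	struct mbuf_model chained = { .nb_segs = 3, .headroom = 128 };

	printf("single segment -> path %d\n", pick_tx_path(&single, 12));
	printf("chained mbuf   -> path %d\n", pick_tx_path(&chained, 12));
	return 0;
}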