Date: Wed, 6 Dec 2017 22:22:54 +0800
From: Yuanhan Liu
To: Tiwei Bie
Cc: Xiao Wang, dev@dpdk.org,
 stephen@networkplumber.org
Message-ID: <20171206142254.GD17112@yliu-dev>
References: <1511521440-57724-2-git-send-email-xiao.w.wang@intel.com> <1512396128-119985-1-git-send-email-xiao.w.wang@intel.com> <1512396128-119985-3-git-send-email-xiao.w.wang@intel.com> <20171206112311.u7uuv66lev3er4yh@debian-xvivbkq>
In-Reply-To: <20171206112311.u7uuv66lev3er4yh@debian-xvivbkq>
Subject: Re: [dpdk-dev] [PATCH v2 2/2] net/virtio: support GUEST ANNOUNCE

On Wed, Dec 06, 2017 at 07:23:11PM +0800, Tiwei Bie wrote:
> On Mon, Dec 04, 2017 at 06:02:08AM -0800, Xiao Wang wrote:
> [...]
> > diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
> > index 6a24fde..7313bdd 100644
> > --- a/drivers/net/virtio/virtio_rxtx.c
> > +++ b/drivers/net/virtio/virtio_rxtx.c
> > @@ -1100,3 +1100,84 @@
> >  
> >  	return nb_tx;
> >  }
> > +
> > +uint16_t
> > +virtio_inject_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > +{
> > +	struct virtnet_tx *txvq = tx_queue;
> > +	struct virtqueue *vq = txvq->vq;
> > +	struct virtio_hw *hw = vq->hw;
> > +	uint16_t hdr_size = hw->vtnet_hdr_size;
> > +	uint16_t nb_used, nb_tx = 0;
> > +
> > +	if (unlikely(nb_pkts < 1))
> > +		return nb_pkts;
> > +
> > +	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
> > +	nb_used = VIRTQUEUE_NUSED(vq);
> > +
> > +	virtio_rmb();
> > +	if (likely(nb_used > vq->vq_nentries - vq->vq_free_thresh))
> > +		virtio_xmit_cleanup(vq, nb_used);
> > +
> > +	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
> > +		struct rte_mbuf *txm = tx_pkts[nb_tx];
> > +		int can_push = 0, use_indirect = 0, slots, need;
> > +
> > +		/* optimize ring usage */
> > +		if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
> > +		     vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
> > +		    rte_mbuf_refcnt_read(txm) == 1 &&
> > +		    RTE_MBUF_DIRECT(txm) &&
> > +		    txm->nb_segs == 1 &&
> > +		    rte_pktmbuf_headroom(txm) >= hdr_size &&
> > +		    rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
> > +				   __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
> > +			can_push = 1;
> > +		else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
> > +			 txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
> > +			use_indirect = 1;
> > +
> > +		/* How many main ring entries are needed for this Tx?
> > +		 * any_layout => number of segments
> > +		 * indirect   => 1
> > +		 * default    => number of segments + 1
> > +		 */
> > +		slots = use_indirect ? 1 : (txm->nb_segs + !can_push);
> > +		need = slots - vq->vq_free_cnt;
> > +
> > +		/* A positive value indicates we need to free vring descriptors */
> > +		if (unlikely(need > 0)) {
> > +			nb_used = VIRTQUEUE_NUSED(vq);
> > +			virtio_rmb();
> > +			need = RTE_MIN(need, (int)nb_used);
> > +
> > +			virtio_xmit_cleanup(vq, need);
> > +			need = slots - vq->vq_free_cnt;
> > +			if (unlikely(need > 0)) {
> > +				PMD_TX_LOG(ERR,
> > +					   "No free tx descriptors to transmit");
> > +				break;
> > +			}
> > +		}
> > +
> > +		/* Enqueue packet buffers */
> > +		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect, can_push);
> > +
> > +		txvq->stats.bytes += txm->pkt_len;
> > +		virtio_update_packet_stats(&txvq->stats, txm);
> > +	}
> > +
> > +	txvq->stats.packets += nb_tx;
> > +
> > +	if (likely(nb_tx)) {
> > +		vq_update_avail_idx(vq);
> > +
> > +		if (unlikely(virtqueue_kick_prepare(vq))) {
> > +			virtqueue_notify(vq);
> > +			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
> > +		}
> > +	}
> > +
> > +	return nb_tx;
> > +}

The simple Tx path makes some special assumptions about how the txq is
set up. The current implementation of virtio_inject_pkts() is basically
a mirror of virtio_xmit_pkts(), so when the simple Tx function has been
chosen, calling virtio_inject_pkts() could cause problems.
That's why I suggested invoking the tx_pkt_burst callback directly.

	--yliu