From: Ouyang Changchun <changchun.ouyang@intel.com>
To: dev@dpdk.org
Date: Thu, 29 Jan 2015 15:23:46 +0800
Message-Id: <1422516249-14596-3-git-send-email-changchun.ouyang@intel.com>
X-Mailer: git-send-email 1.7.12.2
In-Reply-To: <1422516249-14596-1-git-send-email-changchun.ouyang@intel.com>
References: <1422326164-13697-1-git-send-email-changchun.ouyang@intel.com>
 <1422516249-14596-1-git-send-email-changchun.ouyang@intel.com>
Subject: [dpdk-dev] [PATCH v3 02/25] virtio: Use weaker barriers

The DPDK driver only has to deal with the case of running on PCI and
with SMP. In this case, the code can use weaker barriers instead of
hard (fence) barriers, which helps performance. The rationale is
explained in the Linux kernel's virtio_ring.h.

To make it clearer that this is a virtio thing and not some generic
barrier, prefix the barrier calls with virtio_.

Also add the missing (and needed) barrier between updating the ring
data structure and notifying the host.

Signed-off-by: Stephen Hemminger
Signed-off-by: Changchun Ouyang
---
 lib/librte_pmd_virtio/virtio_ethdev.c |  2 +-
 lib/librte_pmd_virtio/virtio_rxtx.c   |  8 +++++---
 lib/librte_pmd_virtio/virtqueue.h     | 19 ++++++++++++++-----
 3 files changed, 20 insertions(+), 9 deletions(-)
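A note below the cut, as an illustration of the ordering argument above
(not part of the diff): after this patch the transmit path publishes the
ring update and then kicks the host, with the new virtio_wmb() in
between. A minimal sketch follows; it reuses only helpers that appear in
the diff (vq_update_avail_idx, virtio_wmb, virtqueue_kick_prepare,
virtqueue_notify), and the wrapper name is made up for the example.

/*
 * Illustration only, not part of the diff: the publish-then-kick shape
 * on the transmit path after this patch.  The ring and avail->idx
 * updates must be ordered before the host notification, which is the
 * "missing (and needed) barrier" the commit message refers to.
 */
#include "virtqueue.h"	/* driver-local header touched by this patch */

static inline void
xmit_publish_and_kick(struct virtqueue *txvq)	/* hypothetical wrapper */
{
	/* descriptors and avail ring entries are assumed already filled */
	vq_update_avail_idx(txvq);	/* expose the new avail->idx */
	virtio_wmb();			/* order the update before the kick */

	if (virtqueue_kick_prepare(txvq))	/* host asked for a kick? */
		virtqueue_notify(txvq);		/* IO-port/MMIO doorbell */
}

The same publish/barrier/kick shape appears on the receive refill path
in virtio_recv_pkts() below.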
diff --git a/lib/librte_pmd_virtio/virtio_ethdev.c b/lib/librte_pmd_virtio/virtio_ethdev.c
index 662a49c..dc47e72 100644
--- a/lib/librte_pmd_virtio/virtio_ethdev.c
+++ b/lib/librte_pmd_virtio/virtio_ethdev.c
@@ -175,7 +175,7 @@ virtio_send_command(struct virtqueue *vq, struct virtio_pmd_ctrl *ctrl,
 	uint32_t idx, desc_idx, used_idx;
 	struct vring_used_elem *uep;
 
-	rmb();
+	virtio_rmb();
 
 	used_idx = (uint32_t)(vq->vq_used_cons_idx & (vq->vq_nentries - 1));
diff --git a/lib/librte_pmd_virtio/virtio_rxtx.c b/lib/librte_pmd_virtio/virtio_rxtx.c
index c013f97..78af334 100644
--- a/lib/librte_pmd_virtio/virtio_rxtx.c
+++ b/lib/librte_pmd_virtio/virtio_rxtx.c
@@ -456,7 +456,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	nb_used = VIRTQUEUE_NUSED(rxvq);
 
-	rmb();
+	virtio_rmb();
 
 	num = (uint16_t)(likely(nb_used <= nb_pkts) ? nb_used : nb_pkts);
 	num = (uint16_t)(likely(num <= VIRTIO_MBUF_BURST_SZ) ?
 		num : VIRTIO_MBUF_BURST_SZ);
@@ -516,6 +516,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	if (likely(nb_enqueued)) {
+		virtio_wmb();
 		if (unlikely(virtqueue_kick_prepare(rxvq))) {
 			virtqueue_notify(rxvq);
 			PMD_RX_LOG(DEBUG, "Notified\n");
@@ -547,7 +548,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 
 	nb_used = VIRTQUEUE_NUSED(rxvq);
 
-	rmb();
+	virtio_rmb();
 
 	if (nb_used == 0)
 		return 0;
@@ -694,7 +695,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
 	nb_used = VIRTQUEUE_NUSED(txvq);
 
-	rmb();
+	virtio_rmb();
 
 	num = (uint16_t)(likely(nb_used < VIRTIO_MBUF_BURST_SZ) ?
 		nb_used : VIRTIO_MBUF_BURST_SZ);
@@ -735,6 +736,7 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		}
 	}
 	vq_update_avail_idx(txvq);
+	virtio_wmb();
 
 	txvq->packets += nb_tx;
diff --git a/lib/librte_pmd_virtio/virtqueue.h b/lib/librte_pmd_virtio/virtqueue.h
index fdee054..f6ad98d 100644
--- a/lib/librte_pmd_virtio/virtqueue.h
+++ b/lib/librte_pmd_virtio/virtqueue.h
@@ -46,9 +46,18 @@
 #include "virtio_ring.h"
 #include "virtio_logs.h"
 
-#define mb() rte_mb()
-#define wmb() rte_wmb()
-#define rmb() rte_rmb()
+/*
+ * Per virtio_config.h in Linux.
+ * For virtio_pci on SMP, we don't need to order with respect to MMIO
+ * accesses through relaxed memory I/O windows, so smp_mb() et al are
+ * sufficient.
+ *
+ * This driver is for virtio_pci on SMP and therefore can assume
+ * weaker (compiler barriers)
+ */
+#define virtio_mb() rte_mb()
+#define virtio_rmb() rte_compiler_barrier()
+#define virtio_wmb() rte_compiler_barrier()
 
 #ifdef RTE_PMD_PACKET_PREFETCH
 #define rte_packet_prefetch(p)  rte_prefetch1(p)
@@ -225,7 +234,7 @@ virtqueue_full(const struct virtqueue *vq)
 
 static inline void
 vq_update_avail_idx(struct virtqueue *vq)
 {
-	rte_compiler_barrier();
+	virtio_rmb();
 	vq->vq_ring.avail->idx = vq->vq_avail_idx;
 }
@@ -255,7 +264,7 @@ static inline void
 virtqueue_notify(struct virtqueue *vq)
 {
 	/*
-	 * Ensure updated avail->idx is visible to host. mb() necessary?
+	 * Ensure updated avail->idx is visible to host.
 	 * For virtio on IA, the notificaiton is through io port operation
 	 * which is a serialization instruction itself.
 	 */
-- 
1.8.4.2
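A closing note on the virtqueue.h hunk above: virtio_rmb()/virtio_wmb()
now expand to rte_compiler_barrier(), which constrains only the compiler
and emits no fence instruction, while virtio_mb() keeps the real fence
via rte_mb(). A self-contained sketch of the write-ordering idea, with
made-up names and assuming a GCC-style empty asm with a "memory" clobber
(the usual form of a compiler barrier):

/*
 * Sketch only, not DPDK code.  An empty asm with a "memory" clobber
 * stops the compiler from reordering the two stores below; on x86,
 * whose memory model already keeps stores in program order, no fence
 * instruction is needed for this write->write case.
 */
#include <stdint.h>

#define barrier() do { asm volatile ("" : : : "memory"); } while (0)

static volatile uint16_t ring_slot;	/* stands in for a vring entry */
static volatile uint16_t avail_idx;	/* stands in for avail->idx    */

void
publish(uint16_t value)
{
	ring_slot = value;	/* 1. fill the ring entry               */
	barrier();		/* 2. keep the two stores in this order */
	avail_idx++;		/* 3. then publish the new index        */
}

On x86 this is sufficient for write->write and read->read ordering,
since the architecture does not reorder those; a store->load case would
still need the full virtio_mb().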