From: Huawei Xie <huawei.xie@intel.com>
To: dev@dpdk.org
Date: Wed, 29 Apr 2015 19:29:34 +0800
Message-Id: <1430306974-9618-1-git-send-email-huawei.xie@intel.com>
Subject: [dpdk-dev] [PATCH] vhost: make vhost lockless enqueue configurable

A vhost-enabled vSwitch may implement its own thread-safe vring enqueue
policy, in which case the atomic reservation done by the vhost library
is redundant overhead. Add the RTE_LIBRTE_VHOST_LOCKLESS_ENQ config
option so the lockless enqueue path can be compiled out. It is turned
off by default.

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
---
 config/common_linuxapp        |  1 +
 lib/librte_vhost/vhost_rxtx.c | 24 +++++++++++++++++++++++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 0078dc9..7f59499 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -421,6 +421,7 @@ CONFIG_RTE_KNI_VHOST_DEBUG_TX=n
 #
 CONFIG_RTE_LIBRTE_VHOST=n
 CONFIG_RTE_LIBRTE_VHOST_USER=y
+CONFIG_RTE_LIBRTE_VHOST_LOCKLESS_ENQ=n
 CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
 #
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 510ffe8..475be6e 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -80,7 +80,11 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	 * they need to be reserved.
 	 */
 	do {
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 		res_base_idx = vq->last_used_idx_res;
+#else
+		res_base_idx = vq->last_used_idx;
+#endif
 		avail_idx = *((volatile uint16_t *)&vq->avail->idx);
 
 		free_entries = (avail_idx - res_base_idx);
@@ -92,10 +96,15 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 			return 0;
 
 		res_end_idx = res_base_idx + count;
+
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 		/* vq->last_used_idx_res is atomically updated. */
-		/* TODO: Allow to disable cmpset if no concurrency in application. */
 		success = rte_atomic16_cmpset(&vq->last_used_idx_res,
 				res_base_idx, res_end_idx);
+#else
+		/* last_used_idx_res isn't used. */
+		success = 1;
+#endif
 	} while (unlikely(success == 0));
 	res_cur_idx = res_base_idx;
 	LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Current Index %d| End Index %d\n",
@@ -171,9 +180,11 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	rte_compiler_barrier();
 
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 	/* Wait until it's our turn to add our buffer to the used ring. */
 	while (unlikely(vq->last_used_idx != res_base_idx))
 		rte_pause();
+#endif
 
 	*(volatile uint16_t *)&vq->used->idx += count;
 	vq->last_used_idx = res_end_idx;
@@ -422,11 +433,15 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 	uint16_t i, id;
 
 	do {
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 		/*
 		 * As many data cores may want access to available
 		 * buffers, they need to be reserved.
 		 */
 		res_base_idx = vq->last_used_idx_res;
+#else
+		res_base_idx = vq->last_used_idx;
+#endif
 		res_cur_idx = res_base_idx;
 
 		do {
@@ -459,10 +474,15 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 			}
 		} while (pkt_len > secure_len);
 
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 		/* vq->last_used_idx_res is atomically updated. */
 		success = rte_atomic16_cmpset(&vq->last_used_idx_res,
 				res_base_idx, res_cur_idx);
+#else
+		/* last_used_idx_res isn't used. */
+		success = 1;
+#endif
 	} while (success == 0);
 
 	id = res_base_idx;
@@ -495,12 +515,14 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	rte_compiler_barrier();
 
+#ifdef RTE_LIBRTE_VHOST_LOCKLESS_ENQ
 	/*
 	 * Wait until it's our turn to add our buffer
 	 * to the used ring.
 	 */
 	while (unlikely(vq->last_used_idx != res_base_idx))
 		rte_pause();
+#endif
 
 	*(volatile uint16_t *)&vq->used->idx += entry_success;
 	vq->last_used_idx = res_end_idx;
-- 
1.8.1.4
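
For context, the scheme this option guards works in two steps: each
enqueuing core first claims a slice of the used ring by advancing
vq->last_used_idx_res with a 16-bit compare-and-set, then spins until
vq->last_used_idx reaches its reserved base before publishing, so
concurrent producers complete in reservation order. The standalone
sketch below illustrates that pattern only; C11 atomics stand in for
DPDK's rte_atomic16_cmpset() and rte_pause(), and fake_vq, reserve(),
and publish() are made-up names, not vhost library API.

/*
 * Sketch of the reservation scheme behind RTE_LIBRTE_VHOST_LOCKLESS_ENQ.
 * The vring is reduced to the two indexes the patch touches.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

struct fake_vq {
	_Atomic uint16_t last_used_idx_res; /* next slot to reserve */
	_Atomic uint16_t last_used_idx;     /* next slot to publish */
};

/* Reserve `count` used-ring slots; returns the reserved base index. */
static uint16_t
reserve(struct fake_vq *vq, uint16_t count)
{
	uint16_t base;

	do {
		base = atomic_load(&vq->last_used_idx_res);
		/* CAS claims [base, base + count) unless another core won. */
	} while (!atomic_compare_exchange_weak(&vq->last_used_idx_res,
			&base, (uint16_t)(base + count)));
	return base;
}

/* Publish in reservation order, as the rte_pause() loop does. */
static void
publish(struct fake_vq *vq, uint16_t base, uint16_t count)
{
	while (atomic_load(&vq->last_used_idx) != base)
		; /* spin until it is our turn (rte_pause() equivalent) */
	atomic_store(&vq->last_used_idx, (uint16_t)(base + count));
}

int
main(void)
{
	struct fake_vq vq = { 0, 0 };
	uint16_t a = reserve(&vq, 4); /* claims [0, 4) */
	uint16_t b = reserve(&vq, 2); /* claims [4, 6) */

	publish(&vq, a, 4); /* must complete before b can publish */
	publish(&vq, b, 2);
	printf("used idx now %u\n", (unsigned)atomic_load(&vq.last_used_idx));
	return 0;
}

When a single core owns a given vring, as in the vSwitch case the
commit message describes, both the cmpset and the ordering spin are
pure overhead; CONFIG_RTE_LIBRTE_VHOST_LOCKLESS_ENQ=n compiles them out
and reads/writes last_used_idx directly.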