DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Ilya Maximets <i.maximets@samsung.com>,
	Yuanhan Liu <yuanhan.liu@linux.intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Xie, Huawei" <huawei.xie@intel.com>,
	Dyasly Sergey <s.dyasly@samsung.com>
Subject: Re: [dpdk-dev] [PATCH v4] vhost: use SMP barriers instead of compiler ones.
Date: Mon, 21 Mar 2016 14:07:32 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B1F296@irsmsx105.ger.corp.intel.com>
In-Reply-To: <56EF7D72.1050108@samsung.com>



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ilya Maximets
> Sent: Monday, March 21, 2016 4:50 AM
> To: Yuanhan Liu
> Cc: dev@dpdk.org; Xie, Huawei; Dyasly Sergey
> Subject: Re: [dpdk-dev] [PATCH v4] vhost: use SMP barriers instead of compiler ones.
> 
> 
> 
> On 18.03.2016 15:41, Yuanhan Liu wrote:
> > On Fri, Mar 18, 2016 at 03:23:53PM +0300, Ilya Maximets wrote:
> >> Since commit 4c02e453cc62 ("eal: introduce SMP memory barriers") virtio
> >> uses architecture dependent SMP barriers. vHost should use them too.
> >>
> >> Fixes: 4c02e453cc62 ("eal: introduce SMP memory barriers")
> >>
> >> Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
> >> ---
> >>  lib/librte_vhost/vhost_rxtx.c | 7 ++++---
> >>  1 file changed, 4 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> >> index b4da665..859c669 100644
> >> --- a/lib/librte_vhost/vhost_rxtx.c
> >> +++ b/lib/librte_vhost/vhost_rxtx.c
> >> @@ -315,7 +315,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
> >>  			rte_prefetch0(&vq->desc[desc_indexes[i+1]]);
> >>  	}
> >>
> >> -	rte_compiler_barrier();
> >> +	rte_smp_wmb();
> >>
> >>  	/* Wait until it's our turn to add our buffer to the used ring. */
> >>  	while (unlikely(vq->last_used_idx != res_start_idx))
> >> @@ -565,7 +565,7 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
> >>
> >>  		nr_used = copy_mbuf_to_desc_mergeable(dev, vq, start, end,
> >>  						      pkts[pkt_idx]);
> >> -		rte_compiler_barrier();
> >> +		rte_smp_wmb();
> >>
> >>  		/*
> >>  		 * Wait until it's our turn to add our buffer
> >> @@ -923,7 +923,8 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> >>  				sizeof(vq->used->ring[used_idx]));
> >>  	}
> >>
> >> -	rte_compiler_barrier();
> >> +	rte_smp_wmb();
> >> +	rte_smp_rmb();
> >
> > rte_smp_mb?
> 
> rte_smp_mb() is a real mfence on x86. And we don't need to synchronize reads with
> writes here, only reads with reads and writes with writes. That is enough because the
> following increment uses both a read and a write. The pair of barriers is better
> because it will not impact performance on x86.

Not arguing about that particular patch, just a question:
Why do we have:
#define rte_smp_mb() rte_mb()
for x86?
Why not just:
#define rte_smp_mb() rte_compiler_barrier()
here?
I mean, for situations where we do need a real mfence, there is rte_mb() to use.
Konstantin

> 
> Best regards, Ilya Maximets.
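
For reference, the sketch below shows roughly how the x86-64 barrier macros under
discussion were defined in DPDK's arch-specific rte_atomic.h around this time. It is a
paraphrased illustration, not the verbatim header (later releases changed some of these
definitions); the point is that on x86 rte_smp_wmb()/rte_smp_rmb() reduce to compiler
barriers, while rte_smp_mb() still emits a real mfence:

#include <emmintrin.h>  /* _mm_mfence, _mm_lfence, _mm_sfence */

/* Compiler barrier: prevents compiler reordering, emits no instruction. */
#define rte_compiler_barrier() do {             \
        asm volatile ("" : : : "memory");       \
} while (0)

/* Full hardware fences. */
#define rte_mb()   _mm_mfence()
#define rte_wmb()  _mm_sfence()
#define rte_rmb()  _mm_lfence()

/*
 * SMP variants: x86's TSO model already keeps stores ordered against other
 * stores and loads ordered against other loads, so the write and read
 * barriers only need to restrain the compiler, while the full barrier still
 * needs a real fence to order a store before a later load.
 */
#define rte_smp_mb()   rte_mb()
#define rte_smp_wmb()  rte_compiler_barrier()
#define rte_smp_rmb()  rte_compiler_barrier()

This is why the rte_smp_wmb()/rte_smp_rmb() pair in the patch costs nothing on x86,
whereas rte_smp_mb() would add an mfence to the fast path, and why rte_mb() remains
available for the cases that genuinely need a full fence.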


Thread overview: 28+ messages
2016-02-24 11:47 [dpdk-dev] [PATCH RFC v3 0/3] Thread safe rte_vhost_enqueue_burst() Ilya Maximets
2016-02-24 11:47 ` [dpdk-dev] [PATCH RFC v3 1/3] vhost: use SMP barriers instead of compiler ones Ilya Maximets
2016-03-18 10:08   ` Xie, Huawei
2016-03-18 10:23     ` Ilya Maximets
2016-03-18 10:27       ` Xie, Huawei
2016-03-18 10:39         ` Ilya Maximets
2016-03-18 10:47           ` Xie, Huawei
2016-03-18 11:00             ` Ilya Maximets
     [not found]               ` <C37D651A908B024F974696C65296B57B4C67825C@SHSMSX101.ccr.corp.intel.com>
     [not found]                 ` <56EBE9AE.9070400@samsung.com>
     [not found]                   ` <56EBF256.8040409@samsung.com>
2016-03-18 12:28                     ` Ilya Maximets
2016-03-18 12:23   ` [dpdk-dev] [PATCH v4] " Ilya Maximets
2016-03-18 12:41     ` Yuanhan Liu
2016-03-21  4:49       ` Ilya Maximets
2016-03-21 14:07         ` Ananyev, Konstantin [this message]
2016-03-21 17:25           ` Xie, Huawei
2016-03-21 17:36             ` Ananyev, Konstantin
2016-03-23 14:07     ` Xie, Huawei
2016-03-31 13:46       ` Thomas Monjalon
2016-02-24 11:47 ` [dpdk-dev] [PATCH RFC v3 2/3] vhost: make buf vector for scatter RX local Ilya Maximets
2016-02-24 11:47 ` [dpdk-dev] [PATCH RFC v3 3/3] vhost: avoid reordering of used->idx and last_used_idx updating Ilya Maximets
2016-03-17 15:29 ` [dpdk-dev] [PATCH RFC v3 0/3] Thread safe rte_vhost_enqueue_burst() Thomas Monjalon
2016-03-18  8:00   ` Yuanhan Liu
2016-03-18  8:09     ` Thomas Monjalon
2016-03-18  9:16       ` Yuanhan Liu
2016-03-18  9:34         ` Thomas Monjalon
2016-03-18  9:46           ` Yuanhan Liu
2016-03-18  9:55             ` Ilya Maximets
2016-03-18 10:10               ` Xie, Huawei
2016-03-18 10:24                 ` Thomas Monjalon
