DPDK patches and discussions
* [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning
@ 2014-07-08 11:16 Daniel Mrzyglod
  2014-07-23  8:33 ` Thomas Monjalon
  2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
  0 siblings, 2 replies; 14+ messages in thread
From: Daniel Mrzyglod @ 2014-07-08 11:16 UTC (permalink / raw)
  To: dev


Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
---
 examples/l3fwd-vf/main.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
index 2ca5c21..57852d0 100644
--- a/examples/l3fwd-vf/main.c
+++ b/examples/l3fwd-vf/main.c
@@ -54,6 +54,7 @@
 #include <rte_per_lcore.h>
 #include <rte_launch.h>
 #include <rte_atomic.h>
+#include <rte_spinlock.h>
 #include <rte_cycles.h>
 #include <rte_prefetch.h>
 #include <rte_lcore.h>
@@ -328,7 +329,7 @@ struct lcore_conf {
 } __rte_cache_aligned;
 
 static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
-
+static rte_spinlock_t spinlock_conf[RTE_MAX_ETHPORTS]={RTE_SPINLOCK_INITIALIZER};
 /* Send burst of packets on an output interface */
 static inline int
 send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
@@ -340,7 +341,10 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
 	queueid = qconf->tx_queue_id;
 	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
 
+	rte_spinlock_lock(&spinlock_conf[port]) ;
 	ret = rte_eth_tx_burst(port, queueid, m_table, n);
+	rte_spinlock_unlock(&spinlock_conf[port]);
+	
 	if (unlikely(ret < n)) {
 		do {
 			rte_pktmbuf_free(m_table[ret]);
-- 
1.7.9.5

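For context: rte_eth_tx_burst() is not safe to call concurrently from two
lcores on the same (port, queue) pair, which is the race the per-port lock
above closes. A minimal standalone sketch of the same pattern, where the
lock array and helper name are illustrative and only the rte_* calls are
real DPDK API:

#include <rte_branch_prediction.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_spinlock.h>

/* One lock per port; element 0 is initialized explicitly and the rest are
 * zero-initialized, which is also the unlocked state. */
static rte_spinlock_t tx_lock[RTE_MAX_ETHPORTS] = {RTE_SPINLOCK_INITIALIZER};

/* Safe from any lcore, even when several lcores share one TX queue. */
static inline void
send_burst_locked(uint8_t port, uint16_t queueid,
                  struct rte_mbuf **m_table, uint16_t n)
{
        uint16_t ret;

        rte_spinlock_lock(&tx_lock[port]);
        ret = rte_eth_tx_burst(port, queueid, m_table, n);
        rte_spinlock_unlock(&tx_lock[port]);

        /* Free whatever the driver did not accept. */
        while (unlikely(ret < n))
                rte_pktmbuf_free(m_table[ret++]);
}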

* Re: [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-07-08 11:16 [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning Daniel Mrzyglod
@ 2014-07-23  8:33 ` Thomas Monjalon
  2014-11-11 22:56   ` Thomas Monjalon
  2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
  1 sibling, 1 reply; 14+ messages in thread
From: Thomas Monjalon @ 2014-07-23  8:33 UTC (permalink / raw)
  To: Daniel Mrzyglod; +Cc: dev

Hi Daniel,

Some explanations are missing here.

> Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> 
> --- a/examples/l3fwd-vf/main.c
> +++ b/examples/l3fwd-vf/main.c
> @@ -54,6 +54,7 @@
>  #include <rte_per_lcore.h>
>  #include <rte_launch.h>
>  #include <rte_atomic.h>
> +#include <rte_spinlock.h>
>  #include <rte_cycles.h>
>  #include <rte_prefetch.h>
>  #include <rte_lcore.h>
> @@ -328,7 +329,7 @@ struct lcore_conf {
>  } __rte_cache_aligned;
>  
>  static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> -
> +static rte_spinlock_t spinlock_conf[RTE_MAX_ETHPORTS]={RTE_SPINLOCK_INITIALIZER};
>  /* Send burst of packets on an output interface */
>  static inline int
>  send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
> @@ -340,7 +341,10 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
>  	queueid = qconf->tx_queue_id;
>  	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
>  
> +	rte_spinlock_lock(&spinlock_conf[port]) ;
>  	ret = rte_eth_tx_burst(port, queueid, m_table, n);
> +	rte_spinlock_unlock(&spinlock_conf[port]);
> +	
>  	if (unlikely(ret < n)) {
>  		do {
>  			rte_pktmbuf_free(m_table[ret]);
> 


* Re: [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-07-23  8:33 ` Thomas Monjalon
@ 2014-11-11 22:56   ` Thomas Monjalon
  2014-11-11 23:18     ` Xie, Huawei
  0 siblings, 1 reply; 14+ messages in thread
From: Thomas Monjalon @ 2014-11-11 22:56 UTC (permalink / raw)
  To: Daniel Mrzyglod; +Cc: dev

Hi Daniel,

This old patch is probably good, but I'd like you to explain it, please.
Reviewers are also welcome.

Thanks
-- 
Thomas

2014-07-23 10:33, Thomas Monjalon:
> Hi Daniel,
> 
> Some explanations are missing here.
> 
> > Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> > 
> > --- a/examples/l3fwd-vf/main.c
> > +++ b/examples/l3fwd-vf/main.c
> > @@ -54,6 +54,7 @@
> >  #include <rte_per_lcore.h>
> >  #include <rte_launch.h>
> >  #include <rte_atomic.h>
> > +#include <rte_spinlock.h>
> >  #include <rte_cycles.h>
> >  #include <rte_prefetch.h>
> >  #include <rte_lcore.h>
> > @@ -328,7 +329,7 @@ struct lcore_conf {
> >  } __rte_cache_aligned;
> >  
> >  static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> > -
> > +static rte_spinlock_t spinlock_conf[RTE_MAX_ETHPORTS]={RTE_SPINLOCK_INITIALIZER};
> >  /* Send burst of packets on an output interface */
> >  static inline int
> >  send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
> > @@ -340,7 +341,10 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
> >  	queueid = qconf->tx_queue_id;
> >  	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
> >  
> > +	rte_spinlock_lock(&spinlock_conf[port]) ;
> >  	ret = rte_eth_tx_burst(port, queueid, m_table, n);
> > +	rte_spinlock_unlock(&spinlock_conf[port]);
> > +	
> >  	if (unlikely(ret < n)) {
> >  		do {
> >  			rte_pktmbuf_free(m_table[ret]);
> > 


* Re: [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-11-11 22:56   ` Thomas Monjalon
@ 2014-11-11 23:18     ` Xie, Huawei
  0 siblings, 0 replies; 14+ messages in thread
From: Xie, Huawei @ 2014-11-11 23:18 UTC (permalink / raw)
  To: Thomas Monjalon, Mrzyglod, DanielX T; +Cc: dev


> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Tuesday, November 11, 2014 3:57 PM
> To: Mrzyglod, DanielX T
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent
> race conditioning
> 
> Hi Daniel,
> 
> This old patch is probably good, but I'd like you to explain it, please.
> Reviewers are also welcome.
> 
> Thanks
> --
> Thomas
> 
> 2014-07-23 10:33, Thomas Monjalon:
> > Hi Daniel,
> >
> > Some explanations are missing here.
> >
> > > Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> > >
> > > --- a/examples/l3fwd-vf/main.c
> > > +++ b/examples/l3fwd-vf/main.c
> > > @@ -54,6 +54,7 @@
> > >  #include <rte_per_lcore.h>
> > >  #include <rte_launch.h>
> > >  #include <rte_atomic.h>
> > > +#include <rte_spinlock.h>
> > >  #include <rte_cycles.h>
> > >  #include <rte_prefetch.h>
> > >  #include <rte_lcore.h>
> > > @@ -328,7 +329,7 @@ struct lcore_conf {
> > >  } __rte_cache_aligned;
> > >
> > >  static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> > > -
> > > +static rte_spinlock_t
> spinlock_conf[RTE_MAX_ETHPORTS]={RTE_SPINLOCK_INITIALIZER};
> > >  /* Send burst of packets on an output interface */
> > >  static inline int
> > >  send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
> > > @@ -340,7 +341,10 @@ send_burst(struct lcore_conf *qconf, uint16_t n,
> uint8_t port)
> > >  	queueid = qconf->tx_queue_id;
> > >  	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
> > >
> > > +	rte_spinlock_lock(&spinlock_conf[port]) ;
> > >  	ret = rte_eth_tx_burst(port, queueid, m_table, n);
> > > +	rte_spinlock_unlock(&spinlock_conf[port]);

It might not be a good choice here, but how about we also provide spin_trylock as an alternative API?

> > > +
> > >  	if (unlikely(ret < n)) {
> > >  		do {
> > >  			rte_pktmbuf_free(m_table[ret]);
> > >

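For reference, a rough sketch of the trylock variant suggested above: give up
instead of spinning when another lcore holds the port's lock, and let the
caller retry the burst later. rte_spinlock_trylock() is existing DPDK API
(returns nonzero on success); the helper itself is illustrative and assumes
the patch's spinlock_conf[] and the example's struct lcore_conf:

static inline int
send_burst_try(struct lcore_conf *qconf, uint16_t n, uint8_t port)
{
        struct rte_mbuf **m_table =
                (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
        uint16_t ret;

        if (rte_spinlock_trylock(&spinlock_conf[port]) == 0)
                return -1;      /* busy: caller keeps the burst queued */

        ret = rte_eth_tx_burst(port, qconf->tx_queue_id, m_table, n);
        rte_spinlock_unlock(&spinlock_conf[port]);

        /* Free whatever the driver did not accept. */
        while (unlikely(ret < n))
                rte_pktmbuf_free(m_table[ret++]);

        return 0;
}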

* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-07-08 11:16 [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning Daniel Mrzyglod
  2014-07-23  8:33 ` Thomas Monjalon
@ 2014-12-08 14:45 ` Neil Horman
  2014-12-10  8:18   ` Wodkowski, PawelX
                     ` (2 more replies)
  1 sibling, 3 replies; 14+ messages in thread
From: Neil Horman @ 2014-12-08 14:45 UTC (permalink / raw)
  To: Daniel Mrzyglod; +Cc: dev

On Tue, Jul 08, 2014 at 12:16:24PM +0100, Daniel Mrzyglod wrote:
> Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> 
> ---
> examples/l3fwd-vf/main.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/examples/l3fwd-vf/main.c b/examples/l3fwd-vf/main.c
> index 2ca5c21..57852d0 100644
> --- a/examples/l3fwd-vf/main.c
> +++ b/examples/l3fwd-vf/main.c
> @@ -54,6 +54,7 @@
>  #include <rte_per_lcore.h>
>  #include <rte_launch.h>
>  #include <rte_atomic.h>
> +#include <rte_spinlock.h>
>  #include <rte_cycles.h>
>  #include <rte_prefetch.h>
>  #include <rte_lcore.h>
> @@ -328,7 +329,7 @@ struct lcore_conf {
>  } __rte_cache_aligned;
>  
>  static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
> -
> +static rte_spinlock_t spinlock_conf[RTE_MAX_ETHPORTS]={RTE_SPINLOCK_INITIALIZER};
>  /* Send burst of packets on an output interface */
>  static inline int
>  send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
> @@ -340,7 +341,10 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint8_t port)
>  	queueid = qconf->tx_queue_id;
>  	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
>  
> +	rte_spinlock_lock(&spinlock_conf[port]) ;
>  	ret = rte_eth_tx_burst(port, queueid, m_table, n);
> +	rte_spinlock_unlock(&spinlock_conf[port]);
> +	
>  	if (unlikely(ret < n)) {
>  		do {
>  			rte_pktmbuf_free(m_table[ret]);

Acked-by: Neil Horman <nhorman@tuxdriver.com>

Though, that said, doesn't it seem to anyone else like serialization of enqueue
to a port should be the responsibility of the library, not the application?

Neil


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
@ 2014-12-10  8:18   ` Wodkowski, PawelX
  2014-12-10 14:47     ` Neil Horman
  2014-12-10 10:53   ` Thomas Monjalon
  2014-12-11  1:08   ` Thomas Monjalon
  2 siblings, 1 reply; 14+ messages in thread
From: Wodkowski, PawelX @ 2014-12-10  8:18 UTC (permalink / raw)
  To: Neil Horman, Mrzyglod, DanielX T; +Cc: dev

> Though, that said, doesn't it seem to anyone else like serialization of enqueue
> to a port should be the responsibility of the library, not the application?
> 
> Neil

From my knowledge it is an application responsibility to serialize access to
queue on particular port.

Pawel


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
  2014-12-10  8:18   ` Wodkowski, PawelX
@ 2014-12-10 10:53   ` Thomas Monjalon
  2014-12-11  1:08   ` Thomas Monjalon
  2 siblings, 0 replies; 14+ messages in thread
From: Thomas Monjalon @ 2014-12-10 10:53 UTC (permalink / raw)
  To: Neil Horman, Daniel Mrzyglod; +Cc: dev

2014-12-08 09:45, Neil Horman:
> On Tue, Jul 08, 2014 at 12:16:24PM +0100, Daniel Mrzyglod wrote:
> > Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> Acked-by: Neil Horman <nhorman@tuxdriver.com>

Can someone provide an explanation for the commit log?

Thanks
-- 
Thomas


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10  8:18   ` Wodkowski, PawelX
@ 2014-12-10 14:47     ` Neil Horman
  2014-12-10 14:54       ` Bruce Richardson
  0 siblings, 1 reply; 14+ messages in thread
From: Neil Horman @ 2014-12-10 14:47 UTC (permalink / raw)
  To: Wodkowski, PawelX; +Cc: dev

On Wed, Dec 10, 2014 at 08:18:36AM +0000, Wodkowski, PawelX wrote:
> > Though, that said, doesn't it seem to anyone else like serialization of enqueue
> > to a port should be the responsibility of the library, not the application?
> > 
> > Neil
> 
> From my knowledge it is an application responsibility to serialize access to
> queue on particular port.
> 
I understand that's the way it currently is; I'm advocating that it
should not be.
Neil

> Pawel
> 


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10 14:47     ` Neil Horman
@ 2014-12-10 14:54       ` Bruce Richardson
  2014-12-10 15:53         ` Mrzyglod, DanielX T
  2014-12-10 16:16         ` Neil Horman
  0 siblings, 2 replies; 14+ messages in thread
From: Bruce Richardson @ 2014-12-10 14:54 UTC (permalink / raw)
  To: Neil Horman; +Cc: dev

On Wed, Dec 10, 2014 at 09:47:45AM -0500, Neil Horman wrote:
> On Wed, Dec 10, 2014 at 08:18:36AM +0000, Wodkowski, PawelX wrote:
> > > Though, that said, doesn't it seem to anyone else like serialization of enqueue
> > > to a port should be the responsibility of the library, not the application?
> > > 
> > > Neil
> > 
> > From my knowledge it is an application responsibility to serialize access to
> > queue on particular port.
> > 
> I understand that's the way it currently is; I'm advocating that it
> should not be.
> Neil
>
It could be done, but I think we'd need to add a new API (or a new parameter to
the existing API) to do so, as the cost of adding the locks would be severe, even
in the uncontended case.
This is why it hasn't been done up till now, obviously enough. In general, where
we don't provide performant multi-thread-safe APIs, we don't try to provide
versions with locks; we just document the limitation and then leave it up to the
app to determine how best to handle things.

/Bruce

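Purely as an illustration of the "new API" idea above (rte_eth_tx_burst_mt()
does not exist in DPDK, and MAX_TX_QUEUE is an invented bound), an opt-in
thread-safe wrapper might look like this, leaving the lock-free
rte_eth_tx_burst() untouched for single-accessor applications:

#define MAX_TX_QUEUE 16 /* hypothetical per-port queue bound */

static rte_spinlock_t mt_tx_lock[RTE_MAX_ETHPORTS][MAX_TX_QUEUE];

static inline uint16_t
rte_eth_tx_burst_mt(uint8_t port_id, uint16_t queue_id,
                    struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
        rte_spinlock_t *sl = &mt_tx_lock[port_id][queue_id];
        uint16_t ret;

        rte_spinlock_lock(sl);  /* cost paid only by callers who opt in */
        ret = rte_eth_tx_burst(port_id, queue_id, tx_pkts, nb_pkts);
        rte_spinlock_unlock(sl);
        return ret;
}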

* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10 14:54       ` Bruce Richardson
@ 2014-12-10 15:53         ` Mrzyglod, DanielX T
  2014-12-10 16:16         ` Neil Horman
  1 sibling, 0 replies; 14+ messages in thread
From: Mrzyglod, DanielX T @ 2014-12-10 15:53 UTC (permalink / raw)
  To: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Wednesday, December 10, 2014 3:55 PM
> To: Neil Horman
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race
> conditioning
> 
> On Wed, Dec 10, 2014 at 09:47:45AM -0500, Neil Horman wrote:
> > On Wed, Dec 10, 2014 at 08:18:36AM +0000, Wodkowski, PawelX wrote:
> > > > Though, that said, doesn't it seem to anyone else like serialization of
> enqueue
> > > > to a port should be the responsibility of the library, not the application?
> > > >
> > > > Neil
> > >
> > > From my knowledge it is an application responsibility to serialize access to
> > > queue on particular port.
> > >
> > I understand that's the way it currently is; I'm advocating that it
> > should not be.
> > Neil
> >
> It could be done, but I think we'd need to add a new API (or a new parameter to
> the existing API) to do so, as the cost of adding the locks would be severe, even
> in the uncontended case.
> This is why it hasn't been done up till now, obviously enough. In general, where
> we don't provide performant multi-thread-safe APIs, we don't try to provide
> versions with locks; we just document the limitation and then leave it up to the
> app to determine how best to handle things.
> 
> /Bruce


The problem is that when routing sends traffic from both ports through the same
queue, the app crashes.
Example: traffic to 1.1.1.1 arriving on port 0 and traffic to 1.1.1.1 arriving on port 1.
You are all right :)
So the only solutions are spinlocks, or we must update
intel-dpdk-sample-applications-user-guide.pdf to inform users about the limitation.


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10 14:54       ` Bruce Richardson
  2014-12-10 15:53         ` Mrzyglod, DanielX T
@ 2014-12-10 16:16         ` Neil Horman
  2014-12-10 23:38           ` Stephen Hemminger
  1 sibling, 1 reply; 14+ messages in thread
From: Neil Horman @ 2014-12-10 16:16 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev

On Wed, Dec 10, 2014 at 02:54:56PM +0000, Bruce Richardson wrote:
> On Wed, Dec 10, 2014 at 09:47:45AM -0500, Neil Horman wrote:
> > On Wed, Dec 10, 2014 at 08:18:36AM +0000, Wodkowski, PawelX wrote:
> > > > Though, that said, doesn't it seem to anyone else like serialization of enqueue
> > > > to a port should be the responsibility of the library, not the application?
> > > > 
> > > > Neil
> > > 
> > > From my knowledge it is an application responsibility to serialize access to
> > > queue on particular port.
> > > 
> > I understand that's the way it currently is; I'm advocating that it
> > should not be.
> > Neil
> >
> It could be done, but I think we'd need to add a new API (or a new parameter to
> the existing API) to do so, as the cost of adding the locks would be severe, even
> in the uncontended case.
> This is why it hasn't been done up till now, obviously enough. In general, where
> we don't provide performant multi-thread-safe APIs, we don't try to provide
> versions with locks; we just document the limitation and then leave it up to the
> app to determine how best to handle things.
> 
This really seems like false savings to me.  If an application intends to use
multiple processes (which by all rights seems like the use case the dpdk is
mostly designed for), then you need locking one way or another, and you've
just made application coding harder, because the application now needs to know
which functions might have internal critical sections that they need to provide
locking for.

I agree that, in the single process case, there might be a slight performance
loss (though I contend it wouldn't be greatly significant).  That said, I would
argue that the right approach is to do the locking internally to the DPDK, then
provide a configuration point which toggles the spinlock definitions to either do
proper locking, or just reduce to empty definitions, the same way the Linux and
BSD kernels do in the uniprocessor case.  That way applications never have to
worry about internal locking, and you can still build for the optimal case when
you need to.

Neil

> /Bruce
> 


* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10 16:16         ` Neil Horman
@ 2014-12-10 23:38           ` Stephen Hemminger
  2014-12-11  0:34             ` Neil Horman
  0 siblings, 1 reply; 14+ messages in thread
From: Stephen Hemminger @ 2014-12-10 23:38 UTC (permalink / raw)
  To: Neil Horman; +Cc: dev

On Wed, 10 Dec 2014 11:16:46 -0500
Neil Horman <nhorman@tuxdriver.com> wrote:

> This really seems like false savings to me.  If an application intends to use
> multiple processes (which by all rights seems like the use case the dpdk is
> mostly designed for), then you need locking one way or another, and you've
> just made application coding harder, because the application now needs to know
> which functions might have internal critical sections that they need to provide
> locking for.

The DPDK is not Linux.
See the examples of how to route without using locks by doing asymmetric multiprocessing.
I.e., queues are only serviced by one CPU.

The cost of a locked operation (even uncontended) is often enough to drop
packet performance by several million PPS.

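A minimal sketch of the lock-free pattern described above: each lcore owns a
distinct TX queue per port, assigned at initialization, so no two threads
ever touch the same queue and no lock is needed. The queue-assignment array
is illustrative:

static uint16_t tx_queue_of_lcore[RTE_MAX_LCORE];      /* filled at init */

static inline uint16_t
send_burst_lockfree(uint8_t port, struct rte_mbuf **m_table, uint16_t n)
{
        /* This lcore is the sole user of its queue: no serialization. */
        uint16_t queueid = tx_queue_of_lcore[rte_lcore_id()];

        return rte_eth_tx_burst(port, queueid, m_table, n);
}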

* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-10 23:38           ` Stephen Hemminger
@ 2014-12-11  0:34             ` Neil Horman
  0 siblings, 0 replies; 14+ messages in thread
From: Neil Horman @ 2014-12-11  0:34 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev

On Wed, Dec 10, 2014 at 03:38:37PM -0800, Stephen Hemminger wrote:
> On Wed, 10 Dec 2014 11:16:46 -0500
> Neil Horman <nhorman@tuxdriver.com> wrote:
> 
> > This really seems like false savings to me.  If an application intends to use
> > multiple processes (which by all rights seems like the use case the dpdk is
> > mostly designed for), then you need locking one way or another, and you've
> > just made application coding harder, because the application now needs to know
> > which functions might have internal critical sections that they need to provide
> > locking for.
> 
> The DPDK is not Linux.
I never indicated that it was.

> See the examples of how to route without using locks by doing asymmetric multiprocessing.
> I.e., queues are only serviced by one CPU.
> 
Yes, I've seen it.

> The cost of a locked operation (even uncontended) is often enough to drop
> packet performance by several million PPS.
Please re-read my note; I clearly stated that a single process use case was a
valid one, but that didn't preclude the need to provide mutual exclusion
internally to the api.  There's no reason this locking can't be moved into
the api, with the spinlock api itself either defined to do locking at compile
time or defined out as empty macros based on a build variable
(CONFIG_SINGLE_ACCESSOR or some such).  That way you save the application the
headache of having to guess which api calls need locking around them, and you
still get maximal performance if the application being written can guarantee
single-accessor status to the dpdk library.

Neil

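A sketch of the compile-time toggle proposed above, in the
uniprocessor-kernel style. CONFIG_SINGLE_ACCESSOR is the hypothetical build
flag from the mail, not an existing DPDK config option, and the macro and
function names are illustrative:

#ifdef CONFIG_SINGLE_ACCESSOR
/* Single accessor guaranteed at build time: locking compiles away. */
#define dpdk_tx_lock(sl)        do { (void)(sl); } while (0)
#define dpdk_tx_unlock(sl)      do { (void)(sl); } while (0)
#else
/* Default: real locking, safe for concurrent accessors. */
#define dpdk_tx_lock(sl)        rte_spinlock_lock(sl)
#define dpdk_tx_unlock(sl)      rte_spinlock_unlock(sl)
#endif

static inline uint16_t
tx_burst_guarded(uint8_t port, uint16_t queueid,
                 struct rte_mbuf **pkts, uint16_t n)
{
        uint16_t ret;

        dpdk_tx_lock(&spinlock_conf[port]);
        ret = rte_eth_tx_burst(port, queueid, pkts, n);
        dpdk_tx_unlock(&spinlock_conf[port]);
        return ret;
}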

* Re: [dpdk-dev] Added Spinlock to l3fwd-vf example to prevent race conditioning
  2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
  2014-12-10  8:18   ` Wodkowski, PawelX
  2014-12-10 10:53   ` Thomas Monjalon
@ 2014-12-11  1:08   ` Thomas Monjalon
  2 siblings, 0 replies; 14+ messages in thread
From: Thomas Monjalon @ 2014-12-11  1:08 UTC (permalink / raw)
  To: Daniel Mrzyglod; +Cc: dev

> > Signed-off-by: Daniel Mrzyglod <danielx.t.mrzyglod@intel.com>
> 
> Acked-by: Neil Horman <nhorman@tuxdriver.com>

Applied

Thanks
-- 
Thomas


Thread overview: 14+ messages
2014-07-08 11:16 [dpdk-dev] [PATCH] Added Spinlock to l3fwd-vf example to prevent race conditioning Daniel Mrzyglod
2014-07-23  8:33 ` Thomas Monjalon
2014-11-11 22:56   ` Thomas Monjalon
2014-11-11 23:18     ` Xie, Huawei
2014-12-08 14:45 ` [dpdk-dev] " Neil Horman
2014-12-10  8:18   ` Wodkowski, PawelX
2014-12-10 14:47     ` Neil Horman
2014-12-10 14:54       ` Bruce Richardson
2014-12-10 15:53         ` Mrzyglod, DanielX T
2014-12-10 16:16         ` Neil Horman
2014-12-10 23:38           ` Stephen Hemminger
2014-12-11  0:34             ` Neil Horman
2014-12-10 10:53   ` Thomas Monjalon
2014-12-11  1:08   ` Thomas Monjalon
