DPDK patches and discussions
From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: "Ma, Liang J" <liang.j.ma@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Jain, Deepak K" <deepak.k.jain@intel.com>,
	 "Geary, John" <john.geary@intel.com>,
	"Mccarthy, Peter" <peter.mccarthy@intel.com>,
	"jerin.jacob@caviumnetworks.com" <jerin.jacob@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH] event/opdl: fix atomic queue race condition issue
Date: Mon, 26 Mar 2018 12:29:04 +0000	[thread overview]
Message-ID: <E923DB57A917B54B9182A2E928D00FA65E0173E2@IRSMSX101.ger.corp.intel.com>
In-Reply-To: <1520940853-56748-1-git-send-email-liang.j.ma@intel.com>

> From: Ma, Liang J
> Sent: Tuesday, March 13, 2018 11:34 AM
> To: jerin.jacob@caviumnetworks.com
> Cc: dev@dpdk.org; Van Haaren, Harry <harry.van.haaren@intel.com>; Jain, Deepak
> K <deepak.k.jain@intel.com>; Geary, John <john.geary@intel.com>; Mccarthy,
> Peter <peter.mccarthy@intel.com>
> Subject: [PATCH] event/opdl: fix atomic queue race condition issue
> 
> If an application links one atomic queue to multiple ports, and each
> worker core updates the flow_id, there is a chance of hitting a race
> condition and processing the same event twice. This fix eliminates
> the race condition.
> 
> Fixes: 4236ce9bf5bf ("event/opdl: add OPDL ring infrastructure library")
> 

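To restate my understanding of the race (an illustrative interleaving,
inferred from the commit message and the hunk below - not taken from the
patch itself): each instance claims only the events whose flow_id hashes
to it, but flow_id is read straight from the shared slot.

  instance A:  reads ev->flow_id              -> matches A, claims the event
  worker core: rewrites ev->flow_id           -> event now hashes to instance B
  instance B:  reads the updated ev->flow_id  -> also matches, claims it again

Snapshotting the whole 64-bit event word with a single acquire load, as
the patch does, makes each instance decide from one consistent value.
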
General notes
- Spaces are required around &, %, <<, >> and other bitwise/arithmetic operators (https://dpdk.org/doc/guides/contributing/coding_style.html#operators)
- I've noted a few instances below, but there are more
- Usually checkpatch flags these - I'm curious why it didn't in this case

It would be nice if we didn't have to rely on __atomic_load_n() and friends,
but I don't see a better alternative. Given that other DPDK components also
use the __atomic_* functions, no objection here.
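
For anyone less familiar with the builtins, a minimal before/after sketch
of the pattern (my paraphrase of the hunk quoted below; process() is a
hypothetical stand-in for the claim logic, not a function in the driver):

	/* Before: flow_id is read straight from the shared slot, so a
	 * worker core rewriting it can race with this check.
	 */
	if ((ev->flow_id % s->nb_instance) == s->instance_id)
		process(ev);

	/* After: snapshot the whole 64-bit event word once with acquire
	 * ordering, then decode fields only from the local copy.
	 */
	uint64_t event = __atomic_load_n(&ev->event, __ATOMIC_ACQUIRE);
	uint32_t flow_id = OPDL_FLOWID_MASK & event;

	if ((flow_id % s->nb_instance) == s->instance_id)
		process(ev);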


<snip>

> @@ -520,7 +528,17 @@ opdl_stage_claim_singlethread(struct opdl_stage *s, void
> *entries,
> 
>  		for (j = 0; j < num_entries; j++) {
>  			ev = (struct rte_event *)get_slot(t, s->head+j);
> -			if ((ev->flow_id%s->nb_instance) == s->instance_id) {

Spaces around the %

> +
> +			event  = __atomic_load_n(&(ev->event),
> +					__ATOMIC_ACQUIRE);
> +
> +			opa_id = OPDL_OPA_MASK&(event>>OPDL_OPA_OFFSET);

Spaces &

> +			flow_id  = OPDL_FLOWID_MASK&event;

Spaces &

> +
> +			if (opa_id >= s->queue_id)
> +				continue;
> +
> +			if ((flow_id%s->nb_instance) == s->instance_id) {

Spaces %
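
For reference, the quoted lines with the spacing the style guide asks for
(formatting only - the logic is unchanged):

	event = __atomic_load_n(&(ev->event), __ATOMIC_ACQUIRE);

	opa_id = OPDL_OPA_MASK & (event >> OPDL_OPA_OFFSET);
	flow_id = OPDL_FLOWID_MASK & event;

	if (opa_id >= s->queue_id)
		continue;

	if ((flow_id % s->nb_instance) == s->instance_id) {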

<snip rest of patch>


Will re-review v2. Cheers, -Harry

Thread overview: 6+ messages
2018-03-13 11:34 Liang Ma
2018-03-26 12:29 ` Van Haaren, Harry [this message]
2018-03-26 13:32   ` Liang, Ma
2018-03-27 14:18 ` [dpdk-dev] [PATCH v2] " Liang Ma
2018-03-29 14:48   ` Van Haaren, Harry
2018-04-02  4:09     ` Jerin Jacob
