From: "Van Haaren, Harry" <harry.van.haaren@intel.com>
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>,
"bugzilla@dpdk.org" <bugzilla@dpdk.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Ma, Liang J" <liang.j.ma@intel.com>,
"hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>,
"sunil.kori@nxp.com" <sunil.kori@nxp.com>,
"nipun.gupta@nxp.com" <nipun.gupta@nxp.com>
Subject: Re: [dpdk-dev] [Bug 60] rte_event_port_unlink() causes subsequent events to end up in wrong port
Date: Tue, 5 Jun 2018 16:43:26 +0000 [thread overview]
Message-ID: <E923DB57A917B54B9182A2E928D00FA65E257562@IRSMSX102.ger.corp.intel.com> (raw)
In-Reply-To: <20180604081959.GA20978@jerin>
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, June 4, 2018 9:20 AM
> To: bugzilla@dpdk.org
> Cc: dev@dpdk.org; Van Haaren, Harry <harry.van.haaren@intel.com>; Ma, Liang
> J <liang.j.ma@intel.com>; hemant.agrawal@nxp.com; sunil.kori@nxp.com;
> nipun.gupta@nxp.com
> Subject: Re: [dpdk-dev] [Bug 60] rte_event_port_unlink() causes subsequent
> events to end up in wrong port
>
> -----Original Message-----
> > Date: Mon, 4 Jun 2018 07:21:18 +0000
> > From: bugzilla@dpdk.org
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [Bug 60] rte_event_port_unlink() causes subsequent
> > events to end up in wrong port
> >
> > https://dpdk.org/tracker/show_bug.cgi?id=60
> >
> > Bug ID: 60
> > Summary: rte_event_port_unlink() causes subsequent events to
> > end up in wrong port
> > Product: DPDK
> > Version: 17.11
> > Hardware: x86
> > OS: Linux
> > Status: CONFIRMED
> > Severity: major
> > Priority: Normal
> > Component: eventdev
> > Assignee: dev@dpdk.org
> > Reporter: matias.elo@nokia.com
> > Target Milestone: ---
> >
> > Created attachment 8
> > --> https://dpdk.org/tracker/attachment.cgi?id=8&action=edit
> > Test application
> >
> > I'm seeing some unexpected(?) behavior when calling rte_event_port_unlink()
> > with the SW eventdev driver (DPDK 17.11.2/18.02.1,
> > RTE_EVENT_MAX_QUEUES_PER_DEV=255). After calling rte_event_port_unlink(),
> > the enqueued events may end up either back at the unlinked port or at port
> > zero.
> >
> > Scenario:
> >
> > - Run the SW eventdev on a service core
> > - Start eventdev with e.g. 16 ports. Each core will have a dedicated port.
> > - Create 1 atomic queue and link all active ports to it (some ports may not
> >   be linked).
> > - Allocate some events and enqueue them to the created queue
> > - Next, each worker core does a number of scheduling rounds concurrently.
> > E.g.
> >
> > struct rte_event ev[1];
> > uint16_t num_deq;
> > uint64_t rx_events = 0;
> >
> > while (rx_events < SCHED_ROUNDS) {
> >         num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);
> >
> >         if (num_deq) {
> >                 rx_events++;
> >                 rte_event_enqueue_burst(dev_id, port_id, ev, 1);
> >         }
> > }
> >
> > - This works fine, but problems occur when doing cleanup after the first
> >   loop finishes on some core.
> > E.g.
> >
> > rte_event_port_unlink(dev_id, port_id, NULL, 0);
> >
> > while (1) {
> >         num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);
> >
> >         if (num_deq == 0)
> >                 break;
> >
> >         rte_event_enqueue_burst(dev_id, port_id, ev, 1);
> > }
> >
> > - The events enqueued in the cleanup loop will randomly end up either back at
> >   the same port (which has already been unlinked) or at port zero, which is
> >   not used (the app maps rte_lcore_id to port_id).
> >
> > As far as I understand the eventdev API, an eventdev port shouldn't have to be
> > linked to the target queue for enqueue to work properly.
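[Interleaved note: as I read the spec, an enqueue targets whatever queue_id is
set in the event itself, while the port->queue links only control which queues
the scheduler delivers to that port on dequeue. Below is a minimal sketch of
that expectation; the dev/port/queue ids and the helper name are made up for
illustration and are not taken from the attached test app.]

    #include <rte_eventdev.h>

    #define DEV_ID   0 /* hypothetical ids for illustration only         */
    #define PORT_ID  3 /* a port with no link to QUEUE_ID                */
    #define QUEUE_ID 0 /* the atomic queue the other ports are linked to */

    /* Enqueue one NEW event to QUEUE_ID from a port that is not linked to
     * it; per the reading above, the event should still be scheduled to
     * the ports that *are* linked to QUEUE_ID. */
    static inline uint16_t
    enqueue_from_unlinked_port(void)
    {
            struct rte_event ev = {
                    .op = RTE_EVENT_OP_NEW,
                    .sched_type = RTE_SCHED_TYPE_ATOMIC,
                    .queue_id = QUEUE_ID,
            };

            return rte_event_enqueue_burst(DEV_ID, PORT_ID, &ev, 1);
    }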
>
> That is a grey area in the spec. The octeontx driver works the way you
> described. I am not sure about the SW driver (CC: harry.van.haaren@intel.com).
> If there is no performance impact for any of the drivers and it is doable for
> all HW and SW implementations, then we can do it that way (CC: all PMD
> maintainers).
>
> Not related to this question: are you planning to use rte_event_port_unlink()
> in the fast path?
> Does rte_event_stop() work for you, if it is only needed in the slow path?
Hi Matias,
Thanks for opening this. From memory, sw_port_unlink() does attempt to handle that case correctly.
Having a quick look: we scan the queue for the port to unlink, and if we find the queue->port combination, we copy the furthest link in the array into the found position and reduce num_mapped_queues by one (i.e., we keep the array contiguous from 0 to num_mapped_queues). A rough sketch of that removal follows below.
The appropriate rte_smp_wmb() is in place there to avoid race conditions between threads.
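This is only a sketch; the struct and field names below are illustrative
placeholders rather than the actual sw PMD data structures:

    #include <stdint.h>
    #include <rte_atomic.h> /* rte_smp_wmb() */

    /* Illustrative types only -- not the real sw PMD structures. */
    struct demo_qid {
            uint8_t linked_ports[64]; /* ports linked to this queue       */
            uint16_t num_mapped;      /* valid entries, contiguous from 0 */
    };

    /* Unlink port_id from q: copy the furthest link into the slot being
     * freed so the array stays contiguous, order that copy with a write
     * barrier, then publish the reduced count. */
    static void
    demo_unlink(struct demo_qid *q, uint8_t port_id)
    {
            uint16_t i;

            for (i = 0; i < q->num_mapped; i++) {
                    if (q->linked_ports[i] != port_id)
                            continue;
                    q->linked_ports[i] = q->linked_ports[q->num_mapped - 1];
                    rte_smp_wmb(); /* copy visible before the count update */
                    q->num_mapped--;
                    break;
            }
    }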
I think this should handle the unlink case you mention; however, perhaps you have identified a genuine bug. If you have more info, or a sample config/app that easily demonstrates the issue, that would help us reproduce and debug it here.
Unfortunately I will be away until next week, but I will check up on this thread once I'm back in the office.
Regards, -Harry
Thread overview: 6+ messages
2018-06-04 7:21 bugzilla
2018-06-04 8:20 ` Jerin Jacob
2018-06-05 16:43 ` Van Haaren, Harry [this message]
2018-06-19 9:20 Elo, Matias (Nokia - FI/Espoo)
2018-06-26 13:35 ` Maxim Uvarov
2018-06-19 9:20 Elo, Matias (Nokia - FI/Espoo)