DPDK patches and discussions
From: "Elo, Matias (Nokia - FI/Espoo)" <matias.elo@nokia.com>
To: "harry.van.haaren@intel.com" <harry.van.haaren@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"jerin.jacob@caviumnetworks.com" <jerin.jacob@caviumnetworks.com>
Subject: Re: [dpdk-dev] [Bug 60] rte_event_port_unlink() causes subsequent events to end up in wrong port
Date: Tue, 19 Jun 2018 09:20:02 +0000	[thread overview]
Message-ID: <09B2B474-8558-4EE6-BB26-460EF8C89909@nokia.com> (raw)

> I think this should handle the unlink case you mention, however perhaps you have identified a genuine bug. If you have more info or a sample config / app that easily demonstrates the issue that would help reproduce/debug here? 


Hi Harry,

The bug report includes a simple test application that demonstrates the issue. I've done some further digging, and the following simple patch seems to fix the problem of events ending up in the wrong ports.


diff --git a/drivers/event/sw/sw_evdev_scheduler.c b/drivers/event/sw/sw_evdev_scheduler.c
index 8a2c9d4f9..57298345d 100644
--- a/drivers/event/sw/sw_evdev_scheduler.c
+++ b/drivers/event/sw/sw_evdev_scheduler.c
@@ -79,9 +79,11 @@ sw_schedule_atomic_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		int cq = fid->cq;
 
 		if (cq < 0) {
-			uint32_t cq_idx = qid->cq_next_tx++;
-			if (qid->cq_next_tx == qid->cq_num_mapped_cqs)
+			uint32_t cq_idx;
+			if (qid->cq_next_tx >= qid->cq_num_mapped_cqs)
 				qid->cq_next_tx = 0;
+			cq_idx = qid->cq_next_tx++;
+
 			cq = qid->cq_map[cq_idx];
 
 			/* find least used */
@@ -168,9 +170,11 @@ sw_schedule_parallel_to_cq(struct sw_evdev *sw, struct sw_qid * const qid,
 		do {
 			if (++cq_check_count > qid->cq_num_mapped_cqs)
 				goto exit;
-			cq = qid->cq_map[cq_idx];
-			if (++cq_idx == qid->cq_num_mapped_cqs)
+
+			if (cq_idx >= qid->cq_num_mapped_cqs)
 				cq_idx = 0;
+			cq = qid->cq_map[cq_idx++];
+
 		} while (rte_event_ring_free_count(
 				sw->ports[cq].cq_worker_ring) == 0 ||
 				sw->ports[cq].inflights == SW_PORT_HIST_LIST);
@@ -251,6 +255,9 @@ sw_schedule_qid_to_cq(struct sw_evdev *sw)
 		if (iq_num >= SW_IQS_MAX)
 			continue;
 
+		if (qid->cq_num_mapped_cqs == 0)
+			continue;
+
 		uint32_t pkts_done = 0;
 		uint32_t count = iq_ring_count(qid->iq[iq_num]);
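
For reference, the root cause as I understand it: rte_event_port_unlink() can shrink cq_num_mapped_cqs while qid->cq_next_tx still holds its old value, and since the original code only resets the counter on an exact-equality check after the increment, a stale index past the end of cq_map[] gets used and the event lands in an unlinked port. A minimal standalone illustration (simplified stand-in variables, not the actual driver state):

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for qid->cq_next_tx / qid->cq_num_mapped_cqs. */
static uint32_t cq_next_tx;
static uint32_t cq_num_mapped_cqs;

/* Original logic: the wrap check happens only after the increment, and
 * only on exact equality, so a stale cq_next_tx left over from before an
 * unlink is never brought back into range. */
static uint32_t pick_cq_old(void)
{
	uint32_t cq_idx = cq_next_tx++;
	if (cq_next_tx == cq_num_mapped_cqs)
		cq_next_tx = 0;
	return cq_idx;
}

/* Patched logic: clamp the index back into range before using it. */
static uint32_t pick_cq_new(void)
{
	uint32_t cq_idx;
	if (cq_next_tx >= cq_num_mapped_cqs)
		cq_next_tx = 0;
	cq_idx = cq_next_tx++;
	return cq_idx;
}

int main(void)
{
	/* Three CQs were mapped and the scheduler last stopped at index 2,
	 * then rte_event_port_unlink() removed one mapping. */
	cq_num_mapped_cqs = 2;

	cq_next_tx = 2;
	printf("old: cq_map[%u] (out of bounds, unlinked port)\n", pick_cq_old());

	cq_next_tx = 2;
	printf("new: cq_map[%u] (wrapped back into range)\n", pick_cq_new());
	return 0;
}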


However, events from atomic/ordered queues may still end up stuck when unlinking (they get scheduled back to the unlinked port). In the case of atomic queues the problem seems to be related to the fid->cq field (struct sw_fid_t) being invalid. With ordered queues, events get stuck in the reorder buffer.
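
As far as I can tell for the atomic case (a simplified sketch of my reading of sw_schedule_atomic_to_cq(), not the driver code itself): while a flow has events in flight, fid->cq stays pinned to the previously chosen port, and that decision never consults the current link state, so the round-robin fix above is never reached for that flow:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for struct sw_fid_t: an atomic flow stays pinned
 * to a CQ (port) while it has events in flight. */
struct fid {
	int32_t cq;      /* pinned CQ, or -1 when not pinned */
	uint32_t pcount; /* events of this flow currently in flight */
};

/* Approximation of the atomic scheduling decision: a pinned flow keeps
 * going to fid->cq even if that port has since been unlinked; only an
 * unpinned flow (cq < 0) takes the round-robin path fixed above. */
static int32_t schedule_atomic(const struct fid *fid)
{
	if (fid->cq >= 0)
		return fid->cq; /* link state is never re-checked here */
	/* ... round-robin over qid->cq_map[] would happen here ... */
	return -1;
}

int main(void)
{
	struct fid flow = { .cq = 3, .pcount = 2 }; /* pinned to port 3 */

	/* Port 3 gets unlinked while the flow still has events in flight:
	 * new events for the same flow are still directed to port 3. */
	printf("flow scheduled to CQ %d after unlink\n", schedule_atomic(&flow));
	return 0;
}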

-Matias
