From mboxrd@z Thu Jan  1 00:00:00 1970
Received: by dpdk.org (Postfix, from userid 33)
	id 148FA5323; Mon, 4 Jun 2018 09:21:19 +0200 (CEST)
From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Mon, 04 Jun 2018 07:21:18 +0000
X-Bugzilla-Reason: AssignedTo
X-Bugzilla-Type: new
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: DPDK
X-Bugzilla-Component: eventdev
X-Bugzilla-Version: 17.11
X-Bugzilla-Keywords:
X-Bugzilla-Severity: major
X-Bugzilla-Who: matias.elo@nokia.com
X-Bugzilla-Status: CONFIRMED
X-Bugzilla-Resolution:
X-Bugzilla-Priority: Normal
X-Bugzilla-Assigned-To: dev@dpdk.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields: bug_id short_desc product version rep_platform
	op_sys bug_status bug_severity priority component assigned_to reporter
	target_milestone attachments.created
Content-Type: text/plain; charset="UTF-8"
X-Bugzilla-URL: http://dpdk.org/tracker/
Auto-Submitted: auto-generated
X-Auto-Response-Suppress: All
MIME-Version: 1.0
Subject: [dpdk-dev] [Bug 60] rte_event_port_unlink() causes subsequent
	events to end up in wrong port
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
X-List-Received-Date: Mon, 04 Jun 2018 07:21:19 -0000

https://dpdk.org/tracker/show_bug.cgi?id=60

            Bug ID: 60
           Summary: rte_event_port_unlink() causes subsequent events to
                    end up in wrong port
           Product: DPDK
           Version: 17.11
          Hardware: x86
                OS: Linux
            Status: CONFIRMED
          Severity: major
          Priority: Normal
         Component: eventdev
          Assignee: dev@dpdk.org
          Reporter: matias.elo@nokia.com
  Target Milestone: ---

Created attachment 8
  --> https://dpdk.org/tracker/attachment.cgi?id=8&action=edit
Test application

I'm seeing some unexpected(?) behavior when calling rte_event_port_unlink()
with the SW eventdev driver (DPDK 17.11.2/18.02.1,
RTE_EVENT_MAX_QUEUES_PER_DEV=255). After calling rte_event_port_unlink(),
the enqueued events may end up either back in the unlinked port or in port
zero.

Scenario:
- Run the SW eventdev on a service core.
- Start the eventdev with e.g. 16 ports. Each core will have a dedicated
  port.
- Create 1 atomic queue and link all active ports to it (some ports may not
  be linked). A rough sketch of this setup is included further below.
- Allocate some events and enqueue them to the created queue.
- Next, each worker core does a number of scheduling rounds concurrently.
  E.g.

        uint64_t rx_events = 0;

        while (rx_events < SCHED_ROUNDS) {
                num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);

                if (num_deq) {
                        rx_events++;
                        rte_event_enqueue_burst(dev_id, port_id, ev, 1);
                }
        }

- This works fine, but problems occur when doing cleanup after the first
  loop finishes on some core. E.g.

        rte_event_port_unlink(dev_id, port_id, NULL, 0);

        while (1) {
                num_deq = rte_event_dequeue_burst(dev_id, port_id, ev, 1, 0);

                if (num_deq == 0)
                        break;

                rte_event_enqueue_burst(dev_id, port_id, ev, 1);
        }

- The events enqueued in the cleanup loop randomly end up either back in
  the same port (which has already been unlinked) or in port zero, which is
  not used at all (port_id is mapped from rte_lcore_id()).

As far as I understand the eventdev API, an eventdev port shouldn't have to
be linked to the target queue for enqueue to work properly.

I've attached a simple test application for reproducing this issue.

# sudo ./eventdev --vdev event_sw0 -s 0x2

Below is an example rte_event_dev_dump() output when processing events with
two cores (ports 2 and 3). The rest of the ports are not linked at all, but
events still end up in port zero, stalling the system.
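For reference, the setup described in the scenario above boils down to
roughly the following. This is only a simplified sketch, not the attached
test application verbatim: the device id, the use of default queue/port
configurations, and the lcore-to-port mapping are assumptions, and error
handling plus the initial event allocation/enqueue are omitted.

        #include <rte_eventdev.h>
        #include <rte_lcore.h>

        #define NUM_PORTS 16

        static int
        setup_eventdev(uint8_t dev_id)
        {
                struct rte_event_dev_info info;
                struct rte_event_dev_config config = { 0 };
                uint8_t queue_id = 0;
                unsigned int lcore_id;
                uint8_t port_id;

                rte_event_dev_info_get(dev_id, &info);

                config.nb_event_queues = 1;
                config.nb_event_ports = NUM_PORTS;
                config.nb_events_limit = info.max_num_events;
                config.nb_event_queue_flows = info.max_event_queue_flows;
                config.nb_event_port_dequeue_depth =
                        info.max_event_port_dequeue_depth;
                config.nb_event_port_enqueue_depth =
                        info.max_event_port_enqueue_depth;

                if (rte_event_dev_configure(dev_id, &config) < 0)
                        return -1;

                /* NULL -> default queue config; the test application sets
                 * up this single queue as atomic. */
                if (rte_event_queue_setup(dev_id, queue_id, NULL) < 0)
                        return -1;

                /* One port per core; NULL -> default port config. */
                for (port_id = 0; port_id < NUM_PORTS; port_id++)
                        if (rte_event_port_setup(dev_id, port_id, NULL) < 0)
                                return -1;

                /* Link only the ports of active worker cores to the single
                 * queue; the remaining ports (e.g. port 0) stay unlinked. */
                RTE_LCORE_FOREACH_SLAVE(lcore_id) {
                        uint8_t pid = (uint8_t)lcore_id;

                        if (rte_event_port_link(dev_id, pid, &queue_id,
                                                NULL, 1) != 1)
                                return -1;
                }

                /* Event scheduling itself runs on the service core given
                 * with the -s coremask. */
                return rte_event_dev_start(dev_id);
        }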
Regards,
Matias

EventDev todo-fix-name: ports 16, qids 1
        rx   908342
        drop 0
        tx   908342
        sched calls: 42577156
        sched cq/qid call: 43120490
        sched no IQ enq: 42122057
        sched no CQ enq: 42122064
        inflight 32, credits: 4064
  Port 0
        rx   0          drop 0  tx   2          inflight 2
        Max New: 1024   Avg cycles PP: 0        Credits: 0
        Receive burst distribution: 0:-nan%
        rx ring used:    0      free: 4096
        cq ring used:    2      free:   14
  Port 1
        rx   0          drop 0  tx   0          inflight 0
        Max New: 1024   Avg cycles PP: 0        Credits: 0
        Receive burst distribution: 0:-nan%
        rx ring used:    0      free: 4096
        cq ring used:    0      free:   16
  Port 2
        rx   524292     drop 0  tx   524290     inflight 0
        Max New: 1024   Avg cycles PP: 190      Credits: 30
        Receive burst distribution: 0:98% 1-4:1.82%
        rx ring used:    0      free: 4096
        cq ring used:    0      free:   16
  Port 3
        rx   384050     drop 0  tx   384050     inflight 0
        Max New: 1024   Avg cycles PP: 191      Credits: 0
        Receive burst distribution: 0:100% 1-4:0.04%
        rx ring used:    0      free: 4096
        cq ring used:    0      free:   16
  ...
  Port 15
        rx   0          drop 0  tx   0          inflight 0
        Max New: 1024   Avg cycles PP: 0        Credits: 0
        Receive burst distribution: 0:-nan%
        rx ring used:    0      free: 4096
        cq ring used:    0      free:   16
  Queue 0 (Atomic)
        rx   908342     drop 0  tx   908342
        Per Port Stats:
          Port 0: Pkts: 2 Flows: 1
          Port 1: Pkts: 0 Flows: 0
          Port 2: Pkts: 524290 Flows: 0
          Port 3: Pkts: 384050 Flows: 0
          Port 4: Pkts: 0 Flows: 0
          Port 5: Pkts: 0 Flows: 0
          Port 6: Pkts: 0 Flows: 0
          Port 7: Pkts: 0 Flows: 0
          Port 8: Pkts: 0 Flows: 0
          Port 9: Pkts: 0 Flows: 0
          Port 10: Pkts: 0 Flows: 0
          Port 11: Pkts: 0 Flows: 0
          Port 12: Pkts: 0 Flows: 0
          Port 13: Pkts: 0 Flows: 0
          Port 14: Pkts: 0 Flows: 0
          Port 15: Pkts: 0 Flows: 0
        -- iqs empty --

-- 
You are receiving this mail because:
You are the assignee for the bug.