From: Hari Haran <info2hariharan@gmail.com>
To: users@dpdk.org
Subject: EventDev Queue Hang after process restart
Date: Mon, 8 May 2023 19:59:02 +0530
Message-ID: <CAPbxCtqpQwk2Ey69ibJnm_Jis0iKy7bakqYUvPcRG=iAwjP_XQ@mail.gmail.com>
Our data-plane application runs as a single process (this is not the multi-process case). To pass packets from one thread to another (the two threads are pinned to different cores), we use a DPDK eventdev queue.

After a process restart, we observe enqueue failures: rte_event_enqueue_burst() starts rejecting packets. Cross-checking the eventdev stats, the packets appear to be stuck inflight rather than posted to the queue, and enqueue keeps failing without ever recovering. The schedule call is also invoked many times.

Note: the receiver thread does not poll in a tight loop.

Event enqueue function: rte_event_enqueue_burst() - this returns an enqueue failure
Schedule call function: rte_service_run_iter_on_app_lcore()

How can I fix this? Any help is appreciated.
DevId 0
Tx port 2
Rx port 0
Queue 0 is used in this case.
The relevant log information is below:
*EAL Arguments used:*
ArgCnt 0 Value Appuserplane
ArgCnt 1 Value -l 93,38,94,39
ArgCnt 2 Value -n 2
ArgCnt 3 Value --main-lcore=93
*EAL prints during Init:*
EAL: Detected 112 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: No legacy callbacks, legacy socket not created
*Event Dev Stats:*
Dev=0 Port=1
EventDev todo-fix-name: ports 4, qids 3
rx 32768
drop 0
tx 4096
sched calls: 628945658
sched cq/qid call: 628964843
sched no IQ enq: 628926401
sched no CQ enq: 628942982
inflight 32768, credits: 0
Port 0
rx 0 drop 0 tx 4096 inflight 4096
Max New: 32768 Avg cycles PP: 0 Credits: 0
Receive burst distribution:
0:100% 1-4:0.00% 5-8:0.00% 9-12:0.00%
rx ring used: 0 free: 4096
cq ring used: 0 free: 128
Port 1
rx 0 drop 0 tx 0 inflight 0
Max New: 32768 Avg cycles PP: 0 Credits: 0
Receive burst distribution:
0:100%
rx ring used: 0 free: 4096
cq ring used: 0 free: 128
Port 2
rx 32768 drop 0 tx 0 inflight 0
Max New: 32768 Avg cycles PP: 0 Credits: 0
Receive burst distribution:
0:-nan%
rx ring used: 0 free: 4096
cq ring used: 0 free: 128
Port 3 (SingleCons)
rx 0 drop 0 tx 0 inflight 0
Max New: 32768 Avg cycles PP: 0 Credits: 0
Receive burst distribution:
0:-nan%
rx ring used: 0 free: 4096
cq ring used: 0 free: 128
Queue 0 (Atomic)
rx 32768 drop 0 tx 4096
Per Port Stats:
Port 0: Pkts: 4096 Flows: 1
Port 1: Pkts: 0 Flows: 0
Port 2: Pkts: 0 Flows: 0
Port 3: Pkts: 0 Flows: 0
iq 0: Used 28672
Queue 1 (Atomic)
rx 0 drop 0 tx 0
Per Port Stats:
Port 0: Pkts: 0 Flows: 0
Port 1: Pkts: 0 Flows: 0
Port 2: Pkts: 0 Flows: 0
Port 3: Pkts: 0 Flows: 0
-- iqs empty --
Queue 2 (Directed)
rx 0 drop 0 tx 0
Per Port Stats:
Port 0: Pkts: 0 Flows: 0
Port 1: Pkts: 0 Flows: 0
Port 2: Pkts: 0 Flows: 0
Port 3: Pkts: 0 Flows: 0
-- iqs empty --