From: RajeshKumar Kalidass <rajesh.kalidass@gigamon.com>
To: "mk@semihalf.com" <mk@semihalf.com>,
"igorch@amazon.com" <igorch@amazon.com>,
"gtzalik@amazon.com" <gtzalik@amazon.com>,
"ar@semihalf.com" <ar@semihalf.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: Tanmay Kishore <tanmay.kishore@gigamon.com>,
Rakesh Jagota <rakesh.jagota@gigamon.com>
Subject: [dpdk-dev] net/ena: traffic lock
Date: Tue, 22 Dec 2020 08:03:21 +0000 [thread overview]
Message-ID: <BY5PR12MB374561392BF3AD087E1E5465F2DF0@BY5PR12MB3745.namprd12.prod.outlook.com> (raw)
DPDK: 19.11
Driver: ena
During longevity testing (24+ hours) at 10 Gbps, one of the Tx queues gets into an unrecoverable state: it can neither find enough free Tx descriptors nor free any up.
So on every Tx burst, eth_ena_xmit_pkts() neither finds a free Tx descriptor nor manages to free one (ena_com_tx_comp_req_id_get() always returns ENA_COM_TRY_AGAIN).
We see that eth_ena_xmit_pkts() has been refactored in the latest LTS version; was a related issue fixed there? Can you help?
(gdb) p *(struct ena_ring *) rte_eth_devices[2].data->tx_queues[5]
$14 = {
next_to_use = 4979,
next_to_clean = 3958,
type = ENA_RING_TYPE_TX,
tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_DEV,
{
empty_tx_reqs = 0x11e406b00,
empty_rx_reqs = 0x11e406b00
},
{
tx_buffer_info = 0x11d2dfc80,
rx_buffer_info = 0x11d2dfc80
},
rx_refill_buffer = 0x0,
ring_size = 1024,
ena_com_io_cq = 0x11e40e640,
ena_com_io_sq = 0x11e4168c0,
ena_bufs = {{
len = 0,
req_id = 0
} <repeats 17 times>},
mb_pool = 0x0,
port_id = 2,
id = 5,
tx_max_header_size = 96 '`',
configured = 1,
push_buf_intermediate_buf = 0x11e406a00 "",
adapter = 0x11e40e040,
offloads = 2,
sgl_size = 17,
{
rx_stats = {
cnt = 4979,
bytes = 417580,
refill_partial = 35426,
bad_csum = 0,
mbuf_alloc_fail = 0,
bad_desc_num = 38603,
bad_req_id = 3178
},
tx_stats = {
cnt = 4979,
bytes = 417580,
prepare_ctx_err = 35426, <-- Errors
linearize = 0,
linearize_failed = 0,
tx_poll = 38603,
doorbells = 3178,
bad_req_id = 0,
available_desc = 2
}
},
numa_socket_id = 0
}
Thanks,
-Rajesh
Thread overview: 3+ messages
2020-12-22  8:03 RajeshKumar Kalidass [this message]
2020-12-22  9:46 ` Chauskin, Igor
2020-12-22  9:37 Rajesh Kumar