DPDK usage discussions
From: Vishal Mohan <vishal.mohan@tatacommunications.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] IPv4 Fragmentation - indirect pool gets exhausted
Date: Thu, 1 Jul 2021 22:08:24 +0000
Message-ID: <SG2PR04MB31731BE232496C58C1341676E4009@SG2PR04MB3173.apcprd04.prod.outlook.com>

Hi,

I'm trying to fragment an IPv4 packet using the logic below:

/////////////////////////////////////////////////////////////////////

~after pkts ingress~

struct rte_port_ring_writer *p = port_out->h_port;

/* look up the two pools used for fragmentation by name */
pool_direct = rte_mempool_lookup("MEMPOOL0");
pool_indirect = rte_mempool_lookup("MEMPOOL1");

printf("before frag mempool size d %u in %u\n",
       rte_mempool_avail_count(pool_direct),
       rte_mempool_avail_count(pool_indirect));

struct rte_mbuf *frag_pkts[MAX_FRAG_SIZE];
int out_pkts = rte_ipv4_fragment_packet(m, frag_pkts, n_frags, ip_mtu,
                                        pool_direct, pool_indirect);

printf("after frag mempool size d %u in %u\n",
       rte_mempool_avail_count(pool_direct),
       rte_mempool_avail_count(pool_indirect));

if (out_pkts > 0)
        /* push the fragments out through the ring writer port */
        port_out->ops.f_tx_bulk(port_out->h_port, frag_pkts,
                                RTE_LEN2MASK(out_pkts, uint64_t));
else
        printf("frag failed\n");

/* free the original packet */
rte_pktmbuf_free(m);

/////////////////////////////////////////////////////////////////////
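For completeness: the two pools are only looked up by name above; their creation is not shown in this mail. A rough sketch of the assumed setup (pool and cache sizes here are placeholders, not our real values) would be:

#include <stdlib.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Placeholder sizes, not the production values. MEMPOOL1 is meant to hold
 * indirect mbufs only, so it is created with zero data room. */
#define NB_MBUF    (1 << 21)
#define CACHE_SIZE 256

static struct rte_mempool *pool_direct;
static struct rte_mempool *pool_indirect;

static void
setup_pools(void)
{
        pool_direct = rte_pktmbuf_pool_create("MEMPOOL0", NB_MBUF, CACHE_SIZE,
                                              0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                              rte_socket_id());
        pool_indirect = rte_pktmbuf_pool_create("MEMPOOL1", NB_MBUF, CACHE_SIZE,
                                                0, 0 /* zero data room */,
                                                rte_socket_id());
        if (pool_direct == NULL || pool_indirect == NULL)
                rte_exit(EXIT_FAILURE, "cannot create mbuf pools\n");
}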

Now the problem here is that the indirect mempool gets exhausted. As a result, after a few bursts of packets the fragmentation fails with -ENOMEM. I cannot quite understand why the PMD doesn't free the indirect mbufs and put the objects back into MEMPOOL1. Please find below the log for the above snippet, which prints the available objects in the direct (d) and indirect (in) mempools:

before frag mempool size d 2060457 in 2095988
after frag mempool size d 2060344 in 2095952
before frag mempool size d 2060361 in 2095945
after frag mempool size d 2060215 in 2095913
.
.
.
before frag mempool size d 2045013 in 0
after frag mempool size d 2045013 in 0
before frag mempool size d 2045013 in 0
after frag mempool size d 2045013 in 0
before frag mempool size d 2045013 in 0

I can see the direct mempool count decrease and increase as packets ingress and are dropped or egressed, as expected. I can also confirm that I receive an initial run of fragmented packets equal in number to the MEMPOOL1 size before the failures start. Any input towards understanding the cause of the problem is much appreciated.
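For what it's worth, a small diagnostic I am thinking of adding right after rte_ipv4_fragment_packet() (my own sketch; it assumes each fragment is a chain whose payload segment is an indirect mbuf from MEMPOOL1) looks like this:

/* Diagnostic only: walk every segment of every fragment and print which
 * pool it belongs to, its refcount, and whether it is an indirect mbuf. */
for (int i = 0; i < out_pkts; i++) {
        struct rte_mbuf *seg;

        for (seg = frag_pkts[i]; seg != NULL; seg = seg->next)
                printf("frag %d: pool=%s refcnt=%u indirect=%d\n",
                       i, seg->pool->name,
                       (unsigned)rte_mbuf_refcnt_read(seg),
                       RTE_MBUF_CLONED(seg) ? 1 : 0);
}

That should at least confirm whether the MEMPOOL1 objects are sitting in the transmitted fragments waiting to be freed by the PMD.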

P.S.: We had the same problem with DPDK 17.11. There we had to refactor rte_ipv4_fragment_packet() so that it does not chain indirect mbufs for the fragments and instead generates them as standalone buffers.
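For reference, that workaround was roughly along the lines of the sketch below (simplified and from memory, not the exact patch we carried; per-fragment IPv4 header fixup is omitted and a single-segment input without IP options is assumed):

#include <errno.h>
#include <rte_common.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Build each fragment as a fresh direct mbuf and copy its share of the
 * payload into it, instead of attaching indirect mbufs that keep the
 * original packet's data alive. */
static int
frag_by_copy(struct rte_mbuf *m, struct rte_mbuf **out, int max_out,
             uint16_t ip_mtu, struct rte_mempool *pool)
{
        uint32_t hdr = sizeof(struct rte_ipv4_hdr);
        uint32_t step = (ip_mtu - hdr) & ~7U; /* payload per frag, 8-byte aligned */
        uint32_t off = hdr;
        int n = 0;

        while (off < m->pkt_len && n < max_out) {
                uint32_t len = RTE_MIN(step, m->pkt_len - off);
                struct rte_mbuf *f = rte_pktmbuf_alloc(pool);
                char *dst = (f != NULL) ? rte_pktmbuf_append(f, hdr + len) : NULL;

                if (dst == NULL) {
                        rte_pktmbuf_free(f);
                        while (n > 0)
                                rte_pktmbuf_free(out[--n]);
                        return -ENOMEM;
                }
                /* copy the original IPv4 header, then this fragment's payload */
                rte_memcpy(dst, rte_pktmbuf_mtod(m, char *), hdr);
                rte_memcpy(dst + hdr,
                           rte_pktmbuf_mtod_offset(m, char *, off), len);
                /* TODO: set fragment_offset / MF and recompute the checksum */
                out[n++] = f;
                off += len;
        }
        return n;
}

In the datapath above it would be called with pool_direct only, and the resulting fragments pushed through f_tx_bulk() in the same way.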

Thanks & Regards,
Vishal Mohan


Tata Communications - Public

