DPDK usage discussions
* [dpdk-users] IPv4 Fragmentation - indirect pool gets exhausted
@ 2021-07-01 22:08 Vishal Mohan
From: Vishal Mohan @ 2021-07-01 22:08 UTC (permalink / raw)
  To: users

Hi,

I'm trying to fragment an IPv4 packet using the logic below:

/////////////////////////////////////////////////////////////////////

/* after pkts ingress */

struct rte_port_ring_writer *p = port_out->h_port;

/* fragment headers are allocated from the direct pool, the fragment data
 * segments are attached as indirect mbufs from the indirect pool */
pool_direct = rte_mempool_lookup("MEMPOOL0");
pool_indirect = rte_mempool_lookup("MEMPOOL1");

printf("before frag mempool size d %d in %d\n",
       rte_mempool_avail_count(pool_direct),
       rte_mempool_avail_count(pool_indirect));

struct rte_mbuf *frag_pkts[MAX_FRAG_SIZE];
int out_pkts = rte_ipv4_fragment_packet(m, frag_pkts, n_frags, ip_mtu,
                                        pool_direct, pool_indirect);

printf("after frag mempool size d %d in %d\n",
       rte_mempool_avail_count(pool_direct),
       rte_mempool_avail_count(pool_indirect));

if (out_pkts > 0)
        /* send the fragments; RTE_LEN2MASK() builds the valid-packet bitmask */
        port_out->ops.f_tx_bulk(port_out->h_port, frag_pkts,
                                RTE_LEN2MASK(out_pkts, uint64_t));
else
        printf("frag failed\n");

/* drop the original packet; its data is still referenced by the
 * indirect fragments until they are freed */
rte_pktmbuf_free(m);

/////////////////////////////////////////////////////////////////////
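
For context, my expectation is that each fragment returned by rte_ipv4_fragment_packet() is a direct mbuf (the new IP header, from MEMPOOL0) chained to an indirect mbuf (from MEMPOOL1) that references the original packet's data, so freeing a transmitted fragment should put the indirect object back into MEMPOOL1. A minimal sketch of that expectation (the helper below is hypothetical, not part of our pipeline):

/////////////////////////////////////////////////////////////////////

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical helper, only to illustrate the expectation: freeing a
 * fragment chain produced by rte_ipv4_fragment_packet() should detach
 * its indirect segment(s) and return them to the indirect pool. */
static void
free_frags_and_check(struct rte_mbuf **frag_pkts, int out_pkts,
                     struct rte_mempool *pool_indirect)
{
        unsigned int before = rte_mempool_avail_count(pool_indirect);
        int i;

        for (i = 0; i < out_pkts; i++)
                rte_pktmbuf_free(frag_pkts[i]); /* frees header + indirect segs */

        printf("indirect pool %u -> %u (expected to grow back)\n",
               before, rte_mempool_avail_count(pool_indirect));
}

/////////////////////////////////////////////////////////////////////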

Now the problem here is that the indirect mempool gets exhausted. As a result, after a few bursts of packets the fragmentation fails with -ENOMEM. I cannot quite understand why the PMD doesn't free the indirect mbufs and put the objects back into MEMPOOL1. Please find the log below for the above snippet, which prints the available slots in the direct (d) and indirect (in) mempools:

before frag mempool size d 2060457 in 2095988
after frag mempool size d 2060344 in 2095952
before frag mempool size d 2060361 in 2095945
after frag mempool size d 2060215 in 2095913
.
.
.
before frag mempool size d 2045013 in 0
after frag mempool size d 2045013 in 0
before frag mempool size d 2045013 in 0
after frag mempool size d 2045013 in 0
before frag mempool size d 2045013 in 0

I can see the direct mempool count decrease and increase as packets ingress and are dropped or egress, as expected. I can also confirm that I receive an initial number of fragmented packets equal to the MEMPOOL1 size before the failures start. Any inputs towards understanding the cause of the problem are much appreciated.
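
In case it helps, this is the kind of diagnostic I intend to add around the snippet above (a hypothetical helper, names are mine): it prints the in-use count of both pools and dumps one fragment's segment chain, so indirect segments that are never freed should show up there.

/////////////////////////////////////////////////////////////////////

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical diagnostic helper: show how many objects each pool has in
 * flight and dump one fragment's chain (the second segment should be an
 * indirect mbuf from the indirect pool, with its refcnt visible). */
static void
dump_frag_state(struct rte_mempool *pool_direct,
                struct rte_mempool *pool_indirect,
                struct rte_mbuf *first_frag)
{
        printf("direct   avail=%u in_use=%u\n",
               rte_mempool_avail_count(pool_direct),
               rte_mempool_in_use_count(pool_direct));
        printf("indirect avail=%u in_use=%u\n",
               rte_mempool_avail_count(pool_indirect),
               rte_mempool_in_use_count(pool_indirect));

        if (first_frag != NULL)
                rte_pktmbuf_dump(stdout, first_frag, 0);
}

/////////////////////////////////////////////////////////////////////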

P.S.: We had the same problem on DPDK 17.11. There we had to refactor rte_ipv4_fragment_packet() to not use indirect chaining of the fragments and instead generate them directly.
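
That copy-based approach was roughly along the lines of the sketch below (rewritten from memory, so the helper name and details are illustrative and not the actual patch; it assumes a single-segment input with a 20-byte IPv4 header and no prior fragmentation): every fragment gets its own standalone mbuf and the payload is copied, so the indirect pool is not involved at all.

/////////////////////////////////////////////////////////////////////

#include <errno.h>
#include <rte_common.h>
#include <rte_byteorder.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Illustrative copy-based fragmentation (not the actual 17.11 patch):
 * each fragment is a standalone direct mbuf with the payload copied
 * from the input, so no indirect mbufs are attached. */
static int
fragment_by_copy(struct rte_mbuf *m, struct rte_mbuf **frags, int nb_frags,
                 uint16_t mtu, struct rte_mempool *pool)
{
        struct rte_ipv4_hdr *in_ip = rte_pktmbuf_mtod(m, struct rte_ipv4_hdr *);
        uint16_t hdr_len = sizeof(*in_ip);
        uint16_t payload = rte_pktmbuf_pkt_len(m) - hdr_len;
        uint16_t frag_data = (mtu - hdr_len) & ~7U; /* multiple of 8 bytes */
        uint16_t off = 0;
        int i = 0;

        while (payload > 0 && i < nb_frags) {
                uint16_t len = RTE_MIN(frag_data, payload);
                struct rte_mbuf *f = rte_pktmbuf_alloc(pool);
                char *dst;

                if (f == NULL)
                        goto fail;
                dst = rte_pktmbuf_append(f, hdr_len + len);
                if (dst == NULL) {
                        rte_pktmbuf_free(f);
                        goto fail;
                }

                /* Copy the IP header, then this fragment's slice of payload. */
                rte_memcpy(dst, in_ip, hdr_len);
                rte_memcpy(dst + hdr_len,
                           rte_pktmbuf_mtod_offset(m, char *, hdr_len + off), len);

                struct rte_ipv4_hdr *out_ip = (struct rte_ipv4_hdr *)dst;
                uint16_t flag_off = off / 8;

                payload -= len;
                if (payload > 0)
                        flag_off |= RTE_IPV4_HDR_MF_FLAG; /* more fragments */
                out_ip->fragment_offset = rte_cpu_to_be_16(flag_off);
                out_ip->total_length = rte_cpu_to_be_16(hdr_len + len);
                out_ip->hdr_checksum = 0;
                out_ip->hdr_checksum = rte_ipv4_cksum(out_ip);

                frags[i++] = f;
                off += len;
        }
        return i;

fail:
        while (i > 0)
                rte_pktmbuf_free(frags[--i]);
        return -ENOMEM;
}

/////////////////////////////////////////////////////////////////////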

Thanks & Regards,
Vishal Mohan

