DPDK usage discussions
* [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
@ 2018-10-12  4:48 Wajeeha Javed
  2018-10-12  8:56 ` Andrew Rybchenko
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Wajeeha Javed @ 2018-10-12  4:48 UTC (permalink / raw)
  To: users

Hi,

I am in the process of developing a DPDK-based application in which I would
like to delay packets for about 2 seconds. There are two ports connected to
the DPDK app, each receiving 64-byte packets at a line rate of 10 Gbit/s.
Within 2 seconds, the delay application will therefore hold about 28 million
packets for each port. The maximum Rx descriptor ring size is 16384, and I am
unable to increase the number of Rx descriptors beyond that value. Is it
possible to increase the number of Rx descriptors to a larger value, e.g.
65536? Because of this limit, I instead copy the mbufs using the pktmbuf copy
code shown below and free the received packet. The issue now is that I cannot
copy more than 5 million packets, because the nb_mbufs of the mempool cannot
be more than 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF
macro beyond 5 million, the mbuf pool initialization fails with an "unable to
init mbuf pool" error. Is there a possible way to increase the mempool size?
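
For reference, here is a simplified sketch of the kind of pool-creation call
I am using (the pool name, cache size and data room size are placeholders, not
my exact code); printing rte_errno at least shows why the creation fails:

<Code>

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NB_MBUF 5000000

static struct rte_mempool *
create_delay_pool(void)
{
    struct rte_mempool *pool;

    pool = rte_pktmbuf_pool_create("delay_pool", NB_MBUF,
                                   256,                       /* per-lcore cache size */
                                   0,                         /* private data size */
                                   RTE_MBUF_DEFAULT_BUF_SIZE, /* data room */
                                   rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "Unable to init mbuf pool: %s\n",
                 rte_strerror(rte_errno));
    return pool;
}

</Code>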

Furthermore, kindly let me know whether this is the appropriate mailing list
for this type of question.

<Code>

static inline struct rte_mbuf *
pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
{
    struct rte_mbuf *mc = NULL;
    struct rte_mbuf **prev = &mc;

    do {
        struct rte_mbuf *mi;

        mi = rte_pktmbuf_alloc(mp);
        if (unlikely(mi == NULL)) {
            rte_pktmbuf_free(mc);
            rte_exit(EXIT_FAILURE,
                     "Unable to Allocate Memory. Memory Failure.\n");
            return NULL; /* not reached: rte_exit() terminates the process */
        }

        /* Copy the per-segment metadata. */
        mi->data_off = md->data_off;
        mi->data_len = md->data_len;
        mi->port = md->port;
        mi->vlan_tci = md->vlan_tci;
        mi->tx_offload = md->tx_offload;
        mi->hash = md->hash;

        mi->next = NULL;
        mi->pkt_len = md->pkt_len;
        mi->nb_segs = md->nb_segs;
        mi->ol_flags = md->ol_flags;
        mi->packet_type = md->packet_type;

        /* Copy the segment payload. */
        rte_memcpy(rte_pktmbuf_mtod(mi, char *),
                   rte_pktmbuf_mtod(md, char *), md->data_len);

        *prev = mi;
        prev = &mi->next;
    } while ((md = md->next) != NULL);

    *prev = NULL;
    return mc;
}

</Code>

Reference: http://patchwork.dpdk.org/patch/6289/

Thanks & Best Regards,

Wajeeha Javed


* Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
  2018-10-12  4:48 [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool Wajeeha Javed
@ 2018-10-12  8:56 ` Andrew Rybchenko
  2018-10-12 11:35 ` Wiles, Keith
  2018-10-12 11:40 ` Wiles, Keith
  2 siblings, 0 replies; 4+ messages in thread
From: Andrew Rybchenko @ 2018-10-12  8:56 UTC (permalink / raw)
  To: Wajeeha Javed, users

Hi,

On 10/12/18 7:48 AM, Wajeeha Javed wrote:
> Hi,
>
> I am in the process of developing a DPDK-based application in which I would
> like to delay packets for about 2 seconds. There are two ports connected to
> the DPDK app, each receiving 64-byte packets at a line rate of 10 Gbit/s.
> Within 2 seconds, the delay application will therefore hold about 28 million
> packets for each port. The maximum Rx descriptor ring size is 16384, and I
> am unable to increase the number of Rx descriptors beyond that value. Is it
> possible to increase the number of Rx descriptors to a larger value, e.g.
> 65536? Because of this limit, I instead copy the mbufs using the pktmbuf
> copy code shown below and free the received packet. The issue now is that I
> cannot copy more than 5 million packets, because the nb_mbufs of the mempool
> cannot be more than 5 million (#define NB_MBUF 5000000). If I increase the
> NB_MBUF macro beyond 5 million, the mbuf pool initialization fails with an
> "unable to init mbuf pool" error. Is there a possible way to increase the
> mempool size?

At first glance, I have failed to find any explicit limitation; the NB_MBUF
define is typically internal to the examples/apps. The question I'd like to
double-check: does the host have enough RAM and hugepages allocated? 5 million
mbufs already require about 10G.
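
Roughly, assuming the default 2048-byte data room and the standard headroom
(the real mempool adds per-object and ring overhead on top of this), a quick
back-of-the-envelope check:

<Code>

#include <stdio.h>
#include <rte_mbuf.h>

int main(void)
{
    /* Per-mbuf footprint: mbuf header + headroom + data room (defaults). */
    size_t elt = sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM + 2048;
    size_t total = elt * 5000000UL;   /* 5 million mbufs */

    printf("~%zu bytes per mbuf, ~%zu GB total\n",
           elt, total / (1024UL * 1024 * 1024));
    return 0;
}

</Code>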

Andrew.


* Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
  2018-10-12  4:48 [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool Wajeeha Javed
  2018-10-12  8:56 ` Andrew Rybchenko
@ 2018-10-12 11:35 ` Wiles, Keith
  2018-10-12 11:40 ` Wiles, Keith
  2 siblings, 0 replies; 4+ messages in thread
From: Wiles, Keith @ 2018-10-12 11:35 UTC (permalink / raw)
  To: Wajeeha Javed; +Cc: users



On Oct 11, 2018, at 11:48 PM, Wajeeha Javed <wajeeha.javed123@gmail.com> wrote:

Hi,

I am in the process of developing a DPDK-based application in which I would
like to delay packets for about 2 seconds. There are two ports connected to
the DPDK app, each receiving 64-byte packets at a line rate of 10 Gbit/s.
Within 2 seconds, the delay application will therefore hold about 28 million
packets for each port. The maximum Rx descriptor ring size is 16384, and I am
unable to increase the number of Rx descriptors beyond that value. Is it
possible to increase the number of Rx descriptors to a larger value, e.g.
65536?

This is most likely a limitation of the NIC being used, and increasing beyond that value will not be possible; please check the programmer's guide for the NIC being used.
Because of this limit, I instead copy the mbufs using the pktmbuf copy code
shown below and free the received packet. The issue now is that I cannot copy
more than 5 million packets, because the nb_mbufs of the mempool cannot be
more than 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF macro
beyond 5 million, the mbuf pool initialization fails with an "unable to init
mbuf pool" error. Is there a possible way to increase the mempool size?

The mempool uses uint32_t for most sizes, and the number of mempool items is a uint32_t, so the number of entries in a pool can be ~4G. As stated, make sure you have enough memory, as the overhead for mbufs is not just the header plus the packet size.

My question is why you are copying the mbuf rather than just linking the mbufs into a linked list; maybe I do not understand the reason. I would try to make sure you do not copy the data and instead link the mbufs together using the next pointer in the mbuf header, unless you already have chained (multi-segment) mbufs.

The other question is whether you can drop any packets. If not, then you only have the linking option, IMO. If you can drop packets, then you can just start dropping them when the ring is getting full. Holding onto 28M packets for two seconds can cause other protocol-related problems: TCP could be sending retransmissions, and now you have caused a bunch of extra work on the RX side at the endpoint.


Furthermore, kindly let me know whether this is the appropriate mailing list
for this type of question.

You are on the correct email list; dev@dpdk.org is normally for DPDK development discussions.

Hope this helps.

<Code>

static inline struct rte_mbuf *

pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
{
struct rte_mbuf *mc = NULL;
struct rte_mbuf **prev = &mc;

do {
   struct rte_mbuf *mi;

   mi = rte_pktmbuf_alloc(mp);
   if (unlikely(mi == NULL)) {
       rte_pktmbuf_free(mc);

       rte_exit(EXIT_FAILURE, "Unable to Allocate Memory. Memory Failure.\n");
       return NULL;
   }

   mi->data_off = md->data_off;
   mi->data_len = md->data_len;
   mi->port = md->port;
   mi->vlan_tci = md->vlan_tci;
   mi->tx_offload = md->tx_offload;
   mi->hash = md->hash;

   mi->next = NULL;
   mi->pkt_len = md->pkt_len;
   mi->nb_segs = md->nb_segs;
   mi->ol_flags = md->ol_flags;
   mi->packet_type = md->packet_type;

  rte_memcpy(rte_pktmbuf_mtod(mi, char *), rte_pktmbuf_mtod(md, char *),
md->data_len);
  *prev = mi;
  prev = &mi->next;
} while ((md = md->next) != NULL);

*prev = NULL;
return mc;

}

</Code>

Reference: http://patchwork.dpdk.org/patch/6289/

Thanks & Best Regards,

Wajeeha Javed

Regards,
Keith


* Re: [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool
  2018-10-12  4:48 [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool Wajeeha Javed
  2018-10-12  8:56 ` Andrew Rybchenko
  2018-10-12 11:35 ` Wiles, Keith
@ 2018-10-12 11:40 ` Wiles, Keith
  2 siblings, 0 replies; 4+ messages in thread
From: Wiles, Keith @ 2018-10-12 11:40 UTC (permalink / raw)
  To: Wajeeha Javed; +Cc: users

Stupid email program: I tell it to reply in the format as received (text format), and it still sends in rich-text format :-(
Hope this is more readable.


> On Oct 11, 2018, at 11:48 PM, Wajeeha Javed <wajeeha.javed123@gmail.com> wrote:
> 
> Hi,
> 
> I am in the process of developing a DPDK-based application in which I would
> like to delay packets for about 2 seconds. There are two ports connected to
> the DPDK app, each receiving 64-byte packets at a line rate of 10 Gbit/s.
> Within 2 seconds, the delay application will therefore hold about 28 million
> packets for each port. The maximum Rx descriptor ring size is 16384, and I
> am unable to increase the number of Rx descriptors beyond that value. Is it
> possible to increase the number of Rx descriptors to a larger value, e.g.
> 65536?

This is most likely a limitation of the NIC being used, and increasing beyond that value will not be possible; please check the programmer's guide for the NIC being used.
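
If it helps, you can also query the limit from the PMD at runtime instead of
digging through the datasheet. A small sketch (port_id 0 is just an example),
assuming the usual ethdev calls:

<Code>

#include <stdio.h>
#include <rte_ethdev.h>

static void show_rx_desc_limits(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;
    /* Note: descriptor counts are uint16_t in the ethdev API,
     * so 65536 cannot even be requested. */
    uint16_t nb_rxd = 16384, nb_txd = 512;

    rte_eth_dev_info_get(port_id, &dev_info);
    printf("port %u rx_desc_lim: max=%u min=%u align=%u\n",
           port_id, dev_info.rx_desc_lim.nb_max,
           dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_align);

    /* Clamp the requested ring sizes to what the device actually supports. */
    rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd);
    printf("using %u Rx and %u Tx descriptors\n", nb_rxd, nb_txd);
}

</Code>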

> Because of this limit, I instead copy the mbufs using the pktmbuf copy code
> shown below and free the received packet. The issue now is that I cannot
> copy more than 5 million packets, because the nb_mbufs of the mempool cannot
> be more than 5 million (#define NB_MBUF 5000000). If I increase the NB_MBUF
> macro beyond 5 million, the mbuf pool initialization fails with an "unable
> to init mbuf pool" error. Is there a possible way to increase the mempool
> size?

The mempool uses uint32_t for most sizes, and the number of mempool items is a uint32_t, so the number of entries in a pool can be ~4G. As stated, make sure you have enough memory, as the overhead for mbufs is not just the header plus the packet size.

My question is why you are copying the mbuf rather than just linking the mbufs into a linked list; maybe I do not understand the reason. I would try to make sure you do not copy the data and instead link the mbufs together using the next pointer in the mbuf header, unless you already have chained (multi-segment) mbufs.
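
As a variation on that idea (an rte_ring FIFO of mbuf pointers rather than the
next-pointer linking itself), here is a minimal sketch of holding the received
mbufs without copying, using the udata64 field to remember the arrival time,
assuming that field is free in your setup; sizes and names are illustrative:

<Code>

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>
#include <rte_mbuf.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>

static struct rte_ring *delay_ring;    /* FIFO of struct rte_mbuf * */
static struct rte_mbuf *pending;       /* head packet not yet aged */

static void delay_init(void)
{
    /* 1 << 25 (~33M) slots is enough for ~28M in-flight packets. */
    delay_ring = rte_ring_create("delay_q0", 1 << 25, rte_socket_id(),
                                 RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (delay_ring == NULL)
        rte_exit(EXIT_FAILURE, "cannot create delay ring\n");
}

/* RX side: stash the pointer instead of copying the packet. */
static void delay_enqueue(struct rte_mbuf *m)
{
    m->udata64 = rte_get_tsc_cycles();          /* arrival timestamp */
    if (rte_ring_enqueue(delay_ring, m) != 0)
        rte_pktmbuf_free(m);                    /* queue full: drop */
}

/* TX side: transmit the oldest packet once it has aged delay_cycles. */
static void delay_tx_poll(uint16_t port, uint16_t queue, uint64_t delay_cycles)
{
    if (pending == NULL &&
        rte_ring_dequeue(delay_ring, (void **)&pending) != 0)
        return;                                 /* nothing queued */
    if (rte_get_tsc_cycles() - pending->udata64 < delay_cycles)
        return;                                 /* oldest packet still too young */
    if (rte_eth_tx_burst(port, queue, &pending, 1) == 1)
        pending = NULL;                         /* sent; otherwise retry later */
}

</Code>

For a 2-second hold, delay_cycles would be 2 * rte_get_tsc_hz(). The memory
point still applies: every held packet pins its mbuf in the pool, so the pool
must be sized for the number of packets in flight.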

The other question is whether you can drop any packets. If not, then you only have the linking option, IMO. If you can drop packets, then you can just start dropping them when the ring is getting full. Holding onto 28M packets for two seconds can cause other protocol-related problems: TCP could be sending retransmissions, and now you have caused a bunch of extra work on the RX side at the endpoint.

> 
> Furthermore, kindly let me know whether this is the appropriate mailing list
> for this type of question.

You are on the correct email list; dev@dpdk.org is normally for DPDK development discussions.

Hope this helps.

> 
> <Code>
> 
> static inline struct rte_mbuf *
> 
> pktmbuf_copy(struct rte_mbuf *md, struct rte_mempool *mp)
> {
> struct rte_mbuf *mc = NULL;
> struct rte_mbuf **prev = &mc;
> 
> do {
>    struct rte_mbuf *mi;
> 
>    mi = rte_pktmbuf_alloc(mp);
>    if (unlikely(mi == NULL)) {
>        rte_pktmbuf_free(mc);
> 
>        rte_exit(EXIT_FAILURE, "Unable to Allocate Memory. Memory Failure.\n");
>        return NULL;
>    }
> 
>    mi->data_off = md->data_off;
>    mi->data_len = md->data_len;
>    mi->port = md->port;
>    mi->vlan_tci = md->vlan_tci;
>    mi->tx_offload = md->tx_offload;
>    mi->hash = md->hash;
> 
>    mi->next = NULL;
>    mi->pkt_len = md->pkt_len;
>    mi->nb_segs = md->nb_segs;
>    mi->ol_flags = md->ol_flags;
>    mi->packet_type = md->packet_type;
> 
>   rte_memcpy(rte_pktmbuf_mtod(mi, char *), rte_pktmbuf_mtod(md, char *),
> md->data_len);
>   *prev = mi;
>   prev = &mi->next;
> } while ((md = md->next) != NULL);
> 
> *prev = NULL;
> return mc;
> 
> }
> 
> </Code>
> 
> Reference: http://patchwork.dpdk.org/patch/6289/
> 
> Thanks & Best Regards,
> 
> Wajeeha Javed

Regards,
Keith


Thread overview: 4+ messages
2018-10-12  4:48 [dpdk-users] Increasing the NB_MBUFs of PktMbuf MemPool Wajeeha Javed
2018-10-12  8:56 ` Andrew Rybchenko
2018-10-12 11:35 ` Wiles, Keith
2018-10-12 11:40 ` Wiles, Keith
