* [dpdk-users] Mechanism to increase MBUF allocation
@ 2017-05-15 7:14 Neeraj Tandon (netandon)
0 siblings, 0 replies; 3+ messages in thread
From: Neeraj Tandon (netandon) @ 2017-05-15 7:14 UTC (permalink / raw)
To: users
Hi,
I have recently started using DPDK and have based my application on the l2fwd sample application. In my application, I am holding buffers for a period of time and freeing the mbufs in another thread. The default number of mbufs is 8192. I have two questions regarding this:
1. How do I increase the number of mbufs? Increasing NB_MBUF does not have any effect, i.e. I lose packets when more than 8192 packets are sent in a burst. I see the following used for creating the mbuf pool:
/* create the mbuf pool */
l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
If I want to increase the number of mbufs to, say, 65536, what should I do?
2. I am receiving packets in an RX thread running on core 2 and freeing them in a thread that I launched with pthread_create() and that runs on core 0. Are there any implications of this kind of mechanism?
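For question 1, roughly what I have in mind is the following (a sketch; the 65536 count and the error check are only illustrative, assuming the usual l2fwd includes):

/* in the init path; l2fwd_pktmbuf_pool is the existing global pool pointer */
#define NB_MBUF            65536   /* raised from the l2fwd default of 8192 */
#define MEMPOOL_CACHE_SIZE 256

l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
        MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
if (l2fwd_pktmbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

For question 2, the hold-and-free mechanism is roughly this pattern (a sketch; the ring-based hand-off, its name and size are illustrative assumptions, not my actual code):

/* shared ring used to hand mbufs from the RX thread to the free thread */
struct rte_ring *hold_ring = rte_ring_create("hold_ring", 16384,
        rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);

/* RX thread (core 2): receive a burst and park the mbufs on the ring */
nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST);
for (i = 0; i < nb_rx; i++)
        rte_ring_enqueue(hold_ring, pkts_burst[i]);

/* free thread (launched with pthread_create, pinned to core 0):
 * after the hold period, drain the ring and release the mbufs */
void *m;
while (rte_ring_dequeue(hold_ring, &m) == 0)
        rte_pktmbuf_free(m);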
Thanks for the support and for keeping the forum active.
Regards,
Neeraj
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: [dpdk-users] Mechanism to increase MBUF allocation
2017-05-17 3:27 Neeraj Tandon (netandon)
@ 2017-05-18 20:21 ` Neeraj Tandon (netandon)
0 siblings, 0 replies; 3+ messages in thread
From: Neeraj Tandon (netandon) @ 2017-05-18 20:21 UTC (permalink / raw)
To: Neeraj Tandon (netandon), users
Hi,
Just for information, and to help anyone who comes across a similar issue:
the root cause was calling mbuf free from a non-EAL thread. The application
requires delayed buffer freeing, but doing it in a separate thread launched
via pthread_create() corrupts the mempool. Moving the mbuf free to an
EAL thread solves the problem.
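Roughly, the working arrangement looks like this (a sketch; the loop body,
ring and lcore id are illustrative, and force_quit is the l2fwd shutdown flag):

/* free_loop() runs on an EAL worker lcore instead of a raw pthread,
 * which is what resolved the mempool corruption described above */
static int
free_loop(void *arg)
{
        struct rte_ring *hold_ring = arg;
        void *m;

        while (!force_quit) {
                if (rte_ring_dequeue(hold_ring, &m) == 0)
                        rte_pktmbuf_free(m);
        }
        return 0;
}

/* in main(), after rte_eal_init(): start the loop on EAL lcore 1 */
rte_eal_remote_launch(free_loop, hold_ring, 1);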
Thanks,
Neeraj
On 5/16/17, 8:27 PM, "users on behalf of Neeraj Tandon (netandon)"
<users-bounces@dpdk.org on behalf of netandon@cisco.com> wrote:
>Hi,
>
>I was able to increase the number of mbufs and make it work after
>increasing the socket memory. However, I am now facing a segfault in the
>driver code. Intermittently, after receiving a few million packets at
>1 Gig line rate, the driver segfaults:
>(eth_igb_recv_pkts+0xd3)[0x5057a3]
>
>I have the net_e1000_igb driver with two 1 Gig ports on it.
>
>Thanks in advance for any help or pointers for debugging the driver.
>
>EAL: Detected 24 lcore(s)
>EAL: Probing VFIO support...
>EAL: VFIO support initialized
>EAL: PCI device 0000:01:00.0 on NUMA socket 0
>EAL: probe driver: 8086:1521 net_e1000_igb
>EAL: PCI device 0000:01:00.1 on NUMA socket 0
>EAL: probe driver: 8086:1521 net_e1000_igb
>
>Regards,
>Neeraj
>
>
>
>
>On 5/15/17, 12:14 AM, "users on behalf of Neeraj Tandon (netandon)"
><users-bounces@dpdk.org on behalf of netandon@cisco.com> wrote:
>
>>Hi,
>>
>>I have recently started using DPDK and have based my application on the
>>l2fwd sample application. In my application, I am holding buffers for a
>>period of time and freeing the mbufs in another thread. The default
>>number of mbufs is 8192. I have two questions regarding this:
>>
>>
>> 1. How do I increase the number of mbufs? Increasing NB_MBUF does not
>>have any effect, i.e. I lose packets when more than 8192 packets are sent
>>in a burst. I see the following used for creating the mbuf pool:
>>
>>/* create the mbuf pool */
>>l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>>MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>>rte_socket_id());
>>
>>If I want to increase the number of mbufs to, say, 65536, what should I do?
>>
>> 2. I am receiving packets in an RX thread running on core 2 and
>>freeing them in a thread that I launched with pthread_create() and that
>>runs on core 0. Are there any implications of this kind of mechanism?
>>
>>Thanks for the support and for keeping the forum active.
>>
>>Regards,
>>Neeraj
>>
>
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: [dpdk-users] Mechanism to increase MBUF allocation
@ 2017-05-17 3:27 Neeraj Tandon (netandon)
2017-05-18 20:21 ` Neeraj Tandon (netandon)
0 siblings, 1 reply; 3+ messages in thread
From: Neeraj Tandon (netandon) @ 2017-05-17 3:27 UTC (permalink / raw)
To: users
Hi,
I was able to increase the number of mbufs and make it work after
increasing the socket memory. However, I am now facing a segfault in the
driver code. Intermittently, after receiving a few million packets at
1 Gig line rate, the driver segfaults:
(eth_igb_recv_pkts+0xd3)[0x5057a3]
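(For reference, socket memory can be pre-allocated with the EAL
--socket-mem option; an illustrative invocation, not the exact one used here,
with the l2fwd binary name as a stand-in:

./build/l2fwd -c 0x7 -n 4 --socket-mem 1024 -- -p 0x3 -q 1
)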
I have the net_e1000_igb driver with two 1 Gig ports on it.
Thanks in advance for any help or pointers for debugging the driver.
EAL: Detected 24 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
Regards,
Neeraj
On 5/15/17, 12:14 AM, "users on behalf of Neeraj Tandon (netandon)"
<users-bounces@dpdk.org on behalf of netandon@cisco.com> wrote:
>Hi,
>
>I have recently started using DPDK and have based my application on the
>l2fwd sample application. In my application, I am holding buffers for a
>period of time and freeing the mbufs in another thread. The default
>number of mbufs is 8192. I have two questions regarding this:
>
>
> 1. How do I increase the number of mbufs? Increasing NB_MBUF does not
>have any effect, i.e. I lose packets when more than 8192 packets are sent
>in a burst. I see the following used for creating the mbuf pool:
>
>/* create the mbuf pool */
>l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
>MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
>rte_socket_id());
>
>If I want to increase the number of mbufs to, say, 65536, what should I do?
>
> 2. I am receiving packets in an RX thread running on core 2 and
>freeing them in a thread that I launched with pthread_create() and that
>runs on core 0. Are there any implications of this kind of mechanism?
>
>Thanks for the support and for keeping the forum active.
>
>Regards,
>Neeraj
>
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2017-05-18 20:21 UTC | newest]
Thread overview: 3+ messages
2017-05-15 7:14 [dpdk-users] Mechanism to increase MBUF allocation Neeraj Tandon (netandon)
2017-05-17 3:27 Neeraj Tandon (netandon)
2017-05-18 20:21 ` Neeraj Tandon (netandon)