DPDK patches and discussions
* [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool
@ 2019-04-10 13:10 Paras Jha
  2019-04-30 14:59 ` Ferruh Yigit
  0 siblings, 1 reply; 4+ messages in thread
From: Paras Jha @ 2019-04-10 13:10 UTC (permalink / raw)
  To: dev

Hi all,

I've been chasing down a strange issue related to rte_eth_tx_burst.

My application calls rte_kni_rx_burst, which allocates from a discrete mbuf
pool using kni_allocate_mbufs. That traffic is immediately sent to
rte_eth_tx_burst, which does not seem to free the mbufs even upon
successful completion.

My application follows the standard model of freeing mbufs only when the
number of mbufs actually transmitted is less than the number received -
however, after sending as many mbufs as there are in the pool, I get
"KNI: Out of memory" soon afterwards when calling rte_kni_rx_burst.
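
For reference, a minimal sketch of the pattern I mean (the function name
and all variables here are placeholders for my actual setup):

    #include <rte_ethdev.h>
    #include <rte_kni.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Forward kernel-originated packets out of the NIC; free only the
     * mbufs that rte_eth_tx_burst() did not accept. */
    static void
    kni_to_eth(struct rte_kni *kni, uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        unsigned nb_rx = rte_kni_rx_burst(kni, pkts, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts,
                                          (uint16_t)nb_rx);

        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }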

My concern is that if I free all mbufs allocated by the KNI during
rte_kni_rx_burst, the application works as intended without memory leaks,
even though this goes against how the actual PMDs work. Is this a bug, or
intended behavior? The documented examples on the DPDK website seem to
free mbufs only when the send fails, even in the KNI example.

B/R

* Re: [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool
  2019-04-10 13:10 [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool Paras Jha
@ 2019-04-30 14:59 ` Ferruh Yigit
  2019-04-30 15:37   ` Paras Jha
  0 siblings, 1 reply; 4+ messages in thread
From: Ferruh Yigit @ 2019-04-30 14:59 UTC (permalink / raw)
  To: Paras Jha, dev

On 4/10/2019 2:10 PM, Paras Jha wrote:
> Hi all,
> 
> I've been chasing down a strange issue related to rte_eth_tx_burst.
> 
> My application calls rte_kni_rx_burst, which allocates from a discrete mbuf
> pool using kni_allocate_mbufs. That traffic is immediately sent to
> rte_eth_tx_burst, which does not seem to free the mbufs even upon
> successful completion.
> 
> My application follows the standard model of freeing mbufs only when the
> number of mbufs actually transmitted is less than the number received -
> however, after sending as many mbufs as there are in the pool, I get
> "KNI: Out of memory" soon afterwards when calling rte_kni_rx_burst.
> 
> My concern is that if I free all mbufs allocated by the KNI during
> rte_kni_rx_burst, the application works as intended without memory leaks,
> even though this goes against how the actual PMDs work. Is this a bug, or
> intended behavior? The documented examples on the DPDK website seem to
> free mbufs only when the send fails, even in the KNI example.

The behavior in the KNI sample application is the correct thing to do:
free only the mbufs that failed to Tx. As far as I understand you are
doing the same thing as the KNI sample app, so it should be OK; the
sample app works fine.

'rte_kni_tx_burst()' sends packets to the kernel, so it shouldn't free the
mbufs - userspace can't know when the kernel side will be done with them.
When the kernel side is done, it puts the mbufs into the 'free_q' FIFO;
'kni_free_mbufs()' pulls the mbufs from the 'free_q' FIFO and frees them.
So the mbufs sent via 'rte_kni_tx_burst()' are freed asynchronously.
I hope this helps.
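
In other words, the egress loop looks roughly like this (a sketch only;
BURST_SIZE, port_id, and kni are placeholders, as in the earlier sketch):

    /* NIC -> kernel: mbufs accepted by rte_kni_tx_burst() must not be
     * freed here; the kernel hands them back through the 'free_q' FIFO
     * and librte_kni frees them later. */
    static void
    eth_to_kni(uint16_t port_id, struct rte_kni *kni)
    {
        struct rte_mbuf *pkts[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, BURST_SIZE);
        unsigned nb_tx = rte_kni_tx_burst(kni, pkts, nb_rx);

        /* Only the rejected tail is still ours to free. */
        for (unsigned i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);

        rte_kni_handle_request(kni); /* service kernel control requests */
    }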

* Re: [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool
  2019-04-30 14:59 ` Ferruh Yigit
@ 2019-04-30 15:37   ` Paras Jha
  2019-04-30 15:39     ` Paras Jha
  0 siblings, 1 reply; 4+ messages in thread
From: Paras Jha @ 2019-04-30 15:37 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev

Hi,

This issue seems to be due to how the PMD frees mbufs. If the PMD is
configured with pool X, KNI is configured with pool Y, and pool Y has far
fewer mbufs available than pool X, then when an application calls tx_burst
on the PMD, the mbufs are not freed, as the threshold for freeing seems to
be based on how many mbufs are in the pool the mbuf originated from.
Setting the number of mbufs in pools X and Y to be equal resolved the
issue, even with arbitrarily small or large counts. This seems like a
"gotcha" that isn't immediately clear without experimentation.

On Tue, Apr 30, 2019 at 10:59 AM Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 4/10/2019 2:10 PM, Paras Jha wrote:
> > Hi all,
> >
> > I've been chasing down a strange issue related to rte_eth_tx_burst.
> >
> > My application calls rte_kni_rx_burst, which allocates from a discrete
> > mbuf pool using kni_allocate_mbufs. That traffic is immediately sent to
> > rte_eth_tx_burst, which does not seem to free the mbufs even upon
> > successful completion.
> >
> > My application follows the standard model of freeing mbufs only when the
> > number of mbufs actually transmitted is less than the number received -
> > however, after sending as many mbufs as there are in the pool, I get
> > "KNI: Out of memory" soon afterwards when calling rte_kni_rx_burst.
> >
> > My concern is that if I free all mbufs allocated by the KNI during
> > rte_kni_rx_burst, the application works as intended without memory
> > leaks, even though this goes against how the actual PMDs work. Is this a
> > bug, or intended behavior? The documented examples on the DPDK website
> > seem to free mbufs only when the send fails, even in the KNI example.
>
> The behavior in the KNI sample application is the correct thing to do:
> free only the mbufs that failed to Tx. As far as I understand you are
> doing the same thing as the KNI sample app, so it should be OK; the
> sample app works fine.
>
> 'rte_kni_tx_burst()' sends packets to the kernel, so it shouldn't free the
> mbufs - userspace can't know when the kernel side will be done with them.
> When the kernel side is done, it puts the mbufs into the 'free_q' FIFO;
> 'kni_free_mbufs()' pulls the mbufs from the 'free_q' FIFO and frees them.
> So the mbufs sent via 'rte_kni_tx_burst()' are freed asynchronously.
> I hope this helps.
>
>

* Re: [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool
  2019-04-30 15:37   ` Paras Jha
@ 2019-04-30 15:39     ` Paras Jha
  0 siblings, 0 replies; 4+ messages in thread
From: Paras Jha @ 2019-04-30 15:39 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: dev

Sorry, I meant that "the mbufs will not be freed, as the threshold for
freeing seems not to be based on the pool the mbuf originated from, but
on the pool the PMD is configured to use".
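
If the PMD's lazy TX free is indeed the mechanism, one knob to experiment
with is the TX free threshold. A sketch, with a placeholder port_id and
illustrative values (tx_free_thresh semantics are driver-specific):

    /* Inside port setup, after rte_eth_dev_configure(). The threshold
     * of 32 and the 512-descriptor ring are illustrative only. */
    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txconf;

    rte_eth_dev_info_get(port_id, &dev_info);
    txconf = dev_info.default_txconf;
    txconf.tx_free_thresh = 32; /* reclaim completed TX mbufs sooner */

    rte_eth_tx_queue_setup(port_id, 0, 512,
            rte_eth_dev_socket_id(port_id), &txconf);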

On Tue, Apr 30, 2019 at 11:37 AM Paras Jha <dreadiscool@gmail.com> wrote:

> Hi,
>
> This issue seems to be due to how the PMD frees mbufs. If the PMD is
> configured with pool X, KNI is configured with pool Y, and pool Y has far
> fewer mbufs available than pool X, then when an application calls tx_burst
> on the PMD, the mbufs are not freed, as the threshold for freeing seems to
> be based on how many mbufs are in the pool the mbuf originated from.
> Setting the number of mbufs in pools X and Y to be equal resolved the
> issue, even with arbitrarily small or large counts. This seems like a
> "gotcha" that isn't immediately clear without experimentation.
>
> On Tue, Apr 30, 2019 at 10:59 AM Ferruh Yigit <ferruh.yigit@intel.com>
> wrote:
>
>> On 4/10/2019 2:10 PM, Paras Jha wrote:
>> > Hi all,
>> >
>> > I've been chasing down a strange issue related to rte_eth_tx_burst.
>> >
>> > My application calls rte_kni_rx_burst, which allocates from a discrete
>> > mbuf pool using kni_allocate_mbufs. That traffic is immediately sent to
>> > rte_eth_tx_burst, which does not seem to free the mbufs even upon
>> > successful completion.
>> >
>> > My application follows the standard model of freeing mbufs only when
>> > the number of mbufs actually transmitted is less than the number
>> > received - however, after sending as many mbufs as there are in the
>> > pool, I get "KNI: Out of memory" soon afterwards when calling
>> > rte_kni_rx_burst.
>> >
>> > My concern is that if I free all mbufs allocated by the KNI during
>> > rte_kni_rx_burst, the application works as intended without memory
>> > leaks, even though this goes against how the actual PMDs work. Is this
>> > a bug, or intended behavior? The documented examples on the DPDK
>> > website seem to free mbufs only when the send fails, even in the KNI
>> > example.
>>
>> The behavior in the KNI sample application is the correct thing to do:
>> free only the mbufs that failed to Tx. As far as I understand you are
>> doing the same thing as the KNI sample app, so it should be OK; the
>> sample app works fine.
>>
>> 'rte_kni_tx_burst()' sends packets to the kernel, so it shouldn't free
>> the mbufs - userspace can't know when the kernel side will be done with
>> them.
>> When the kernel side is done, it puts the mbufs into the 'free_q' FIFO;
>> 'kni_free_mbufs()' pulls the mbufs from the 'free_q' FIFO and frees them.
>> So the mbufs sent via 'rte_kni_tx_burst()' are freed asynchronously.
>> I hope this helps.
>>
>>

end of thread

Thread overview: 4+ messages
2019-04-10 13:10 [dpdk-dev] rte_eth_tx_burst improperly freeing mbufs from KNI mbuf pool Paras Jha
2019-04-30 14:59 ` Ferruh Yigit
2019-04-30 15:37   ` Paras Jha
2019-04-30 15:39     ` Paras Jha
