DPDK patches and discussions
* [dpdk-dev] Unable to send Response packets to the same port
@ 2014-06-17 12:55 Tomasz K
  2014-06-17 13:44 ` Tomasz K
  0 siblings, 1 reply; 2+ messages in thread
From: Tomasz K @ 2014-06-17 12:55 UTC (permalink / raw)
  To: dev

Hello

We're currently testing an application based on the L2FWD example.

1. The application runs on a VM which has 2 VFs, each from a different PF

2. One core polls the RX queues of both VFs, does some simple message
processing, and forwards each message to the appropriate TX queue of the
other VF, so no port is ever accessed from more than one core (a rough
sketch of this loop is shown after the list)

3. However, processing of a message sometimes fails, and the code then
needs to send a Failure notification back to the port from which the
message was received.
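
For reference, the forwarding loop is roughly the following. This is only a
simplified sketch, not our exact code; NB_VF_PORTS, BURST_SIZE, poll_loop()
and process_msg() are placeholder names.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define NB_VF_PORTS 2
#define BURST_SIZE  32

static int process_msg(struct rte_mbuf *m);   /* our processing, stubbed here */

static void
poll_loop(void)
{
        struct rte_mbuf *pkts[BURST_SIZE];

        for (;;) {
                for (uint8_t port = 0; port < NB_VF_PORTS; port++) {
                        uint16_t nb_rx = rte_eth_rx_burst(port, 0, pkts,
                                                          BURST_SIZE);

                        for (uint16_t i = 0; i < nb_rx; i++) {
                                /* on success forward out the other VF,
                                 * on failure answer back on the same port
                                 * (TX return value handling omitted here) */
                                uint8_t dst = (process_msg(pkts[i]) == 0) ?
                                              (uint8_t)(port ^ 1) : port;
                                rte_eth_tx_burst(dst, 0, &pkts[i], 1);
                        }
                }
        }
}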

The issue is that sometimes these packets are not actually sent back, even
though rte_eth_tx_burst() reports success (we checked with tcpdump on the
peer). The core then keeps receiving new packets and keeps trying to send
Failure Indications until the mempool runs out of memory.

One thing to note is that our application prioritizes latency over
throughput, so it always invokes rte_eth_tx_burst() with only 1 packet to
send (we suspect this might be part of the issue).
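
For completeness, a one-packet send still has to handle the case where
rte_eth_tx_burst() accepts fewer packets than requested: any mbuf the burst
does not take must be freed by the caller, otherwise the mempool slowly
drains. A minimal sketch of such a send (send_one() is a placeholder name,
not our real function):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Send a single mbuf; if the TX queue does not accept it, free it so the
 * underlying mempool does not leak. Note that even accepted mbufs are only
 * returned to the mempool later, when the PMD cleans its TX ring. */
static void
send_one(uint8_t port, uint16_t queue, struct rte_mbuf *m)
{
        if (rte_eth_tx_burst(port, queue, &m, 1) == 0)
                rte_pktmbuf_free(m);
}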

Has anyone encountered such an issue before?

Host Setup:
DL380p Gen8 server, Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
Ubuntu 14.04: 3.13.0-24-generic
Intel 82599

VM Setup:
Ubuntu 14.04: 3.13.0-24-generic
2 VFs (each from a different PF)


* Re: [dpdk-dev] Unable to send Response packets to the same port
  2014-06-17 12:55 [dpdk-dev] Unable to send Response packets to the same port Tomasz K
@ 2014-06-17 13:44 ` Tomasz K
  0 siblings, 0 replies; 2+ messages in thread
From: Tomasz K @ 2014-06-17 13:44 UTC (permalink / raw)
  To: dev

Update:

I forgot to mention that in our case, due to some internal constraints, the
code always allocates a new mbuf for each received packet and adds an
additional overhead PDU to it (in both directions).
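
Roughly speaking, the per-packet handling looks like the sketch below.
struct pdu_hdr, wrap_with_pdu() and the field layout are illustrative only,
not our real definitions.

#include <string.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

struct pdu_hdr { uint32_t msg_type; uint32_t len; };

static struct rte_mbuf *
wrap_with_pdu(struct rte_mbuf *rx, struct rte_mempool *pdu_pool)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(pdu_pool);
        if (m == NULL)
                return NULL;

        /* copy the received payload into the new mbuf */
        uint16_t len = rte_pktmbuf_data_len(rx);
        char *dst = rte_pktmbuf_append(m, len);
        if (dst == NULL) {
                rte_pktmbuf_free(m);
                return NULL;
        }
        memcpy(dst, rte_pktmbuf_mtod(rx, char *), len);

        /* prepend the overhead PDU header */
        struct pdu_hdr *h =
                (struct pdu_hdr *)rte_pktmbuf_prepend(m, sizeof(*h));
        if (h == NULL) {
                rte_pktmbuf_free(m);
                return NULL;
        }
        h->msg_type = 0;
        h->len = len;

        rte_pktmbuf_free(rx);   /* original mbuf is no longer needed */
        return m;
}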

It seems the problem lies in using the same mempool for RX and for these
new allocations. We've tried creating another mempool for the packet
allocation, and it seems to be working fine.
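
For reference, the second pool is created the same way as the RX pool,
along the lines of the DPDK 1.x sample applications; the names and sizes
below are illustrative only.

#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_lcore.h>

#define NB_PDU_MBUF   8192
#define PDU_MBUF_SIZE (2048 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

/* Dedicated pool for the newly allocated packets, separate from the pool
 * the RX queues draw their mbufs from. */
static struct rte_mempool *
create_pdu_pool(void)
{
        return rte_mempool_create("pdu_pool", NB_PDU_MBUF, PDU_MBUF_SIZE,
                                  32,  /* per-lcore cache size */
                                  sizeof(struct rte_pktmbuf_pool_private),
                                  rte_pktmbuf_pool_init, NULL,
                                  rte_pktmbuf_init, NULL,
                                  rte_socket_id(), 0);
}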

Thanks
Tomasz Kasowicz



