Hi,

Once a packet has been transmitted using an API call such as rte_eth_tx_burst(), what is the preferred method of determining that the packet has been transmitted? I need to notify the application so that it can re-use/re-claim some resources that were associated with the mbuf via the mbuf's custom private data. This is a lazy re-claim; we're not after immediate per-packet events, just eventual reclaim.

I've found that rte_eth_tx_done_cleanup() isn't widely supported across PMDs, and when used with the Intel i40e PMD it doesn't flush out mbufs whose associated descriptors haven't yet been updated, due to the descriptor write-back granularity (setting that down to 1 still seems to leave at least one mbuf stuck in the PMD). The reclaim cycle works fine while I'm continually sending, but when I pause or stop sending I never get my mbufs back, so I'm wondering what the preferred way of achieving this is?

Many Thanks,
John A.
On Tue, 2 Mar 2021 10:26:24 +0000
John Alexander <John.Alexander@datapath.co.uk> wrote:
> Hi,
>
> Once a packet has been transmitted using an API call such as rte_eth_tx_burst(), what is the preferred method of determining that the packet has been transmitted? I need to notify the application so that it can re-use/re-claim some resources that were associated with the mbuf via the mbuf's custom private data. This is a lazy re-claim; we're not after immediate per-packet events, just eventual reclaim.
>
> I've found that rte_eth_tx_done_cleanup() isn't widely supported across PMDs, and when used with the Intel i40e PMD it doesn't flush out mbufs whose associated descriptors haven't yet been updated, due to the descriptor write-back granularity (setting that down to 1 still seems to leave at least one mbuf stuck in the PMD). The reclaim cycle works fine while I'm continually sending, but when I pause or stop sending I never get my mbufs back, so I'm wondering what the preferred way of achieving this is?
>
> Many Thanks,
> John A.
>
>
DPDK has no such mechanism built in.

You might be able to achieve something like this by creating a new mempool ops type and performing the notification in the mempool enqueue operation:

Transmit done -> rte_pktmbuf_free() -> rte_mempool_put() -> mempool_ops->enqueue
On 02/03/2021 11:26, John Alexander wrote:
> The reclaim cycle works fine while I'm continually sending, but when I pause or stop sending I never get my mbufs back, so I'm wondering what the preferred way of achieving this is?
Immediate reclaim is not something you want, as most drivers reclaim buffers in batches for performance reasons. If you need a buffer reclaimed right away, you would have to do some garbage collection yourself. Most NICs do not even ring a doorbell or update completion flags for every transmitted packet, so you would not know exactly when each one was sent.
If it's lazy re-claim, I guess it's not a problem?