DPDK usage discussions
* [dpdk-users] Transmit Completion
@ 2021-03-02 10:26 John Alexander
  2021-03-02 15:53 ` Stephen Hemminger
  2021-03-02 16:22 ` Tom Barbette
  0 siblings, 2 replies; 3+ messages in thread
From: John Alexander @ 2021-03-02 10:26 UTC (permalink / raw)
  To: users

Hi,

Once a packet has been queued for transmission with an API call such as rte_eth_tx_burst(), what is the preferred method of determining that it has actually been sent?  I need to notify the application so that it can re-use/re-claim resources associated with the mbuf via the mbuf's custom private data.  This is a lazy reclaim; we're not after immediate per-packet events, just eventual reclamation.

I've been finding that rte_eth_tx_done_cleanup() isn't widely supported across PMDs, and when used with the Intel i40e PMD it doesn't flush out mbufs whose descriptors haven't yet been updated, due to the descriptor write-back granularity (setting that down to 1 still seems to leave at least 1 mbuf stuck in the PMD).  The reclaim cycle works fine while I'm continually sending, but when I pause or stop sending I never get my mbufs back, so I'm wondering what the preferred way of achieving this is?

Many Thanks,
John A.



^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-users] Transmit Completion
  2021-03-02 10:26 [dpdk-users] Transmit Completion John Alexander
@ 2021-03-02 15:53 ` Stephen Hemminger
  2021-03-02 16:22 ` Tom Barbette
  1 sibling, 0 replies; 3+ messages in thread
From: Stephen Hemminger @ 2021-03-02 15:53 UTC (permalink / raw)
  To: John Alexander; +Cc: users

On Tue, 2 Mar 2021 10:26:24 +0000
John Alexander <John.Alexander@datapath.co.uk> wrote:

> Hi,
> 
> Once a packet has been queued for transmission with an API call such as rte_eth_tx_burst(), what is the preferred method of determining that it has actually been sent?  I need to notify the application so that it can re-use/re-claim resources associated with the mbuf via the mbuf's custom private data.  This is a lazy reclaim; we're not after immediate per-packet events, just eventual reclamation.
> 
> I've been finding that rte_eth_tx_done_cleanup() isn't widely supported across PMDs, and when used with the Intel i40e PMD it doesn't flush out mbufs whose descriptors haven't yet been updated, due to the descriptor write-back granularity (setting that down to 1 still seems to leave at least 1 mbuf stuck in the PMD).  The reclaim cycle works fine while I'm continually sending, but when I pause or stop sending I never get my mbufs back, so I'm wondering what the preferred way of achieving this is?
> 
> Many Thanks,
> John A.
> 
> 

DPDK has no such mechanism built in.
You might be able to do something like this by creating a new memory pool
type and doing an action in the mempool enqueue operation.

Transmit done -> rte_pktmbuf_free -> rte_mempool_put
calls mempool_ops->enqueue



* Re: [dpdk-users] Transmit Completion
  2021-03-02 10:26 [dpdk-users] Transmit Completion John Alexander
  2021-03-02 15:53 ` Stephen Hemminger
@ 2021-03-02 16:22 ` Tom Barbette
  1 sibling, 0 replies; 3+ messages in thread
From: Tom Barbette @ 2021-03-02 16:22 UTC (permalink / raw)
  To: John Alexander, users

Le 02/03/2021 à 11:26, John Alexander a écrit :
> The reclaim cycle works fine when I'm continually sending, when I pause or stop sending, I never get my mbufs back so I'm wondering what the preferred way of achieving this is?

This is not something you want, as most drivers reclaim buffers in
batches for performance reasons. If you want to ensure a buffer is
reclaimed right away, you would need to do some garbage collection
yourself. Most NICs do not even ring a doorbell or update completion
flags for every transmitted packet, so you would not know exactly when
each one was sent.

If it's lazy re-claim, I guess it's not a problem?



