DPDK usage discussions
From: Alvaro Karsz <alvaro.karsz@solid-run.com>
To: users@dpdk.org
Subject: Poll tx_pkt_burst callback
Date: Mon, 11 Apr 2022 16:45:21 +0300
Message-ID: <CAJs=3_Cr5tVDT7oRPSm+dDccNPVp4vRKwg5bOtVnETz_qdfEdw@mail.gmail.com> (raw)


Hello,
I'm developing a PMD for a NIC that is itself still in development.
(DPDK version 20.11)

The NIC should forward packets between a network device (based on dpaa2 HW)
and a PCIe device.

I'm running the l2fwd application in order to develop the PMD, with
the following command:

./dpdk-l2fwd -c 0x3 --vdev=my_pmd0 -- -p 0x3 -T 0

The application uses 2 lcores:

   - Lcore 0: RX port 0 TX port 1.
   - Lcore 1: RX port 1 TX port 0.

Port 0 is the physical one; the net_dpaa2 PMD is attached to this
port.

Port 1 is the virtual one (the PCIe device), and my PMD is attached to
it.

The problem I'm experiencing is related to the tx_pkt_burst callback.

When the tx_pkt_burst callback is called, I start the DMA operations
that write the packets, but then I need to wait for those DMA
operations to finish in order to do some more work.

I don't want to block the lcore in my tx_pkt_burst function waiting
for the DMA operation to finish; I want to release it as soon as
possible, so it can go back to polling the physical port.

Another issue is that I need to poll the PCIe device for newly
available write buffers.

I would like to handle all the TX-related software work on a single
core: polling for new write buffers, starting the DMA operations as
new packets arrive, polling the DMA until each operation finishes, and
then doing some more software work.
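Roughly, I picture that single TX core running a small state machine.
The sketch below is plain C with a toy "DMA" that completes after a
few polls; the tx_ctx struct and all the names in it are placeholders
of mine, not DPDK APIs:

```c
#include <assert.h>

/* Toy model of the TX pipeline: poll for a write buffer, start a DMA
 * operation, poll it for completion, do the remaining software work.
 * Everything here stands in for the real hardware interaction. */
enum dma_state { DMA_IDLE, DMA_RUNNING, DMA_DONE };

struct tx_ctx {
	enum dma_state dma;
	int polls_left;	/* polls until the fake DMA finishes */
	int completed;	/* packets fully processed */
	int pending;	/* packets waiting for a DMA slot */
};

/* One iteration of the single-core TX loop. */
static void
tx_poll_once(struct tx_ctx *c)
{
	switch (c->dma) {
	case DMA_IDLE:
		/* 1. Poll the device for a free write buffer and, if a
		 *    packet is pending, start a DMA operation for it. */
		if (c->pending > 0) {
			c->pending--;
			c->dma = DMA_RUNNING;
			c->polls_left = 3;	/* fake DMA latency */
		}
		break;
	case DMA_RUNNING:
		/* 2. Poll the DMA engine for completion. */
		if (--c->polls_left == 0)
			c->dma = DMA_DONE;
		break;
	case DMA_DONE:
		/* 3. Do the post-DMA software work, then return to
		 *    step 1 for the next packet. */
		c->completed++;
		c->dma = DMA_IDLE;
		break;
	}
}
```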

Is there a "correct" way to handle this?

What I've tried:
tx_pkt_burst just stores the new packets in an array, and rx_pkt_burst
handles all the TX tasks (polling for new write buffers, starting the
DMA operations, and polling the DMA operations' status).
This works, but:

   - The workload is not symmetric: the RX lcore does all the "heavy
   lifting", while the TX lcore does almost nothing.
   - Spinlocks are needed, since two lcores access the same data.
   - Cache usage is less efficient, since the TX data is touched by
   both cores.
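Simplified, that split looks roughly like this (a plain C sketch, with
a stub mbuf struct and a pthread mutex standing in for the real
rte_mbuf and rte_spinlock; TXQ_SIZE and the helper names are mine):

```c
#include <pthread.h>

/* Stand-in for rte_mbuf -- just enough to show the shape. */
struct mbuf { int id; };

#define TXQ_SIZE 512

/* Shared between the TX lcore (producer) and the RX lcore (consumer),
 * hence the lock. */
struct txq {
	pthread_mutex_t lock;
	struct mbuf *pkts[TXQ_SIZE];
	unsigned int count;
};

/* tx_pkt_burst only stores the packets; returns how many it stored. */
static unsigned int
my_tx_pkt_burst(struct txq *q, struct mbuf **tx_pkts,
		unsigned int nb_pkts)
{
	unsigned int i = 0;

	pthread_mutex_lock(&q->lock);
	while (i < nb_pkts && q->count < TXQ_SIZE)
		q->pkts[q->count++] = tx_pkts[i++];
	pthread_mutex_unlock(&q->lock);

	return i;
}

/* Called from the rx_pkt_burst path: drains the stored packets and
 * (in the real PMD) starts the DMA operations for them. */
static unsigned int
drain_txq(struct txq *q)
{
	unsigned int n;

	pthread_mutex_lock(&q->lock);
	n = q->count;
	q->count = 0;	/* real code would kick off DMA per mbuf here */
	pthread_mutex_unlock(&q->lock);

	return n;
}
```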



What I've thought of:

   - Using rte_eal_alarm_set to periodically run a TX polling
   function, for example:

uint16_t my_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts,
			 uint16_t nb_pkts)
{
	/* Save the mbufs in an array and return the number of saved mbufs */
}

void tx_poll(void *param)
{
	/* Check for new write buffers */

	/* Start DMA operations for new mbufs from my_tx_pkt_burst */

	/* Check the DMA status of previous operations, and do
	 * some more work if needed
	 */

	/* Call this function again in 1us */
	rte_eal_alarm_set(1, tx_poll, param);
}


Best regards,
Alvaro

