DPDK patches and discussions
From: Adrien Mazarguil <adrien.mazarguil@6wind.com>
To: Ilya Matveychikov <matvejchikov@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] A question about (poor) rte_ethdev internal rx/tx callbacks design
Date: Mon, 13 Nov 2017 18:15:12 +0100	[thread overview]
Message-ID: <20171113171512.GV24849@6wind.com> (raw)
In-Reply-To: <5F2502C3-99CF-4BE1-9DEC-364C5E636061@gmail.com>

On Mon, Nov 13, 2017 at 02:56:23PM +0400, Ilya Matveychikov wrote:
> 
> > On Nov 13, 2017, at 2:39 PM, Adrien Mazarguil <adrien.mazarguil@6wind.com> wrote:
> > 
> > On Sat, Nov 11, 2017 at 09:18:45PM +0400, Ilya Matveychikov wrote:
> >> Folks,
> >> 
> >> Are you serious about this:
> >> 
> >> typedef uint16_t (*eth_rx_burst_t)(void *rxq,
> >> 				   struct rte_mbuf **rx_pkts,
> >> 				   uint16_t nb_pkts);
> >> typedef uint16_t (*eth_tx_burst_t)(void *txq,
> >> 				   struct rte_mbuf **tx_pkts,
> >> 				   uint16_t nb_pkts);
> >> 
> >> I’m not surprised that every PMD stores port_id in each and every queue, since having just the queue as an argument doesn’t allow getting back to the device. So the question is: why not use something like:
> >> 
> >> typedef uint16_t (*eth_rx_burst_t)(void *dev, uint16_t queue_id,
> >> 				   struct rte_mbuf **rx_pkts,
> >> 				   uint16_t nb_pkts);
> >> typedef uint16_t (*eth_tx_burst_t)(void *dev, uint16_t queue_id,
> >> 				   struct rte_mbuf **tx_pkts,
> >> 				   uint16_t nb_pkts);
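
For illustration, this is the sort of thing the current prototype leads
to; a rough sketch with invented field names, not taken from any real
driver:

	#include <stdint.h>

	struct rte_mbuf; /* only used through pointers here */

	/* Hypothetical PMD Rx queue: since the burst callback receives
	 * nothing but this structure, the driver keeps a way back to
	 * the owning port in every queue it creates. */
	struct pmd_rx_queue {
		struct rte_mbuf **sw_ring; /* fast path: mbuf bookkeeping */
		uint16_t nb_desc;          /* fast path: ring size */
		uint16_t next_to_clean;    /* fast path: consumer index */
		uint16_t port_id;          /* back-reference, duplicated in
		                            * each and every queue */
	};
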
> > 
> > I assume it's because the rte_eth_[rt]x_burst() wrappers already pay the
> > price for that indirection; doing it twice would be redundant.
> 
> No need to do it twice, agreed. We could pass the dev pointer as well as the
> queue pointer, not just the queue’s index.
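
For reference, this is roughly what the public wrapper already does once
per burst (simplified from rte_ethdev.h, debug checks and the RX callback
hooks stripped out):

	static inline uint16_t
	rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
			 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
	{
		/* one lookup from port_id to the device... */
		struct rte_eth_dev *dev = &rte_eth_devices[port_id];

		/* ...and one indirect call handing the PMD its queue pointer
		 * directly; passing dev/queue_id down again would only repeat
		 * the work done here. */
		return (*dev->rx_pkt_burst)(dev->data->rx_queues[queue_id],
					    rx_pkts, nb_pkts);
	}
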
> 
> > 
> > Basically the cost of storing a back-pointer to dev or a queue index in each
> > Rx/Tx queue structure is minor compared to saving a couple of CPU cycles
> > wherever we can.
> 
> Not sure about that. More data to store means more cache space occupied. Note that every queue
> carries at least 4 bytes more than it actually needs, and RTE_MAX_QUEUES_PER_PORT
> defaults to 1024, so we may end up with 4 KB extra for each port....

Note that queues are only allocated if requested by the application, so
there's really not much overhead involved.

Also to echo Konstantin's reply and clarify mine, PMDs normally do not
access this structure from their data plane. This pointer, if needed, is
normally stored away from hot regions accessed during TX/RX, usually at the
end of TX/RX structures and only for the convenience of management
operations. It therefore has no measurable impact on the CPU cache.
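
Schematically, the layout in question looks something like this (an
invented example rather than any particular driver):

	#include <stdint.h>

	struct rte_mbuf;     /* used through pointers only */
	struct rte_eth_dev;  /* used through pointers only */

	/* Hypothetical PMD Tx queue: fields touched on every burst are
	 * grouped at the front; the device back-pointer is used only by
	 * management operations (stats, queue release, ...) and sits at
	 * the cold tail, so the data path never pulls it into cache. */
	struct pmd_tx_queue {
		/* hot: read/written by the burst function */
		volatile uint32_t *tail_reg;   /* doorbell register */
		struct rte_mbuf **sw_ring;     /* in-flight mbufs */
		uint16_t nb_desc;
		uint16_t next_to_clean;

		/* cold: control/management path only */
		struct rte_eth_dev *dev;       /* back-pointer for convenience */
	};
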

-- 
Adrien Mazarguil
6WIND

Thread overview: 8+ messages
2017-11-11 17:18 Ilya Matveychikov
2017-11-12  2:43 ` Thomas Monjalon
2017-11-13 10:39 ` Adrien Mazarguil
2017-11-13 10:56   ` Ilya Matveychikov
2017-11-13 17:15     ` Adrien Mazarguil [this message]
2017-11-13 19:33       ` Ilya Matveychikov
2017-11-14  6:24         ` Andrew Rybchenko
2017-11-13 10:58   ` Ananyev, Konstantin
