From: Thomas Monjalon <thomas@monjalon.net>
To: dev@dpdk.org
Cc: Ferruh Yigit <ferruh.yigit@intel.com>,
"Yigit, Ferruh" <ferruh.yigit@linux.intel.com>,
Nilanjan Sarkar <nsarkar@sandvine.com>,
Andrew Rybchenko <arybchenko@solarflare.com>,
Konstantin Ananyev <konstantin.ananyev@intel.com>,
Bruce Richardson <bruce.richardson@intel.com>,
"Kinsella, Ray" <ray.kinsella@intel.com>,
Olivier MATZ <olivier.matz@6wind.com>,
Jerin Jacob <jerin.jacob@caviumnetworks.com>
Subject: Re: [dpdk-dev] [PATCH] eal: added new api to only enqueue a packet in tx buffer
Date: Mon, 11 Nov 2019 18:30:12 +0100 [thread overview]
Message-ID: <4483786.soQ6Bse14g@xps> (raw)
In-Reply-To: <2e01890d-c21e-a7de-8674-eb2ab139aa2d@intel.com>
11/11/2019 17:56, Ferruh Yigit:
> On 10/18/2019 5:24 PM, Yigit, Ferruh wrote:
> > On 8/8/2019 1:28 PM, Nilanjan Sarkar wrote:
> >> This API is similar to `rte_eth_tx_buffer`, except that it
> >> does not attempt to flush the buffer when the buffer is full.
> >> The advantage is that this API does not need a port ID and
> >> queue ID. When the port ID and queue ID are shared between threads,
> >> the application cannot buffer a packet until it gets access
> >> to the port and queue. So this function segregates the buffering
> >> job from the flushing job and thus removes the dependency on port and queue.
> >
> > Hi Nilanjan,
> >
> > Sorry, the patch seems to have been missed because of the misleading module
> > info in the patch title: this is not an 'eal' patch but an 'ethdev' patch ...
> >
> > Related to the API, it looks like the target is to reduce the critical
> > section, which looks reasonable to me.
> >
> > A concern is related to making this function inline: we are discussing
> > moving existing inline functions to regular functions. That may have a
> > performance impact, but if the drop is acceptable, what about making this
> > an ethdev API?
> >
>
> There was no response to the suggestion of making the new proposed API a
> proper (non-inline) function.
>
> @Thomas, @Andrew, et al,
>
> What do you think about a new static inline ethdev API?
>
> >> +static __rte_always_inline int
> >> +rte_eth_tx_enqueue(struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *tx_pkt)
> >> +{
> >> +	if (buffer->length < buffer->size) {
> >> +		buffer->pkts[buffer->length++] = tx_pkt;
> >> +		return 0;
> >> +	}
> >> +
> >> +	return -1;
> >> +}
It looks reasonable.
But the function name should include _buffer_.
What about rte_eth_tx_buffer_enqueue?
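For illustration, a minimal usage sketch of how the buffering/flushing
split could look, assuming the rte_eth_tx_buffer_enqueue name proposed
above. The flush side uses the existing rte_eth_tx_buffer_flush; the
worker_enqueue/owner_flush helpers are hypothetical, and each worker is
assumed to own its buffer (the buffer itself is not thread-safe):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	/* A worker only touches its own buffer; no port_id/queue_id
	 * (and so no contention on the port/queue) is needed here. */
	static void
	worker_enqueue(struct rte_eth_dev_tx_buffer *buf, struct rte_mbuf *m)
	{
		if (rte_eth_tx_buffer_enqueue(buf, m) < 0)
			rte_pktmbuf_free(m); /* buffer full: drop, or retry later */
	}

	/* The thread with access to the port/queue flushes with the
	 * existing API, which transmits the buffered packets and passes
	 * any unsent ones to the buffer's registered error callback. */
	static void
	owner_flush(uint16_t port_id, uint16_t queue_id,
		    struct rte_eth_dev_tx_buffer *buf)
	{
		rte_eth_tx_buffer_flush(port_id, queue_id, buf);
	}

This keeps the critical section limited to the flush call: the enqueue
path never needs access to the port and queue.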