DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@amd.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: dev@dpdk.org
Subject: Re: [PATCH v4] tap: do not duplicate fd's
Date: Wed, 24 Apr 2024 17:57:46 +0100	[thread overview]
Message-ID: <fbdc8c75-1c72-44a7-a5aa-8a817b6e8154@amd.com> (raw)
In-Reply-To: <20240311194543.39690-1-stephen@networkplumber.org>

On 3/11/2024 7:45 PM, Stephen Hemminger wrote:
> The TAP device can use the same file descriptor for both rx and tx queues.
> This allows up to 8 queues (versus 4).
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> v4 - fix typos reported by check patch
> 
>  drivers/net/tap/meson.build   |   2 +-
>  drivers/net/tap/rte_eth_tap.c | 197 +++++++++++++++-------------------
>  drivers/net/tap/rte_eth_tap.h |   3 +-
>  drivers/net/tap/tap_flow.c    |   3 +-
>  drivers/net/tap/tap_intr.c    |   7 +-
>  5 files changed, 92 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/net/tap/meson.build b/drivers/net/tap/meson.build
> index 5099ccdff11b..9cd124d53e23 100644
> --- a/drivers/net/tap/meson.build
> +++ b/drivers/net/tap/meson.build
> @@ -16,7 +16,7 @@ sources = files(
>  
>  deps = ['bus_vdev', 'gso', 'hash']
>  
> -cflags += '-DTAP_MAX_QUEUES=16'
> +cflags += '-DTAP_MAX_QUEUES=8'
>  

OK to merge file descriptors instead of duplicating them.

But this 4-queue limitation applies only to the multi-process case,
right? If the user plans to run with the primary process only, this
change reduces the supported queue number.

Does it make sense to enforce this limitation only for the secondary
process case and keep TAP_MAX_QUEUES the same?
That way the supported queue number would be 8 for the multi-process
use case, and would remain 16 for the primary-only use case.

<...>

> @@ -1482,52 +1480,34 @@ tap_setup_queue(struct rte_eth_dev *dev,
>  		uint16_t qid,
>  		int is_rx)
>  {
> -	int ret;
> -	int *fd;
> -	int *other_fd;
> -	const char *dir;
> +	int fd, ret;
>  	struct pmd_internals *pmd = dev->data->dev_private;
>  	struct pmd_process_private *process_private = dev->process_private;
>  	struct rx_queue *rx = &internals->rxq[qid];
>  	struct tx_queue *tx = &internals->txq[qid];
> -	struct rte_gso_ctx *gso_ctx;
> +	struct rte_gso_ctx *gso_ctx = NULL;
> +	const char *dir = is_rx ? "rx" : "tx";
>  
> -	if (is_rx) {
> -		fd = &process_private->rxq_fds[qid];
> -		other_fd = &process_private->txq_fds[qid];
> -		dir = "rx";
> -		gso_ctx = NULL;
> -	} else {
> -		fd = &process_private->txq_fds[qid];
> -		other_fd = &process_private->rxq_fds[qid];
> -		dir = "tx";
> +	if (is_rx)
>  		gso_ctx = &tx->gso_ctx;
>

Should this be 'if (!is_rx)' ?

  reply	other threads:[~2024-04-24 16:57 UTC|newest]

Thread overview: 5+ messages
     [not found] <0240308185401.150651-1-stephen@networkplumber.org>
2024-03-11 19:45 ` Stephen Hemminger
2024-04-24 16:57   ` Ferruh Yigit [this message]
2024-04-24 19:04     ` Stephen Hemminger
2024-04-25 12:50       ` Ferruh Yigit
