DPDK patches and discussions
From: "Guo, Junfeng" <junfeng.guo@intel.com>
To: "ferruh.yigit@amd.com" <ferruh.yigit@amd.com>,
	"Richardson, Bruce" <bruce.richardson@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	Rushil Gupta <rushilg@google.com>
Subject: RE: [PATCH 1/1] net/gve: update base code for DQO
Date: Tue, 11 Apr 2023 06:51:09 +0000	[thread overview]
Message-ID: <DM6PR11MB37232F81FAFE158854D5C531E79A9@DM6PR11MB3723.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20230411045908.844901-2-rushilg@google.com>

Hi Ferruh & Bruce,

This patch contains a few lines of change to the MIT-licensed gve base code.
Note that no new files are added, just some minor code updates.

Do we need to ask for special approval from the Tech Board for this?
Please advise, and please also help review this patch. Thanks!

BTW, Google will also help replace all the MIT-licensed base code with
BSD-3-licensed code soon, which would make things easier.

Regards,
Junfeng

> -----Original Message-----
> From: Rushil Gupta <rushilg@google.com>
> Sent: Tuesday, April 11, 2023 12:59
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; ferruh.yigit@amd.com
> Cc: Richardson, Bruce <bruce.richardson@intel.com>; dev@dpdk.org;
> Rushil Gupta <rushilg@google.com>; Guo, Junfeng
> <junfeng.guo@intel.com>
> Subject: [PATCH 1/1] net/gve: update base code for DQO
> 
> Update gve base code to support DQO.
> 
> This patch is based on this:
> https://patchwork.dpdk.org/project/dpdk/list/?series=27647&state=*
> 
> Signed-off-by: Rushil Gupta <rushilg@google.com>
> Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
> ---
>  drivers/net/gve/base/gve.h          |  1 +
>  drivers/net/gve/base/gve_adminq.c   | 10 +++++-----
>  drivers/net/gve/base/gve_desc_dqo.h |  4 ----
>  3 files changed, 6 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/gve/base/gve.h b/drivers/net/gve/base/gve.h
> index 2dc4507acb..2b7cf7d99b 100644
> --- a/drivers/net/gve/base/gve.h
> +++ b/drivers/net/gve/base/gve.h
> @@ -7,6 +7,7 @@
>  #define _GVE_H_
> 
>  #include "gve_desc.h"
> +#include "gve_desc_dqo.h"
> 
>  #define GVE_VERSION		"1.3.0"
>  #define GVE_VERSION_PREFIX	"GVE-"
> diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
> index e745b709b2..e963f910a0 100644
> --- a/drivers/net/gve/base/gve_adminq.c
> +++ b/drivers/net/gve/base/gve_adminq.c
> @@ -497,11 +497,11 @@ static int gve_adminq_create_tx_queue(struct gve_priv *priv, u32 queue_index)
>  		cmd.create_tx_queue.queue_page_list_id = cpu_to_be32(qpl_id);
>  	} else {
>  		cmd.create_tx_queue.tx_ring_size =
> -			cpu_to_be16(txq->nb_tx_desc);
> +			cpu_to_be16(priv->tx_desc_cnt);
>  		cmd.create_tx_queue.tx_comp_ring_addr =
> -			cpu_to_be64(txq->complq->tx_ring_phys_addr);
> +			cpu_to_be64(txq->compl_ring_phys_addr);
>  		cmd.create_tx_queue.tx_comp_ring_size =
> -			cpu_to_be16(priv->tx_compq_size);
> +			cpu_to_be16(priv->tx_compq_size * DQO_TX_MULTIPLIER);
>  	}
> 
>  	return gve_adminq_issue_cmd(priv, &cmd);
> @@ -549,9 +549,9 @@ static int gve_adminq_create_rx_queue(struct gve_priv *priv, u32 queue_index)
>  		cmd.create_rx_queue.rx_ring_size =
>  			cpu_to_be16(priv->rx_desc_cnt);
>  		cmd.create_rx_queue.rx_desc_ring_addr =
> -			cpu_to_be64(rxq->rx_ring_phys_addr);
> +			cpu_to_be64(rxq->compl_ring_phys_addr);
>  		cmd.create_rx_queue.rx_data_ring_addr =
> -			cpu_to_be64(rxq->bufq->rx_ring_phys_addr);
> +			cpu_to_be64(rxq->rx_ring_phys_addr);
>  		cmd.create_rx_queue.packet_buffer_size =
>  			cpu_to_be16(rxq->rx_buf_len);
>  		cmd.create_rx_queue.rx_buff_ring_size =
> diff --git a/drivers/net/gve/base/gve_desc_dqo.h b/drivers/net/gve/base/gve_desc_dqo.h
> index ee1afdecb8..bb4a18d4d1 100644
> --- a/drivers/net/gve/base/gve_desc_dqo.h
> +++ b/drivers/net/gve/base/gve_desc_dqo.h
> @@ -13,10 +13,6 @@
>  #define GVE_TX_MAX_HDR_SIZE_DQO 255
>  #define GVE_TX_MIN_TSO_MSS_DQO 88
> 
> -#ifndef __LITTLE_ENDIAN_BITFIELD
> -#error "Only little endian supported"
> -#endif
> -
>  /* Basic TX descriptor (DTYPE 0x0C) */
>  struct gve_tx_pkt_desc_dqo {
>  	__le64 buf_addr;
> --
> 2.40.0.577.gac1e443424-goog
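
A note on the TX hunk above, for readers skimming the diff: the patch
provisions the hardware TX completion ring as a multiple of the software
completion queue size, presumably because a DQO packet can generate more
than one completion event. Below is a minimal C sketch of that sizing
logic; the DQO_TX_MULTIPLIER value and the helper name are assumptions
for illustration only (the real constant is defined elsewhere in the
driver, not in this patch).

  #include <stdint.h>

  /* Assumed value, for illustration only; the real constant is
   * defined elsewhere in the gve driver, not in this patch. */
  #define DQO_TX_MULTIPLIER 4

  /* Hypothetical helper mirroring the new sizing logic: the hardware
   * completion ring holds a multiple of the software completion
   * queue's entries. */
  static inline uint16_t
  gve_dqo_tx_comp_ring_size(uint16_t tx_compq_size)
  {
          return (uint16_t)(tx_compq_size * DQO_TX_MULTIPLIER);
  }

The admin queue command itself stores the value big-endian, which is why
the patch wraps the product in cpu_to_be16().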



Thread overview: 13+ messages
2023-04-11  4:59 [PATCH 0/1] *** Update drivers/net/gve/base code for DQO *** Rushil Gupta
2023-04-11  4:59 ` [PATCH 1/1] net/gve: update base code for DQO Rushil Gupta
2023-04-11  6:51   ` Guo, Junfeng [this message]
2023-04-12  8:50     ` Ferruh Yigit
2023-04-12  9:09       ` Guo, Junfeng
2023-04-12  9:34         ` Ferruh Yigit
2023-04-12  9:41           ` Guo, Junfeng
2023-04-12 15:42             ` Rushil Gupta
2023-04-12 16:02               ` Ferruh Yigit
2023-04-12 18:04                 ` Rushil Gupta
2023-04-14  5:17                   ` Rushil Gupta
2023-04-14 11:00                     ` Ferruh Yigit
2023-05-04  8:30   ` Ferruh Yigit
