DPDK patches and discussions
From: "Wu, Wenjun1" <wenjun1.wu@intel.com>
To: "Su, Simei" <simei.su@intel.com>,
	"Wu, Jingjing" <jingjing.wu@intel.com>,
	 "Xing, Beilei" <beilei.xing@intel.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [PATCH] common/idpf: rework single queue Tx function
Date: Fri, 25 Aug 2023 07:48:46 +0000	[thread overview]
Message-ID: <IA0PR11MB79557E9E9A41343A3F7AF1BDDFE3A@IA0PR11MB7955.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20230825072106.1819603-1-simei.su@intel.com>



> -----Original Message-----
> From: Su, Simei <simei.su@intel.com>
> Sent: Friday, August 25, 2023 3:21 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Wu, Wenjun1 <wenjun1.wu@intel.com>; Su, Simei
> <simei.su@intel.com>
> Subject: [PATCH] common/idpf: rework single queue Tx function
> 
> This patch replaces flex Tx descriptor structure with base Tx descriptor.
> 
> Signed-off-by: Simei Su <simei.su@intel.com>
> ---
>  drivers/common/idpf/idpf_common_rxtx.c        | 72 +++++++++++++------
>  drivers/common/idpf/idpf_common_rxtx.h        |  2 +-
>  drivers/common/idpf/idpf_common_rxtx_avx512.c | 36 +++++-----
>  drivers/net/idpf/idpf_rxtx.c                  |  2 +-
>  4 files changed, 69 insertions(+), 43 deletions(-)
> 
> diff --git a/drivers/common/idpf/idpf_common_rxtx.c b/drivers/common/idpf/idpf_common_rxtx.c
> index fc87e3e243..67c124a614 100644
> --- a/drivers/common/idpf/idpf_common_rxtx.c
> +++ b/drivers/common/idpf/idpf_common_rxtx.c
> @@ -276,14 +276,14 @@ idpf_qc_single_tx_queue_reset(struct idpf_tx_queue *txq)
>  	}
> 
>  	txe = txq->sw_ring;
> -	size = sizeof(struct idpf_flex_tx_desc) * txq->nb_tx_desc;
> +	size = sizeof(struct idpf_base_tx_desc) * txq->nb_tx_desc;
>  	for (i = 0; i < size; i++)
>  		((volatile char *)txq->tx_ring)[i] = 0;
> 
>  	prev = (uint16_t)(txq->nb_tx_desc - 1);
>  	for (i = 0; i < txq->nb_tx_desc; i++) {
> -		txq->tx_ring[i].qw1.cmd_dtype =
> -			rte_cpu_to_le_16(IDPF_TX_DESC_DTYPE_DESC_DONE);
> +		txq->tx_ring[i].qw1 =
> +			rte_cpu_to_le_64(IDPF_TX_DESC_DTYPE_DESC_DONE);
>  		txe[i].mbuf =  NULL;
>  		txe[i].last_id = i;
>  		txe[prev].next_id = i;
> @@ -823,6 +823,33 @@ idpf_calc_context_desc(uint64_t flags)
>  	return 0;
>  }
> 
> +/* set TSO context descriptor for single queue  */
> +static inline void
> +idpf_set_singleq_tso_ctx(struct rte_mbuf *mbuf,
> +			union idpf_tx_offload tx_offload,
> +			volatile struct idpf_base_tx_ctx_desc *ctx_desc)
> +{
> +	uint16_t cmd_dtype;
> +	uint32_t tso_len;
> +	uint8_t hdr_len;
> +
> +	if (tx_offload.l4_len == 0) {
> +		TX_LOG(DEBUG, "L4 length set to 0");
> +		return;
> +	}
> +
> +	hdr_len = tx_offload.l2_len +
> +		tx_offload.l3_len +
> +		tx_offload.l4_len;
> +	cmd_dtype = IDPF_TX_CTX_DESC_TSO;

The cmd_dtype for the base-mode context TSO descriptor should be:
cmd_dtype = IDPF_TX_DESC_DTYPE_CTX | IDPF_TX_CTX_DESC_TSO << IDPF_TXD_QW1_CMD_S;
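
In context, that would look roughly like the sketch below. This is only an
illustration of the suggestion, not the final driver code: the macro and field
names are taken from the quoted patch and the comment above, and it assumes
IDPF_TX_DESC_DTYPE_CTX occupies the low DTYPE bits, so the combined cmd_dtype
is OR'ed into qw1 without a further shift.

	/* Sketch only: fold the context DTYPE and the TSO command into one
	 * value carrying absolute bit positions, then build qw1 from it
	 * (no extra IDPF_TXD_CTX_QW1_CMD_S shift on cmd_dtype). */
	uint64_t cmd_dtype = IDPF_TX_DESC_DTYPE_CTX |
		((uint64_t)IDPF_TX_CTX_DESC_TSO << IDPF_TXD_QW1_CMD_S);

	ctx_desc->qw1 = rte_cpu_to_le_64(cmd_dtype |
		((uint64_t)tso_len << IDPF_TXD_CTX_QW1_TSO_LEN_S) |
		((uint64_t)mbuf->tso_segsz << IDPF_TXD_CTX_QW1_MSS_S));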

> +	tso_len = mbuf->pkt_len - hdr_len;
> +
> +	ctx_desc->qw1 |= ((uint64_t)cmd_dtype << IDPF_TXD_CTX_QW1_CMD_S) |
> +		((uint64_t)tso_len << IDPF_TXD_CTX_QW1_TSO_LEN_S) |
> +		((uint64_t)mbuf->tso_segsz << IDPF_TXD_CTX_QW1_MSS_S);
> +}

It seems better to also apply the field masks with & here, so an oversized value cannot cross into the neighboring qw1 fields. A rough sketch follows.
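
A minimal sketch of what that masking could look like; the *_M macros below are
hypothetical placeholder names (assumed to be the unshifted field-width masks),
not confirmed idpf definitions:

	/* Sketch only: mask each value to its field width before shifting,
	 * so a too-large value cannot spill into the adjacent qw1 fields.
	 * IDPF_TXD_CTX_QW1_*_M are hypothetical unshifted field masks. */
	ctx_desc->qw1 |= (((uint64_t)cmd_dtype & IDPF_TXD_CTX_QW1_CMD_M) <<
			IDPF_TXD_CTX_QW1_CMD_S) |
		(((uint64_t)tso_len & IDPF_TXD_CTX_QW1_TSO_LEN_M) <<
			IDPF_TXD_CTX_QW1_TSO_LEN_S) |
		(((uint64_t)mbuf->tso_segsz & IDPF_TXD_CTX_QW1_MSS_M) <<
			IDPF_TXD_CTX_QW1_MSS_S);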

>...

Regards,
Wenjun


Thread overview: 16+ messages
2023-08-25  7:21 Simei Su
2023-08-25  7:48 ` Wu, Wenjun1 [this message]
2023-08-25  8:14 ` Zhang, Qi Z
2023-09-04  7:02 ` [PATCH v2] common/idpf: refactor " Simei Su
2023-09-08 10:28   ` [PATCH v3] " Simei Su
2023-09-13  5:57     ` Wu, Wenjun1
2023-09-13  7:45       ` Zhang, Qi Z
2023-09-14  1:47         ` Zhang, Qi Z
2023-09-13  6:07     ` Xing, Beilei
2023-09-14  1:50     ` [PATCH v4 0/3] refactor single queue Tx data path Simei Su
2023-09-14  1:50       ` [PATCH v4 1/3] common/idpf: " Simei Su
2023-09-14  1:50       ` [PATCH v4 2/3] net/idpf: refine Tx queue setup Simei Su
2023-09-14  1:50       ` [PATCH v4 3/3] net/cpfl: " Simei Su
2023-09-14  1:54       ` [PATCH v4 0/3] refactor single queue Tx data path Xing, Beilei
2023-09-14  6:37       ` [PATCH v5] common/idpf: " Simei Su
2023-09-14 11:08         ` Zhang, Qi Z
