* Re: [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
@ 2022-08-30 2:22 Xu, Ke1
0 siblings, 0 replies; 4+ messages in thread
From: Xu, Ke1 @ 2022-08-30 2:22 UTC (permalink / raw)
To: Zhang, Peng1X; +Cc: Xing, Beilei, dev, Wu, Jingjing
> Subject: [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
> Date: Sat, 13 Aug 2022 00:52:22 +0800
> Message-ID: <20220812165223.470777-1-peng1x.zhang@intel.com> (raw)
>
> From: Peng Zhang <peng1x.zhang@intel.com>
>
> Hardware limits the max buffer size per Tx descriptor to (16K-1)B.
> So when TSO is enabled in the unencrypted scenario, the mbuf data size may
> exceed the limit and be flagged as malicious behavior by the NIC.
>
> This patch adds support for splitting such large buffers across multiple
> Tx descriptors.
>
> Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
Tested and passed.
Regards,
Tested-by: Ke Xu <ke1.xu@intel.com>
> ---
> drivers/net/iavf/iavf_rxtx.c | 66 ++++++++++++++++++++++++++++++++----
> 1 file changed, 60 insertions(+), 6 deletions(-)
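For context, the (16K-1)B limit means a single TSO mbuf segment may need several data descriptors. Below is a minimal standalone sketch of that arithmetic; the constant value and helper name are illustrative, not taken from the driver.

#include <stdint.h>
#include <stdio.h>

#define MAX_DATA_PER_TXD 16383u	/* (16K-1)B per-descriptor limit */

/* Data descriptors needed for one mbuf segment of data_len bytes. */
static inline uint16_t
txd_count_for_seg(uint32_t data_len)
{
	return (uint16_t)((data_len + MAX_DATA_PER_TXD - 1) / MAX_DATA_PER_TXD);
}

int main(void)
{
	/* A 64KB TSO payload: four full 16383B descriptors plus one for the
	 * remaining 4 bytes, i.e. 5 descriptors in total. */
	printf("%u\n", txd_count_for_seg(65536));
	return 0;
}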
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
@ 2022-08-26 14:37 Buckley, Daniel M
0 siblings, 0 replies; 4+ messages in thread
From: Buckley, Daniel M @ 2022-08-26 14:37 UTC (permalink / raw)
To: Jiang, YuX, dev
From: peng1x.zhang@intel.com
To: dev@dpdk.org
Cc: beilei.xing@intel.com, jingjing.wu@intel.com,
Peng Zhang <peng1x.zhang@intel.com>
Subject: [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
Date: Sat, 13 Aug 2022 00:52:22 +0800
Message-Id: <20220812165223.470777-1-peng1x.zhang@intel.com>
X-Mailer: git-send-email 2.25.1
From: Peng Zhang <peng1x.zhang@intel.com>
Hardware limits the max buffer size per Tx descriptor to (16K-1)B.
So when TSO is enabled in the unencrypted scenario, the mbuf data size may
exceed the limit and be flagged as malicious behavior by the NIC.
This patch adds support for splitting such large buffers across multiple Tx descriptors.
Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
Tested-by: Daniel M Buckley <daniel.m.buckley@intel.com>
drivers/net/iavf/iavf_rxtx.c | 66 ++++++++++++++++++++++++++++++++----
1 file changed, 60 insertions(+), 6 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index dfd021889e..adec58e90a 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2642,6 +2642,47 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
return NULL;
}
+/* HW requires that TX buffer size ranges from 1B up to (16K-1)B. */
+#define IAVF_MAX_DATA_PER_TXD \
+ (IAVF_TXD_QW1_TX_BUF_SZ_MASK >> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+static inline void
+iavf_fill_unencrypt_desc(volatile struct iavf_tx_desc *txd, struct rte_mbuf *m,
+ volatile uint64_t desc_template, struct iavf_tx_entry *txe,
+ volatile struct iavf_tx_desc *txr, struct iavf_tx_entry *txe_ring,
+ int desc_idx_last)
+{
+ /* Setup TX Descriptor */
+ int desc_idx;
+ uint16_t slen = m->data_len;
+ uint64_t buf_dma_addr = rte_mbuf_data_iova(m);
+ struct iavf_tx_entry *txn = &txe_ring[txe->next_id];
+
+ while ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
+ unlikely(slen > IAVF_MAX_DATA_PER_TXD)) {
+ txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DATA |
+ (uint64_t)IAVF_MAX_DATA_PER_TXD <<
+ IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) | desc_template;
+
+ buf_dma_addr += IAVF_MAX_DATA_PER_TXD;
+ slen -= IAVF_MAX_DATA_PER_TXD;
+
+ txe->last_id = desc_idx_last;
+ desc_idx = txe->next_id;
+ txe = txn;
+ txd = &txr[desc_idx];
+ txn = &txe_ring[txe->next_id];
+ }
+
+ txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64((uint64_t)slen << IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) |
+ desc_template;
+}
+
/* TX function */
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2650,6 +2691,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
volatile struct iavf_tx_desc *txr = txq->tx_ring;
struct iavf_tx_entry *txe_ring = txq->sw_ring;
struct iavf_tx_entry *txe, *txn;
+ volatile struct iavf_tx_desc *txd;
struct rte_mbuf *mb, *mb_seg;
uint16_t desc_idx, desc_idx_last;
uint16_t idx;
@@ -2781,6 +2823,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc = (volatile struct iavf_tx_desc *)
&txr[desc_idx];
+ txd = &txr[desc_idx];
txn = &txe_ring[txe->next_id];
RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
@@ -2788,10 +2831,16 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
rte_pktmbuf_free_seg(txe->mbuf);
txe->mbuf = mb_seg;
- iavf_fill_data_desc(ddesc, mb_seg,
- ddesc_template, tlen, ipseclen);
- IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+ if (nb_desc_ipsec) {
+ iavf_fill_data_desc(ddesc, mb_seg,
+ ddesc_template, tlen, ipseclen);
+ IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+ } else {
+ iavf_fill_unencrypt_desc(txd, mb_seg,
+ ddesc_template, txe, txr, txe_ring, desc_idx_last);
+ IAVF_DUMP_TX_DESC(txq, txd, desc_idx);
+ }
txe->last_id = desc_idx_last;
desc_idx = txe->next_id;
@@ -2816,10 +2865,15 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq->nb_used = 0;
}
- ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+ if (nb_desc_ipsec) {
+ ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
IAVF_TXD_DATA_QW1_CMD_SHIFT);
-
- IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+ IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+ } else {
+ txd->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+ IAVF_TXD_DATA_QW1_CMD_SHIFT);
+ IAVF_DUMP_TX_DESC(txq, txd, desc_idx - 1);
+ }
}
end_of_tx:
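For readers tracing iavf_fill_unencrypt_desc() above, here is a standalone sketch of how the splitting loop walks one oversized segment. It only prints the (address, length) pairs instead of writing hardware descriptors, and the address and length below are made-up example values.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define IAVF_MAX_DATA_PER_TXD 16383u	/* (16K-1)B, as in the patch */

/* Mirrors the while loop: emit full-size chunks, then one trailing chunk. */
static void
split_segment(uint64_t buf_dma_addr, uint16_t slen)
{
	while (slen > IAVF_MAX_DATA_PER_TXD) {
		printf("desc: addr=0x%" PRIx64 ", len=%u\n",
		       buf_dma_addr, IAVF_MAX_DATA_PER_TXD);
		buf_dma_addr += IAVF_MAX_DATA_PER_TXD;
		slen -= IAVF_MAX_DATA_PER_TXD;
	}
	/* Remainder descriptor, matching the store after the loop. */
	printf("desc: addr=0x%" PRIx64 ", len=%u\n", buf_dma_addr, (unsigned int)slen);
}

int main(void)
{
	/* A 40000B segment is split as 16383 + 16383 + 7234 over three descriptors. */
	split_segment(0x100000, 40000);
	return 0;
}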
^ permalink raw reply [flat|nested] 4+ messages in thread
* [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
@ 2022-08-12 16:52 peng1x.zhang
2022-08-30 7:52 ` Yang, Qiming
0 siblings, 1 reply; 4+ messages in thread
From: peng1x.zhang @ 2022-08-12 16:52 UTC (permalink / raw)
To: dev; +Cc: beilei.xing, jingjing.wu, Peng Zhang
From: Peng Zhang <peng1x.zhang@intel.com>
Hardware limits the max buffer size per Tx descriptor to (16K-1)B.
So when TSO is enabled in the unencrypted scenario, the mbuf data size may
exceed the limit and be flagged as malicious behavior by the NIC.
This patch adds support for splitting such large buffers across multiple Tx descriptors.
Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 66 ++++++++++++++++++++++++++++++++----
1 file changed, 60 insertions(+), 6 deletions(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index dfd021889e..adec58e90a 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2642,6 +2642,47 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
return NULL;
}
+/* HW requires that TX buffer size ranges from 1B up to (16K-1)B. */
+#define IAVF_MAX_DATA_PER_TXD \
+ (IAVF_TXD_QW1_TX_BUF_SZ_MASK >> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)
+
+static inline void
+iavf_fill_unencrypt_desc(volatile struct iavf_tx_desc *txd, struct rte_mbuf *m,
+ volatile uint64_t desc_template, struct iavf_tx_entry *txe,
+ volatile struct iavf_tx_desc *txr, struct iavf_tx_entry *txe_ring,
+ int desc_idx_last)
+{
+ /* Setup TX Descriptor */
+ int desc_idx;
+ uint16_t slen = m->data_len;
+ uint64_t buf_dma_addr = rte_mbuf_data_iova(m);
+ struct iavf_tx_entry *txn = &txe_ring[txe->next_id];
+
+ while ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
+ unlikely(slen > IAVF_MAX_DATA_PER_TXD)) {
+ txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DATA |
+ (uint64_t)IAVF_MAX_DATA_PER_TXD <<
+ IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) | desc_template;
+
+ buf_dma_addr += IAVF_MAX_DATA_PER_TXD;
+ slen -= IAVF_MAX_DATA_PER_TXD;
+
+ txe->last_id = desc_idx_last;
+ desc_idx = txe->next_id;
+ txe = txn;
+ txd = &txr[desc_idx];
+ txn = &txe_ring[txe->next_id];
+ }
+
+ txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
+ txd->cmd_type_offset_bsz =
+ rte_cpu_to_le_64((uint64_t)slen << IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) |
+ desc_template;
+}
+
/* TX function */
uint16_t
iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -2650,6 +2691,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
volatile struct iavf_tx_desc *txr = txq->tx_ring;
struct iavf_tx_entry *txe_ring = txq->sw_ring;
struct iavf_tx_entry *txe, *txn;
+ volatile struct iavf_tx_desc *txd;
struct rte_mbuf *mb, *mb_seg;
uint16_t desc_idx, desc_idx_last;
uint16_t idx;
@@ -2781,6 +2823,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ddesc = (volatile struct iavf_tx_desc *)
&txr[desc_idx];
+ txd = &txr[desc_idx];
txn = &txe_ring[txe->next_id];
RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
@@ -2788,10 +2831,16 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
rte_pktmbuf_free_seg(txe->mbuf);
txe->mbuf = mb_seg;
- iavf_fill_data_desc(ddesc, mb_seg,
- ddesc_template, tlen, ipseclen);
- IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+ if (nb_desc_ipsec) {
+ iavf_fill_data_desc(ddesc, mb_seg,
+ ddesc_template, tlen, ipseclen);
+ IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
+ } else {
+ iavf_fill_unencrypt_desc(txd, mb_seg,
+ ddesc_template, txe, txr, txe_ring, desc_idx_last);
+ IAVF_DUMP_TX_DESC(txq, txd, desc_idx);
+ }
txe->last_id = desc_idx_last;
desc_idx = txe->next_id;
@@ -2816,10 +2865,15 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txq->nb_used = 0;
}
- ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+ if (nb_desc_ipsec) {
+ ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
IAVF_TXD_DATA_QW1_CMD_SHIFT);
-
- IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+ IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
+ } else {
+ txd->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
+ IAVF_TXD_DATA_QW1_CMD_SHIFT);
+ IAVF_DUMP_TX_DESC(txq, txd, desc_idx - 1);
+ }
}
end_of_tx:
--
2.25.1
^ permalink raw reply [flat|nested] 4+ messages in thread
* RE: [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
2022-08-12 16:52 peng1x.zhang
@ 2022-08-30 7:52 ` Yang, Qiming
0 siblings, 0 replies; 4+ messages in thread
From: Yang, Qiming @ 2022-08-30 7:52 UTC (permalink / raw)
To: Zhang, Peng1X, dev; +Cc: Xing, Beilei, Wu, Jingjing, Zhang, Peng1X
Please retest with TCP, UDP, tunnel-TCP and tunnel-UDP packets.
> -----Original Message-----
> From: peng1x.zhang@intel.com <peng1x.zhang@intel.com>
> Sent: Saturday, August 13, 2022 12:52 AM
> To: dev@dpdk.org
> Cc: Xing, Beilei <beilei.xing@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Peng1X <peng1x.zhang@intel.com>
> Subject: [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases
This should be a bug-fix patch.
>
> From: Peng Zhang <peng1x.zhang@intel.com>
>
No need for this line.
> Hardware limits the max buffer size per Tx descriptor to (16K-1)B.
> So when TSO is enabled in the unencrypted scenario, the mbuf data size may
> exceed the limit and be flagged as malicious behavior by the NIC.
So this patch is actually a fix for tunnel TSO not being enabled.
>
> This patch adds support for splitting such large buffers across multiple Tx descriptors.
>
> Signed-off-by: Peng Zhang <peng1x.zhang@intel.com>
> ---
> drivers/net/iavf/iavf_rxtx.c | 66 ++++++++++++++++++++++++++++++++----
> 1 file changed, 60 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
> index dfd021889e..adec58e90a 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -2642,6 +2642,47 @@ iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq,
> return NULL;
> }
>
> +/* HW requires that TX buffer size ranges from 1B up to (16K-1)B. */
> +#define IAVF_MAX_DATA_PER_TXD \
> +	(IAVF_TXD_QW1_TX_BUF_SZ_MASK >> IAVF_TXD_QW1_TX_BUF_SZ_SHIFT)
> +
> +static inline void
> +iavf_fill_unencrypt_desc(volatile struct iavf_tx_desc *txd, struct rte_mbuf *m,
> +		volatile uint64_t desc_template, struct iavf_tx_entry *txe,
> +		volatile struct iavf_tx_desc *txr, struct iavf_tx_entry *txe_ring,
> +		int desc_idx_last)
> +{
> + /* Setup TX Descriptor */
> + int desc_idx;
> + uint16_t slen = m->data_len;
> + uint64_t buf_dma_addr = rte_mbuf_data_iova(m);
> + struct iavf_tx_entry *txn = &txe_ring[txe->next_id];
> +
> + while ((m->ol_flags & RTE_MBUF_F_TX_TCP_SEG) &&
??? Lacks UDP? The check only covers the TCP segmentation flag.
> + unlikely(slen > IAVF_MAX_DATA_PER_TXD)) {
> +		txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
> +
> +		txd->cmd_type_offset_bsz =
> +			rte_cpu_to_le_64(IAVF_TX_DESC_DTYPE_DATA |
> +			(uint64_t)IAVF_MAX_DATA_PER_TXD <<
> +			IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) | desc_template;
> +
> + buf_dma_addr += IAVF_MAX_DATA_PER_TXD;
> + slen -= IAVF_MAX_DATA_PER_TXD;
> +
> + txe->last_id = desc_idx_last;
> + desc_idx = txe->next_id;
> + txe = txn;
> + txd = &txr[desc_idx];
> + txn = &txe_ring[txe->next_id];
> + }
> +
> + txd->buffer_addr = rte_cpu_to_le_64(buf_dma_addr);
> +	txd->cmd_type_offset_bsz =
> +		rte_cpu_to_le_64((uint64_t)slen << IAVF_TXD_DATA_QW1_TX_BUF_SZ_SHIFT) |
> +		desc_template;
> +}
> +
> /* TX function */
> uint16_t
> iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> @@ -2650,6 +2691,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> volatile struct iavf_tx_desc *txr = txq->tx_ring;
> struct iavf_tx_entry *txe_ring = txq->sw_ring;
> struct iavf_tx_entry *txe, *txn;
> + volatile struct iavf_tx_desc *txd;
> struct rte_mbuf *mb, *mb_seg;
> uint16_t desc_idx, desc_idx_last;
> uint16_t idx;
> @@ -2781,6 +2823,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> ddesc = (volatile struct iavf_tx_desc *)
> &txr[desc_idx];
>
> + txd = &txr[desc_idx];
> txn = &txe_ring[txe->next_id];
> RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);
>
> @@ -2788,10 +2831,16 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> rte_pktmbuf_free_seg(txe->mbuf);
>
> txe->mbuf = mb_seg;
> - iavf_fill_data_desc(ddesc, mb_seg,
> - ddesc_template, tlen, ipseclen);
>
> - IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
> + if (nb_desc_ipsec) {
> + iavf_fill_data_desc(ddesc, mb_seg,
> + ddesc_template, tlen, ipseclen);
> + IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx);
> + } else {
> + iavf_fill_unencrypt_desc(txd, mb_seg,
> +			ddesc_template, txe, txr, txe_ring, desc_idx_last);
> + IAVF_DUMP_TX_DESC(txq, txd, desc_idx);
> + }
>
> txe->last_id = desc_idx_last;
> desc_idx = txe->next_id;
> @@ -2816,10 +2865,15 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> txq->nb_used = 0;
> }
>
> -	ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
> +	if (nb_desc_ipsec) {
> +		ddesc->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
> 			IAVF_TXD_DATA_QW1_CMD_SHIFT);
> -
> -	IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
> +		IAVF_DUMP_TX_DESC(txq, ddesc, desc_idx - 1);
> +	} else {
> +		txd->cmd_type_offset_bsz |= rte_cpu_to_le_64(ddesc_cmd <<
> +			IAVF_TXD_DATA_QW1_CMD_SHIFT);
> +		IAVF_DUMP_TX_DESC(txq, txd, desc_idx - 1);
> +	}
> }
>
> end_of_tx:
> --
> 2.25.1
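To make the UDP remark above concrete, here is a hedged sketch of the predicate the review appears to ask for. The flag constants below are illustrative stand-ins, not the real bit values; in DPDK the actual flags are RTE_MBUF_F_TX_TCP_SEG and RTE_MBUF_F_TX_UDP_SEG from rte_mbuf_core.h.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-ins for the mbuf Tx offload flags. */
#define F_TX_TCP_SEG (1ULL << 0)
#define F_TX_UDP_SEG (1ULL << 1)
#define MAX_DATA_PER_TXD 16383u

/* A segment needs splitting when any segmentation offload (TCP or UDP) is
 * requested and its length exceeds the per-descriptor limit. */
static inline bool
needs_split(uint64_t ol_flags, uint16_t slen)
{
	return (ol_flags & (F_TX_TCP_SEG | F_TX_UDP_SEG)) &&
	       slen > MAX_DATA_PER_TXD;
}

int main(void)
{
	/* e.g. a 20000B UDP-segmented payload would now also be split. */
	return needs_split(F_TX_UDP_SEG, 20000) ? 0 : 1;
}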
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~2022-08-30 7:52 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-08-30 2:22 [PATCH 1/2] net/iavf: enable TSO offloading for tunnel cases Xu, Ke1
-- strict thread matches above, loose matches on Subject: below --
2022-08-26 14:37 Buckley, Daniel M
2022-08-12 16:52 peng1x.zhang
2022-08-30 7:52 ` Yang, Qiming