DPDK patches and discussions
* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-26 17:10 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Mingjin Ye
@ 2022-10-26  9:52 ` lihuisong (C)
  2022-10-27 11:02   ` Ye, MingjinX
  2022-10-26 17:10 ` [PATCH v4 2/2] net/ice: fix vlan offload Mingjin Ye
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 36+ messages in thread
From: lihuisong (C) @ 2022-10-26  9:52 UTC (permalink / raw)
  To: Mingjin Ye, dev; +Cc: stable, yidingx.zhou, Aman Singh, Yuying Zhang


On 2022/10/27 1:10, Mingjin Ye wrote:
> After setting vlan offload in testpmd, the result is not updated
> to rxq. Therefore, the queue needs to be reconfigured after
> executing the "vlan offload" related commands.
>
> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> Cc: stable@dpdk.org
>
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
>   app/test-pmd/cmdline.c | 1 +
>   1 file changed, 1 insertion(+)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index 17be2de402..ce125f549f 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
>   	else
>   		vlan_extend_set(port_id, on);
>   
> +	cmd_reconfig_device_queue(port_id, 1, 1);
This means that queue offloads need to be updated by re-calling
dev_configure and setting up all queues, right?
If so, this adds a usage limitation.
>   	return;
>   }
>   


* [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
@ 2022-10-26 17:10 Mingjin Ye
  2022-10-26  9:52 ` lihuisong (C)
                   ` (3 more replies)
  0 siblings, 4 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-10-26 17:10 UTC (permalink / raw)
  To: dev; +Cc: stable, yidingx.zhou, Mingjin Ye, Aman Singh, Yuying Zhang

After setting VLAN offload in testpmd, the result is not propagated
to the Rx queues. Therefore, the queues need to be reconfigured after
executing the "vlan offload" related commands.
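
For reference, the commands in question are those handled by
cmd_vlan_offload_parsed(), for example (port 0 and queue 0 chosen
arbitrarily):

    testpmd> vlan set strip on 0
    testpmd> vlan set stripq on 0,0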

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 app/test-pmd/cmdline.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
 	else
 		vlan_extend_set(port_id, on);
 
+	cmd_reconfig_device_queue(port_id, 1, 1);
 	return;
 }
 
-- 
2.34.1



* [PATCH v4 2/2] net/ice: fix vlan offload
  2022-10-26 17:10 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Mingjin Ye
  2022-10-26  9:52 ` lihuisong (C)
@ 2022-10-26 17:10 ` Mingjin Ye
  2022-10-27  8:36   ` Huang, ZhiminX
  2022-10-27  8:36 ` [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Huang, ZhiminX
  2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
  3 siblings, 1 reply; 36+ messages in thread
From: Mingjin Ye @ 2022-10-26 17:10 UTC (permalink / raw)
  To: dev
  Cc: stable, yidingx.zhou, Mingjin Ye, Bruce Richardson,
	Konstantin Ananyev, Qiming Yang, Qi Zhang, Wenzhuo Lu,
	Junyu Jiang, Leyi Rong, Wisam Jaddo, Chenbo Xia, Hemant Agrawal,
	Jerin Jacob, Ajit Khaparde

The VLAN tag and flag in the Rx descriptor are not processed on the vector
paths, so the upper application can't fetch the TCI from the mbuf.

This commit adds handling of VLAN Rx offloading.
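
For context, a minimal sketch (not part of this patch) of how an
application consumes the stripped tag once the Rx path fills it in:

    #include <stdio.h>
    #include <rte_mbuf.h>

    /* Print the VLAN TCI of packets whose tag was stripped by the PMD. */
    static void
    dump_vlan_tci(struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
    	uint16_t i;

    	for (i = 0; i < nb_pkts; i++) {
    		if (pkts[i]->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
    			printf("pkt %u: vlan_tci=0x%04x\n",
    			       (unsigned int)i,
    			       (unsigned int)pkts[i]->vlan_tci);
    	}
    }

Without this fix, the vector Rx paths leave the VLAN flag and vlan_tci
unset, so the condition above never matches even with stripping enabled.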

Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Fixes: 295968d17407 ("ethdev: add namespace")
Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>

v3:
	* Fix macros in ice_rxtx_vec_sse.c source file.
v4:
	* Fix ice_rx_desc_to_olflags_v define in ice_rxtx_vec_sse.c source file.
---
 drivers/net/ice/ice_rxtx_vec_avx2.c   | 136 +++++++++++++++++-----
 drivers/net/ice/ice_rxtx_vec_avx512.c | 156 +++++++++++++++++++++-----
 drivers/net/ice/ice_rxtx_vec_sse.c    | 132 ++++++++++++++++------
 3 files changed, 335 insertions(+), 89 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 31d6af42fd..888666f206 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -238,6 +238,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			RTE_MBUF_F_RX_RSS_HASH, 0);
 
+
 	RTE_SET_USED(avx_aligned); /* for 32B descriptors we don't use this */
 
 	uint16_t i, received;
@@ -474,7 +475,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -529,33 +530,112 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
 				 */
-				__m256i rss_hash6_7 =
-					_mm256_slli_epi64(raw_desc_bh6_7, 32);
-				__m256i rss_hash4_5 =
-					_mm256_slli_epi64(raw_desc_bh4_5, 32);
-				__m256i rss_hash2_3 =
-					_mm256_slli_epi64(raw_desc_bh2_3, 32);
-				__m256i rss_hash0_1 =
-					_mm256_slli_epi64(raw_desc_bh0_1, 32);
-
-				__m256i rss_hash_msk =
-					_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
-							 0xFFFFFFFF, 0, 0, 0);
-
-				rss_hash6_7 = _mm256_and_si256
-						(rss_hash6_7, rss_hash_msk);
-				rss_hash4_5 = _mm256_and_si256
-						(rss_hash4_5, rss_hash_msk);
-				rss_hash2_3 = _mm256_and_si256
-						(rss_hash2_3, rss_hash_msk);
-				rss_hash0_1 = _mm256_and_si256
-						(rss_hash0_1, rss_hash_msk);
-
-				mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
-				mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
-				mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
-				mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
-			} /* if() on RSS hash parsing */
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					__m256i rss_hash6_7 =
+						_mm256_slli_epi64(raw_desc_bh6_7, 32);
+					__m256i rss_hash4_5 =
+						_mm256_slli_epi64(raw_desc_bh4_5, 32);
+					__m256i rss_hash2_3 =
+						_mm256_slli_epi64(raw_desc_bh2_3, 32);
+					__m256i rss_hash0_1 =
+						_mm256_slli_epi64(raw_desc_bh0_1, 32);
+
+					__m256i rss_hash_msk =
+						_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
+								0xFFFFFFFF, 0, 0, 0);
+
+					rss_hash6_7 = _mm256_and_si256
+							(rss_hash6_7, rss_hash_msk);
+					rss_hash4_5 = _mm256_and_si256
+							(rss_hash4_5, rss_hash_msk);
+					rss_hash2_3 = _mm256_and_si256
+							(rss_hash2_3, rss_hash_msk);
+					rss_hash0_1 = _mm256_and_si256
+							(rss_hash0_1, rss_hash_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
+				} /* if() on RSS hash parsing */
+
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_VLAN) {
+					/* merge the status/error-1 bits into one register */
+					const __m256i status1_4_7 =
+							_mm256_unpacklo_epi32(raw_desc_bh6_7,
+							raw_desc_bh4_5);
+					const __m256i status1_0_3 =
+							_mm256_unpacklo_epi32(raw_desc_bh2_3,
+							raw_desc_bh0_1);
+
+					const __m256i status1_0_7 =
+							_mm256_unpacklo_epi64(status1_4_7,
+							status1_0_3);
+
+					const __m256i l2tag2p_flag_mask =
+							_mm256_set1_epi32(1 << 11);
+
+					__m256i l2tag2p_flag_bits =
+							_mm256_and_si256
+							(status1_0_7, l2tag2p_flag_mask);
+
+					l2tag2p_flag_bits =
+							_mm256_srli_epi32(l2tag2p_flag_bits,
+									11);
+
+					__m256i vlan_flags = _mm256_setzero_si256();
+					const __m256i l2tag2_flags_shuf =
+							_mm256_set_epi8(0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									/* end up 128-bits */
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0,
+									RTE_MBUF_F_RX_VLAN |
+									RTE_MBUF_F_RX_VLAN_STRIPPED,
+									0);
+					vlan_flags =
+							_mm256_shuffle_epi8(l2tag2_flags_shuf,
+							l2tag2p_flag_bits);
+
+					/* merge with vlan_flags */
+					mbuf_flags = _mm256_or_si256
+							(mbuf_flags, vlan_flags);
+
+					/* L2TAG2_2 */
+					__m256i vlan_tci6_7 =
+							_mm256_slli_si256(raw_desc_bh6_7, 4);
+					__m256i vlan_tci4_5 =
+							_mm256_slli_si256(raw_desc_bh4_5, 4);
+					__m256i vlan_tci2_3 =
+							_mm256_slli_si256(raw_desc_bh2_3, 4);
+					__m256i vlan_tci0_1 =
+							_mm256_slli_si256(raw_desc_bh0_1, 4);
+
+					const __m256i vlan_tci_msk =
+							_mm256_set_epi32(0, 0xFFFF0000, 0, 0,
+							0, 0xFFFF0000, 0, 0);
+
+					vlan_tci6_7 = _mm256_and_si256
+									(vlan_tci6_7, vlan_tci_msk);
+					vlan_tci4_5 = _mm256_and_si256
+									(vlan_tci4_5, vlan_tci_msk);
+					vlan_tci2_3 = _mm256_and_si256
+									(vlan_tci2_3, vlan_tci_msk);
+					vlan_tci0_1 = _mm256_and_si256
+									(vlan_tci0_1, vlan_tci_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, vlan_tci6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, vlan_tci4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, vlan_tci2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, vlan_tci0_1);
+				}
+			}
 #endif
 		}
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bfd5152df..e5cf777cf5 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -338,6 +338,8 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED,
 			RTE_MBUF_F_RX_RSS_HASH, 0);
 
+
+
 	uint16_t i, received;
 
 	for (i = 0, received = 0; i < nb_pkts;
@@ -585,7 +587,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -640,33 +642,131 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
 				 */
-				__m256i rss_hash6_7 =
-					_mm256_slli_epi64(raw_desc_bh6_7, 32);
-				__m256i rss_hash4_5 =
-					_mm256_slli_epi64(raw_desc_bh4_5, 32);
-				__m256i rss_hash2_3 =
-					_mm256_slli_epi64(raw_desc_bh2_3, 32);
-				__m256i rss_hash0_1 =
-					_mm256_slli_epi64(raw_desc_bh0_1, 32);
-
-				__m256i rss_hash_msk =
-					_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
-							 0xFFFFFFFF, 0, 0, 0);
-
-				rss_hash6_7 = _mm256_and_si256
-						(rss_hash6_7, rss_hash_msk);
-				rss_hash4_5 = _mm256_and_si256
-						(rss_hash4_5, rss_hash_msk);
-				rss_hash2_3 = _mm256_and_si256
-						(rss_hash2_3, rss_hash_msk);
-				rss_hash0_1 = _mm256_and_si256
-						(rss_hash0_1, rss_hash_msk);
-
-				mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
-				mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
-				mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
-				mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
-			} /* if() on RSS hash parsing */
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					__m256i rss_hash6_7 =
+						_mm256_slli_epi64(raw_desc_bh6_7, 32);
+					__m256i rss_hash4_5 =
+						_mm256_slli_epi64(raw_desc_bh4_5, 32);
+					__m256i rss_hash2_3 =
+						_mm256_slli_epi64(raw_desc_bh2_3, 32);
+					__m256i rss_hash0_1 =
+						_mm256_slli_epi64(raw_desc_bh0_1, 32);
+
+					__m256i rss_hash_msk =
+						_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
+								0xFFFFFFFF, 0, 0, 0);
+
+					rss_hash6_7 = _mm256_and_si256
+							(rss_hash6_7, rss_hash_msk);
+					rss_hash4_5 = _mm256_and_si256
+							(rss_hash4_5, rss_hash_msk);
+					rss_hash2_3 = _mm256_and_si256
+							(rss_hash2_3, rss_hash_msk);
+					rss_hash0_1 = _mm256_and_si256
+							(rss_hash0_1, rss_hash_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
+				} /* if() on RSS hash parsing */
+
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+					/* merge the status/error-1 bits into one register */
+					const __m256i status1_4_7 =
+						_mm256_unpacklo_epi32
+						(raw_desc_bh6_7,
+						raw_desc_bh4_5);
+					const __m256i status1_0_3 =
+						_mm256_unpacklo_epi32
+						(raw_desc_bh2_3,
+						raw_desc_bh0_1);
+
+					const __m256i status1_0_7 =
+						_mm256_unpacklo_epi64
+						(status1_4_7, status1_0_3);
+
+					const __m256i l2tag2p_flag_mask =
+						_mm256_set1_epi32
+						(1 << 11);
+
+					__m256i l2tag2p_flag_bits =
+						_mm256_and_si256
+						(status1_0_7,
+						l2tag2p_flag_mask);
+
+					l2tag2p_flag_bits =
+						_mm256_srli_epi32
+						(l2tag2p_flag_bits,
+						11);
+					const __m256i l2tag2_flags_shuf =
+						_mm256_set_epi8
+							(0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							/* end up 128-bits */
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
+							0);
+					__m256i vlan_flags =
+						_mm256_shuffle_epi8
+							(l2tag2_flags_shuf,
+							l2tag2p_flag_bits);
+
+					/* merge with vlan_flags */
+					mbuf_flags = _mm256_or_si256
+							(mbuf_flags,
+							vlan_flags);
+
+					/* L2TAG2_2 */
+					__m256i vlan_tci6_7 =
+						_mm256_slli_si256
+							(raw_desc_bh6_7, 4);
+					__m256i vlan_tci4_5 =
+						_mm256_slli_si256
+							(raw_desc_bh4_5, 4);
+					__m256i vlan_tci2_3 =
+						_mm256_slli_si256
+							(raw_desc_bh2_3, 4);
+					__m256i vlan_tci0_1 =
+						_mm256_slli_si256
+							(raw_desc_bh0_1, 4);
+
+					const __m256i vlan_tci_msk =
+						_mm256_set_epi32
+						(0, 0xFFFF0000, 0, 0,
+						0, 0xFFFF0000, 0, 0);
+
+					vlan_tci6_7 = _mm256_and_si256
+							(vlan_tci6_7,
+							vlan_tci_msk);
+					vlan_tci4_5 = _mm256_and_si256
+							(vlan_tci4_5,
+							vlan_tci_msk);
+					vlan_tci2_3 = _mm256_and_si256
+							(vlan_tci2_3,
+							vlan_tci_msk);
+					vlan_tci0_1 = _mm256_and_si256
+							(vlan_tci0_1,
+							vlan_tci_msk);
+
+					mb6_7 = _mm256_or_si256
+							(mb6_7, vlan_tci6_7);
+					mb4_5 = _mm256_or_si256
+							(mb4_5, vlan_tci4_5);
+					mb2_3 = _mm256_or_si256
+							(mb2_3, vlan_tci2_3);
+					mb0_1 = _mm256_or_si256
+							(mb0_1, vlan_tci0_1);
+				}
+			}
 #endif
 		}
 
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index fd94cedde3..cc5b8510dc 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -100,9 +100,15 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
 	ICE_PCI_REG_WC_WRITE(rxq->qrx_tail, rx_id);
 }
 
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+static inline void
+ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4], __m128i descs_bh[4],
+			 struct rte_mbuf **rx_pkts)
+#else
 static inline void
 ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 			 struct rte_mbuf **rx_pkts)
+#endif
 {
 	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -214,6 +220,38 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	/* merge the flags */
 	flags = _mm_or_si128(flags, rss_vlan);
 
+ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+		const __m128i l2tag2_mask =
+			_mm_set1_epi32(1 << 11);
+		const __m128i vlan_tci0_1 =
+			_mm_unpacklo_epi32(descs_bh[0], descs_bh[1]);
+		const __m128i vlan_tci2_3 =
+			_mm_unpacklo_epi32(descs_bh[2], descs_bh[3]);
+		const __m128i vlan_tci0_3 =
+			_mm_unpacklo_epi64(vlan_tci0_1, vlan_tci2_3);
+
+		__m128i vlan_bits = _mm_and_si128(vlan_tci0_3, l2tag2_mask);
+
+		vlan_bits = _mm_srli_epi32(vlan_bits, 11);
+
+		const __m128i vlan_flags_shuf =
+			_mm_set_epi8(0, 0, 0, 0,
+					0, 0, 0, 0,
+					0, 0, 0, 0,
+					0, 0,
+					RTE_MBUF_F_RX_VLAN |
+					RTE_MBUF_F_RX_VLAN_STRIPPED,
+					0);
+
+		const __m128i vlan_flags = _mm_shuffle_epi8(vlan_flags_shuf, vlan_bits);
+
+		/* merge with vlan_flags */
+		flags = _mm_or_si128(flags, vlan_flags);
+	}
+#endif
+
 	if (rxq->fdir_enabled) {
 		const __m128i fdir_id0_1 =
 			_mm_unpackhi_epi32(descs[0], descs[1]);
@@ -405,6 +443,9 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	     pos += ICE_DESCS_PER_LOOP,
 	     rxdp += ICE_DESCS_PER_LOOP) {
 		__m128i descs[ICE_DESCS_PER_LOOP];
+		#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		__m128i descs_bh[ICE_DESCS_PER_LOOP];
+		#endif
 		__m128i pkt_mb0, pkt_mb1, pkt_mb2, pkt_mb3;
 		__m128i staterr, sterr_tmp1, sterr_tmp2;
 		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
@@ -463,8 +504,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		/* C.1 4=>2 filter staterr info only */
 		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
 
-		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
-
 		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
 		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
 		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
@@ -479,21 +518,21 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 			/* load bottom half of every 32B desc */
-			const __m128i raw_desc_bh3 =
+			descs_bh[3] =
 				_mm_load_si128
 					((void *)(&rxdp[3].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh2 =
+			descs_bh[2] =
 				_mm_load_si128
 					((void *)(&rxdp[2].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh1 =
+			descs_bh[1] =
 				_mm_load_si128
 					((void *)(&rxdp[1].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh0 =
+			descs_bh[0] =
 				_mm_load_si128
 					((void *)(&rxdp[0].wb.status_error1));
 
@@ -501,32 +540,59 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * to shift the 32b RSS hash value to the
 			 * highest 32b of each 128b before mask
 			 */
-			__m128i rss_hash3 =
-				_mm_slli_epi64(raw_desc_bh3, 32);
-			__m128i rss_hash2 =
-				_mm_slli_epi64(raw_desc_bh2, 32);
-			__m128i rss_hash1 =
-				_mm_slli_epi64(raw_desc_bh1, 32);
-			__m128i rss_hash0 =
-				_mm_slli_epi64(raw_desc_bh0, 32);
-
-			__m128i rss_hash_msk =
-				_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
-
-			rss_hash3 = _mm_and_si128
-					(rss_hash3, rss_hash_msk);
-			rss_hash2 = _mm_and_si128
-					(rss_hash2, rss_hash_msk);
-			rss_hash1 = _mm_and_si128
-					(rss_hash1, rss_hash_msk);
-			rss_hash0 = _mm_and_si128
-					(rss_hash0, rss_hash_msk);
-
-			pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
-			pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
-			pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
-			pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
-		} /* if() on RSS hash parsing */
+			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+				__m128i rss_hash3 =
+					_mm_slli_epi64(descs_bh[3], 32);
+				__m128i rss_hash2 =
+					_mm_slli_epi64(descs_bh[2], 32);
+				__m128i rss_hash1 =
+					_mm_slli_epi64(descs_bh[1], 32);
+				__m128i rss_hash0 =
+					_mm_slli_epi64(descs_bh[0], 32);
+
+				__m128i rss_hash_msk =
+					_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
+
+				rss_hash3 = _mm_and_si128
+						(rss_hash3, rss_hash_msk);
+				rss_hash2 = _mm_and_si128
+						(rss_hash2, rss_hash_msk);
+				rss_hash1 = _mm_and_si128
+						(rss_hash1, rss_hash_msk);
+				rss_hash0 = _mm_and_si128
+						(rss_hash0, rss_hash_msk);
+
+				pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
+				pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
+				pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
+				pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
+			} /* if() on RSS hash parsing */
+
+			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+									/* L2TAG2_2 */
+				__m128i vlan_tci3 = _mm_slli_si128(descs_bh[3], 4);
+				__m128i vlan_tci2 = _mm_slli_si128(descs_bh[2], 4);
+				__m128i vlan_tci1 = _mm_slli_si128(descs_bh[1], 4);
+				__m128i vlan_tci0 = _mm_slli_si128(descs_bh[0], 4);
+
+				const __m128i vlan_tci_msk = _mm_set_epi32(0, 0xFFFF0000, 0, 0);
+
+				vlan_tci3 = _mm_and_si128(vlan_tci3, vlan_tci_msk);
+				vlan_tci2 = _mm_and_si128(vlan_tci2, vlan_tci_msk);
+				vlan_tci1 = _mm_and_si128(vlan_tci1, vlan_tci_msk);
+				vlan_tci0 = _mm_and_si128(vlan_tci0, vlan_tci_msk);
+
+				pkt_mb3 = _mm_or_si128(pkt_mb3, vlan_tci3);
+				pkt_mb2 = _mm_or_si128(pkt_mb2, vlan_tci2);
+				pkt_mb1 = _mm_or_si128(pkt_mb1, vlan_tci1);
+				pkt_mb0 = _mm_or_si128(pkt_mb0, vlan_tci0);
+				}
+		ice_rx_desc_to_olflags_v(rxq, descs, descs_bh, &rx_pkts[pos]);
+		}
+#else
+		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
 #endif
 
 		/* C.2 get 4 pkts staterr value  */
-- 
2.34.1



* RE: [PATCH v4 2/2] net/ice: fix vlan offload
  2022-10-26 17:10 ` [PATCH v4 2/2] net/ice: fix vlan offload Mingjin Ye
@ 2022-10-27  8:36   ` Huang, ZhiminX
  0 siblings, 0 replies; 36+ messages in thread
From: Huang, ZhiminX @ 2022-10-27  8:36 UTC (permalink / raw)
  To: Ye, MingjinX, dev
  Cc: stable, Zhou, YidingX, Ye, MingjinX, Richardson, Bruce,
	Konstantin Ananyev, Yang, Qiming, Zhang, Qi Z, Lu, Wenzhuo,
	Junyu Jiang, Rong, Leyi, Wisam Jaddo, Xia, Chenbo,
	Hemant Agrawal, Jerin Jacob, Ajit Khaparde

> -----Original Message-----
> From: Mingjin Ye <mingjinx.ye@intel.com>
> Sent: Thursday, October 27, 2022 1:10 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
> <mingjinx.ye@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>; Yang, Qiming
> <qiming.yang@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>; Junyu Jiang <junyux.jiang@intel.com>; Rong, Leyi
> <leyi.rong@intel.com>; Wisam Jaddo <wisamm@nvidia.com>; Xia, Chenbo
> <chenbo.xia@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Jerin Jacob <jerinj@marvell.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>
> Subject: [PATCH v4 2/2] net/ice: fix vlan offload
> 
> The vlan tag and flag in Rx descriptor are not processed on vector path, then
> the upper application can't fetch the tci from mbuf.
> 
> This commit add handling of vlan RX offloading.
> 
> Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
> Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
> Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
> Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
> Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
> Fixes: 295968d17407 ("ethdev: add namespace")
> Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> 
> v3:
> 	* Fix macros in ice_rxtx_vec_sse.c source file.
> v4:
> 	* Fix ice_rx_desc_to_olflags_v define in ice_rxtx_vec_sse.c source file.
> ---
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>




* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-26 17:10 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Mingjin Ye
  2022-10-26  9:52 ` lihuisong (C)
  2022-10-26 17:10 ` [PATCH v4 2/2] net/ice: fix vlan offload Mingjin Ye
@ 2022-10-27  8:36 ` Huang, ZhiminX
  2022-10-27 13:16   ` Singh, Aman Deep
  2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
  3 siblings, 1 reply; 36+ messages in thread
From: Huang, ZhiminX @ 2022-10-27  8:36 UTC (permalink / raw)
  To: Ye, MingjinX, dev
  Cc: stable, Zhou, YidingX, Ye, MingjinX, Singh, Aman Deep, Zhang, Yuying

> -----Original Message-----
> From: Mingjin Ye <mingjinx.ye@intel.com>
> Sent: Thursday, October 27, 2022 1:10 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
> <mingjinx.ye@intel.com>; Singh, Aman Deep <aman.deep.singh@intel.com>;
> Zhang, Yuying <yuying.zhang@intel.com>
> Subject: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> After setting vlan offload in testpmd, the result is not updated to rxq. Therefore,
> the queue needs to be reconfigured after executing the "vlan offload" related
> commands.
> 
> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>




* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-26  9:52 ` lihuisong (C)
@ 2022-10-27 11:02   ` Ye, MingjinX
  2022-10-28  2:09     ` lihuisong (C)
  0 siblings, 1 reply; 36+ messages in thread
From: Ye, MingjinX @ 2022-10-27 11:02 UTC (permalink / raw)
  To: lihuisong (C), dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying

Hi lihuisong,

This means that queue offloads need to be updated by recalling dev_configure
and setting up the target queues.
Can you tell me where the limitation is?
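
A rough sketch of the sequence I mean (application level; conf, rxconf, mp
and the queue counts are assumed to be set up already, and this is not the
exact testpmd code; error handling and Tx queue re-setup are omitted):

    #include <rte_ethdev.h>

    /* Sketch: re-apply the port config so queues pick up new offloads. */
    static int
    reapply_rx_offloads(uint16_t port_id, struct rte_eth_conf *conf,
                        struct rte_eth_rxconf *rxconf, struct rte_mempool *mp,
                        uint16_t nb_rxq, uint16_t nb_txq, uint16_t nb_desc)
    {
        uint16_t q;

        rte_eth_dev_stop(port_id);
        rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
        for (q = 0; q < nb_rxq; q++)
            rte_eth_rx_queue_setup(port_id, q, nb_desc,
                                   rte_eth_dev_socket_id(port_id),
                                   rxconf, mp);
        return rte_eth_dev_start(port_id);
    }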

Thanks,
Mingjin

> -----Original Message-----
> From: lihuisong (C) <lihuisong@huawei.com>
> Sent: 2022年10月26日 17:53
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> 
> 在 2022/10/27 1:10, Mingjin Ye 写道:
> > After setting vlan offload in testpmd, the result is not updated to
> > rxq. Therefore, the queue needs to be reconfigured after executing the
> > "vlan offload" related commands.
> >
> > Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> > ---
> >   app/test-pmd/cmdline.c | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > 17be2de402..ce125f549f 100644
> > --- a/app/test-pmd/cmdline.c
> > +++ b/app/test-pmd/cmdline.c
> > @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
> >   	else
> >   		vlan_extend_set(port_id, on);
> >
> > +	cmd_reconfig_device_queue(port_id, 1, 1);
> This means that queue offloads need to upadte by re-calling dev_configure
> and setup all queues, right?
> If it is, this adds a usage limitation.
> >   	return;
> >   }
> >


* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-27  8:36 ` [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Huang, ZhiminX
@ 2022-10-27 13:16   ` Singh, Aman Deep
  0 siblings, 0 replies; 36+ messages in thread
From: Singh, Aman Deep @ 2022-10-27 13:16 UTC (permalink / raw)
  To: Huang, ZhiminX, Ye, MingjinX, dev; +Cc: stable, Zhou, YidingX, Zhang, Yuying



On 10/27/2022 2:06 PM, Huang, ZhiminX wrote:
>> -----Original Message-----
>> From: Mingjin Ye <mingjinx.ye@intel.com>
>> Sent: Thursday, October 27, 2022 1:10 AM
>> To: dev@dpdk.org
>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
>> <mingjinx.ye@intel.com>; Singh, Aman Deep <aman.deep.singh@intel.com>;
>> Zhang, Yuying <yuying.zhang@intel.com>
>> Subject: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>
>> After setting vlan offload in testpmd, the result is not updated to rxq. Therefore,
>> the queue needs to be reconfigured after executing the "vlan offload" related
>> commands.
>>
>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>

Acked-by: Aman Singh <aman.deep.singh@intel.com>

>> ---
> Tested-by: Zhimin Huang <zhiminx.huang@intel.com >
>
>



* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-27 11:02   ` Ye, MingjinX
@ 2022-10-28  2:09     ` lihuisong (C)
  2022-11-03  1:28       ` Ye, MingjinX
  0 siblings, 1 reply; 36+ messages in thread
From: lihuisong (C) @ 2022-10-28  2:09 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying


On 2022/10/27 19:02, Ye, MingjinX wrote:
> Hi lihuisong,
>
> This means that queue offloads need to update by recalling dev_configure
> and setup target queues.
Why not update queue offloads in PMD?
> Can you tell me, where is the limitation?
Judging by other Rx/Tx offload configurations, this may not be a limitation.
But it seems to create a dependency on how the user uses it.

Port VLAN related offloads are set by ethdev ops. There is no requirement
in the ethdev layer that the port be stopped when setting these offloads.
Now it depends on the user recalling dev_configure and setting up queues to
update queue offloads after setting these offloads.
>
> Thanks,
> Mingjin
>
>> -----Original Message-----
>> From: lihuisong (C) <lihuisong@huawei.com>
>> Sent: 2022年10月26日 17:53
>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
>> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>> <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>
>>
>> 在 2022/10/27 1:10, Mingjin Ye 写道:
>>> After setting vlan offload in testpmd, the result is not updated to
>>> rxq. Therefore, the queue needs to be reconfigured after executing the
>>> "vlan offload" related commands.
>>>
>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
>>> ---
>>>    app/test-pmd/cmdline.c | 1 +
>>>    1 file changed, 1 insertion(+)
>>>
>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
>>> 17be2de402..ce125f549f 100644
>>> --- a/app/test-pmd/cmdline.c
>>> +++ b/app/test-pmd/cmdline.c
>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
>>>    	else
>>>    		vlan_extend_set(port_id, on);
>>>
>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
In addition, I have some comments:
1) Normally, the parsed function of a testpmd command that needs to re-config
the port and queues has to check whether the port status is STOPPED. Why
don't you add this check?
If the check does not exist, queue offloads are not updated until the next
port stop/start command is executed. Right?

2) Why is the queue-based VLAN offload API not used?
   Like rte_eth_dev_set_vlan_strip_on_queue. It seems that this kind of API
   is dedicated to doing this.
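
For reference, a minimal sketch of the queue-level call I mean (assuming
the PMD implements the queue-based VLAN offload op):

    /* Enable VLAN stripping only on Rx queue 0 of port 0. */
    int ret = rte_eth_dev_set_vlan_strip_on_queue(0, 0, 1);

    if (ret != 0)
        printf("per-queue VLAN strip not supported: %d\n", ret);
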
>> This means that queue offloads need to upadte by re-calling dev_configure
>> and setup all queues, right?
>> If it is, this adds a usage limitation.
>>>    	return;
>>>    }
>>>


* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-10-28  2:09     ` lihuisong (C)
@ 2022-11-03  1:28       ` Ye, MingjinX
  2022-11-03  7:01         ` lihuisong (C)
  0 siblings, 1 reply; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-03  1:28 UTC (permalink / raw)
  To: lihuisong (C), dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying



> -----Original Message-----
> From: lihuisong (C) <lihuisong@huawei.com>
> Sent: 2022年10月28日 10:09
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> 
> 在 2022/10/27 19:02, Ye, MingjinX 写道:
> > Hi lihuisong,
> >
> > This means that queue offloads need to update by recalling
> > dev_configure and setup target queues.
> Why not update queue offloads in PMD?
> > Can you tell me, where is the limitation?
> According to other Rx/Tx offload configurations, this may not be a limitation.
> But it seems to create a dependency on user usage.
> 
> Port VLAN releated offloads are set by ethdev ops. There is no requirement
> in ehedev layer that this port needs to stopped when set these offloads.
> Now it depends on user does recall dev_configure and setup queues to
> update queue offloads because of setting these offloads.
> >
> > Thanks,
> > Mingjin
> >
> >> -----Original Message-----
> >> From: lihuisong (C) <lihuisong@huawei.com>
> >> Sent: 2022年10月26日 17:53
> >> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >> <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>
> >>
> >> 在 2022/10/27 1:10, Mingjin Ye 写道:
> >>> After setting vlan offload in testpmd, the result is not updated to
> >>> rxq. Therefore, the queue needs to be reconfigured after executing
> >>> the "vlan offload" related commands.
> >>>
> >>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> >>> ---
> >>>    app/test-pmd/cmdline.c | 1 +
> >>>    1 file changed, 1 insertion(+)
> >>>
> >>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> >>> 17be2de402..ce125f549f 100644
> >>> --- a/app/test-pmd/cmdline.c
> >>> +++ b/app/test-pmd/cmdline.c
> >>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
> >>>    	else
> >>>    		vlan_extend_set(port_id, on);
> >>>
> >>> +	cmd_reconfig_device_queue(port_id, 1, 1);
> In addition, I have some comments:
> 1) Normally, the parsed function of testpmd command needed to re-config
> port and queue needs to check if port status is STOPED. Why don't you add
> this check?
The check exists.
> If the check is not exist, queue offloads are not updated until the next port
> stop/start command is executed. Right?
yes
> 
> 2) Why is the queue-based VLAN offload API not used?
VLAN offload is a port-related configuration. If a single port is changed,
all of its associated queues need their configuration updated. Therefore,
there is no additional API to configure.
>     Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this kind of API
> is
>     dedicated to do this.
> >> This means that queue offloads need to upadte by re-calling
> >> dev_configure and setup all queues, right?
> >> If it is, this adds a usage limitation.
> >>>    	return;
> >>>    }
> >>>


* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-03  1:28       ` Ye, MingjinX
@ 2022-11-03  7:01         ` lihuisong (C)
  2022-11-04  8:21           ` Ye, MingjinX
  0 siblings, 1 reply; 36+ messages in thread
From: lihuisong (C) @ 2022-11-03  7:01 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying


On 2022/11/3 9:28, Ye, MingjinX wrote:
>
>> -----Original Message-----
>> From: lihuisong (C) <lihuisong@huawei.com>
>> Sent: 2022年10月28日 10:09
>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
>> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>> <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>
>>
>> 在 2022/10/27 19:02, Ye, MingjinX 写道:
>>> Hi lihuisong,
>>>
>>> This means that queue offloads need to update by recalling
>>> dev_configure and setup target queues.
>> Why not update queue offloads in PMD?
>>> Can you tell me, where is the limitation?
>> According to other Rx/Tx offload configurations, this may not be a limitation.
>> But it seems to create a dependency on user usage.
>>
>> Port VLAN releated offloads are set by ethdev ops. There is no requirement
>> in ehedev layer that this port needs to stopped when set these offloads.
>> Now it depends on user does recall dev_configure and setup queues to
>> update queue offloads because of setting these offloads.
>>> Thanks,
>>> Mingjin
>>>
>>>> -----Original Message-----
>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>> Sent: 2022年10月26日 17:53
>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
>>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>> <yuying.zhang@intel.com>
>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>
>>>>
>>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
>>>>> After setting vlan offload in testpmd, the result is not updated to
>>>>> rxq. Therefore, the queue needs to be reconfigured after executing
>>>>> the "vlan offload" related commands.
>>>>>
>>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
>>>>> Cc: stable@dpdk.org
>>>>>
>>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
>>>>> ---
>>>>>     app/test-pmd/cmdline.c | 1 +
>>>>>     1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
>>>>> 17be2de402..ce125f549f 100644
>>>>> --- a/app/test-pmd/cmdline.c
>>>>> +++ b/app/test-pmd/cmdline.c
>>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
>>>>>     	else
>>>>>     		vlan_extend_set(port_id, on);
>>>>>
>>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
>> In addition, I have some comments:
>> 1) Normally, the parsed function of testpmd command needed to re-config
>> port and queue needs to check if port status is STOPED. Why don't you add
>> this check?
> The check is exist.
Where is the check? Currently, it seems that this check does not exist
in this command's parsed function.
>> If the check is not exist, queue offloads are not updated until the next port
>> stop/start command is executed. Right?
> yes
>> 2) Why is the queue-based VLAN offload API not used?
> VLAN offload is a port-related configuration. If a single port is changed,
> the associated queue needs to be all updated in configuration. Therefore,
> there will be no additional api to configure.
>>      Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this kind of API
>> is
>>      dedicated to do this.
>>>> This means that queue offloads need to upadte by re-calling
>>>> dev_configure and setup all queues, right?
>>>> If it is, this adds a usage limitation.
>>>>>     	return;
>>>>>     }
>>>>>


* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-03  7:01         ` lihuisong (C)
@ 2022-11-04  8:21           ` Ye, MingjinX
  2022-11-04 10:17             ` lihuisong (C)
  0 siblings, 1 reply; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-04  8:21 UTC (permalink / raw)
  To: lihuisong (C), dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying



> -----Original Message-----
> From: lihuisong (C) <lihuisong@huawei.com>
> Sent: 2022年11月3日 15:01
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> 
> 在 2022/11/3 9:28, Ye, MingjinX 写道:
> >
> >> -----Original Message-----
> >> From: lihuisong (C) <lihuisong@huawei.com>
> >> Sent: 2022年10月28日 10:09
> >> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >> <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>
> >>
> >> 在 2022/10/27 19:02, Ye, MingjinX 写道:
> >>> Hi lihuisong,
> >>>
> >>> This means that queue offloads need to update by recalling
> >>> dev_configure and setup target queues.
> >> Why not update queue offloads in PMD?
> >>> Can you tell me, where is the limitation?
> >> According to other Rx/Tx offload configurations, this may not be a
> limitation.
> >> But it seems to create a dependency on user usage.
> >>
> >> Port VLAN releated offloads are set by ethdev ops. There is no
> >> requirement in ehedev layer that this port needs to stopped when set
> these offloads.
> >> Now it depends on user does recall dev_configure and setup queues to
> >> update queue offloads because of setting these offloads.
> >>> Thanks,
> >>> Mingjin
> >>>
> >>>> -----Original Message-----
> >>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>> Sent: 2022年10月26日 17:53
> >>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>> <yuying.zhang@intel.com>
> >>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>>>
> >>>>
> >>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
> >>>>> After setting vlan offload in testpmd, the result is not updated
> >>>>> to rxq. Therefore, the queue needs to be reconfigured after
> >>>>> executing the "vlan offload" related commands.
> >>>>>
> >>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> >>>>> Cc: stable@dpdk.org
> >>>>>
> >>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> >>>>> ---
> >>>>>     app/test-pmd/cmdline.c | 1 +
> >>>>>     1 file changed, 1 insertion(+)
> >>>>>
> >>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> >>>>> 17be2de402..ce125f549f 100644
> >>>>> --- a/app/test-pmd/cmdline.c
> >>>>> +++ b/app/test-pmd/cmdline.c
> >>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void
> *parsed_result,
> >>>>>     	else
> >>>>>     		vlan_extend_set(port_id, on);
> >>>>>
> >>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
> >> In addition, I have some comments:
> >> 1) Normally, the parsed function of testpmd command needed to
> >> re-config port and queue needs to check if port status is STOPED. Why
> >> don't you add this check?
> > The check is exist.
> Where is the check? Currently, it seems that this check does not exist in the
> this command parsed function.
The check whether the port is forwarding is in the source file test-pmd.c:2835.
> >> If the check is not exist, queue offloads are not updated until the
> >> next port stop/start command is executed. Right?
> > yes
> >> 2) Why is the queue-based VLAN offload API not used?
> > VLAN offload is a port-related configuration. If a single port is
> > changed, the associated queue needs to be all updated in
> > configuration. Therefore, there will be no additional api to configure.
> >>      Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this
> >> kind of API is
> >>      dedicated to do this.
> >>>> This means that queue offloads need to upadte by re-calling
> >>>> dev_configure and setup all queues, right?
> >>>> If it is, this adds a usage limitation.
> >>>>>     	return;
> >>>>>     }
> >>>>>


* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-04  8:21           ` Ye, MingjinX
@ 2022-11-04 10:17             ` lihuisong (C)
  2022-11-04 11:33               ` Ye, MingjinX
  0 siblings, 1 reply; 36+ messages in thread
From: lihuisong (C) @ 2022-11-04 10:17 UTC (permalink / raw)
  To: Ye, MingjinX, dev; +Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying


On 2022/11/4 16:21, Ye, MingjinX wrote:
>
>> -----Original Message-----
>> From: lihuisong (C) <lihuisong@huawei.com>
>> Sent: 2022年11月3日 15:01
>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
>> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>> <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>
>>
>> 在 2022/11/3 9:28, Ye, MingjinX 写道:
>>>> -----Original Message-----
>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>> Sent: 2022年10月28日 10:09
>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
>>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>> <yuying.zhang@intel.com>
>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>
>>>>
>>>> 在 2022/10/27 19:02, Ye, MingjinX 写道:
>>>>> Hi lihuisong,
>>>>>
>>>>> This means that queue offloads need to update by recalling
>>>>> dev_configure and setup target queues.
>>>> Why not update queue offloads in PMD?
>>>>> Can you tell me, where is the limitation?
>>>> According to other Rx/Tx offload configurations, this may not be a
>> limitation.
>>>> But it seems to create a dependency on user usage.
>>>>
>>>> Port VLAN releated offloads are set by ethdev ops. There is no
>>>> requirement in ehedev layer that this port needs to stopped when set
>> these offloads.
>>>> Now it depends on user does recall dev_configure and setup queues to
>>>> update queue offloads because of setting these offloads.
>>>>> Thanks,
>>>>> Mingjin
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>>>> Sent: 2022年10月26日 17:53
>>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
>>>>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>>>> <yuying.zhang@intel.com>
>>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>>>
>>>>>>
>>>>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
>>>>>>> After setting vlan offload in testpmd, the result is not updated
>>>>>>> to rxq. Therefore, the queue needs to be reconfigured after
>>>>>>> executing the "vlan offload" related commands.
>>>>>>>
>>>>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
>>>>>>> Cc: stable@dpdk.org
>>>>>>>
>>>>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
>>>>>>> ---
>>>>>>>      app/test-pmd/cmdline.c | 1 +
>>>>>>>      1 file changed, 1 insertion(+)
>>>>>>>
>>>>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
>>>>>>> 17be2de402..ce125f549f 100644
>>>>>>> --- a/app/test-pmd/cmdline.c
>>>>>>> +++ b/app/test-pmd/cmdline.c
>>>>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void
>> *parsed_result,
>>>>>>>      	else
>>>>>>>      		vlan_extend_set(port_id, on);
>>>>>>>
>>>>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
>>>> In addition, I have some comments:
>>>> 1) Normally, the parsed function of testpmd command needed to
>>>> re-config port and queue needs to check if port status is STOPED. Why
>>>> don't you add this check?
>>> The check is exist.
>> Where is the check? Currently, it seems that this check does not exist in the
>> this command parsed function.
> Check if the port is forwarded, in the source file test-pmd.c:2835.
I don't understand why you mention the check in start_port(). It should
be done in the command's parsed function, like other commands.
>>>> If the check is not exist, queue offloads are not updated until the
>>>> next port stop/start command is executed. Right?
>>> yes
>>>> 2) Why is the queue-based VLAN offload API not used?
>>> VLAN offload is a port-related configuration. If a single port is
>>> changed, the associated queue needs to be all updated in
>>> configuration. Therefore, there will be no additional api to configure.
>>>>       Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this
>>>> kind of API is
>>>>       dedicated to do this.
>>>>>> This means that queue offloads need to upadte by re-calling
>>>>>> dev_configure and setup all queues, right?
>>>>>> If it is, this adds a usage limitation.
>>>>>>>      	return;
>>>>>>>      }
>>>>>>>


* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-04 10:17             ` lihuisong (C)
@ 2022-11-04 11:33               ` Ye, MingjinX
  2022-11-06 10:32                 ` Andrew Rybchenko
  0 siblings, 1 reply; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-04 11:33 UTC (permalink / raw)
  To: lihuisong (C), dev, andrew.rybchenko
  Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying



> -----Original Message-----
> From: lihuisong (C) <lihuisong@huawei.com>
> Sent: 2022年11月4日 18:18
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> 
> 在 2022/11/4 16:21, Ye, MingjinX 写道:
> >
> >> -----Original Message-----
> >> From: lihuisong (C) <lihuisong@huawei.com>
> >> Sent: 2022年11月3日 15:01
> >> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >> <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>
> >>
> >> 在 2022/11/3 9:28, Ye, MingjinX 写道:
> >>>> -----Original Message-----
> >>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>> Sent: 2022年10月28日 10:09
> >>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>> <yuying.zhang@intel.com>
> >>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>>>
> >>>>
> >>>> 在 2022/10/27 19:02, Ye, MingjinX 写道:
> >>>>> Hi lihuisong,
> >>>>>
> >>>>> This means that queue offloads need to update by recalling
> >>>>> dev_configure and setup target queues.
> >>>> Why not update queue offloads in PMD?
> >>>>> Can you tell me, where is the limitation?
> >>>> According to other Rx/Tx offload configurations, this may not be a
> >> limitation.
> >>>> But it seems to create a dependency on user usage.
> >>>>
> >>>> Port VLAN releated offloads are set by ethdev ops. There is no
> >>>> requirement in ehedev layer that this port needs to stopped when
> >>>> set
> >> these offloads.
> >>>> Now it depends on user does recall dev_configure and setup queues
> >>>> to update queue offloads because of setting these offloads.
> >>>>> Thanks,
> >>>>> Mingjin
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>>>> Sent: 2022年10月26日 17:53
> >>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
> >>>>>> Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>>>> <yuying.zhang@intel.com>
> >>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>>>>>
> >>>>>>
> >>>>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
> >>>>>>> After setting vlan offload in testpmd, the result is not updated
> >>>>>>> to rxq. Therefore, the queue needs to be reconfigured after
> >>>>>>> executing the "vlan offload" related commands.
> >>>>>>>
> >>>>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> >>>>>>> Cc: stable@dpdk.org
> >>>>>>>
> >>>>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> >>>>>>> ---
> >>>>>>>      app/test-pmd/cmdline.c | 1 +
> >>>>>>>      1 file changed, 1 insertion(+)
> >>>>>>>
> >>>>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> >>>>>>> index 17be2de402..ce125f549f 100644
> >>>>>>> --- a/app/test-pmd/cmdline.c
> >>>>>>> +++ b/app/test-pmd/cmdline.c
> >>>>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void
> >> *parsed_result,
> >>>>>>>      	else
> >>>>>>>      		vlan_extend_set(port_id, on);
> >>>>>>>
> >>>>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
> >>>> In addition, I have some comments:
> >>>> 1) Normally, the parsed function of testpmd command needed to
> >>>> re-config port and queue needs to check if port status is STOPED.
> >>>> Why don't you add this check?
> >>> The check is exist.
> >> Where is the check? Currently, it seems that this check does not
> >> exist in the this command parsed function.
> > Check if the port is forwarded, in the source file test-pmd.c:2835.
> I don't understand why you mention the check in start_port(). It should be
> done in command parsed link other command.

Commands fall into two types: configuration commands and show commands.
show commands: there is no need to check whether the port has been stopped.
configuration commands: not all commands need the port to be stopped; the check is done where necessary.

Hi @andrew.rybchenko@oktetlabs.ru, can you please help review this patch? Thanks.

> >>>> If the check is not exist, queue offloads are not updated until the
> >>>> next port stop/start command is executed. Right?
> >>> yes
> >>>> 2) Why is the queue-based VLAN offload API not used?
> >>> VLAN offload is a port-related configuration. If a single port is
> >>> changed, the associated queue needs to be all updated in
> >>> configuration. Therefore, there will be no additional api to configure.
> >>>>       Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this
> >>>> kind of API is
> >>>>       dedicated to do this.
> >>>>>> This means that queue offloads need to upadte by re-calling
> >>>>>> dev_configure and setup all queues, right?
> >>>>>> If it is, this adds a usage limitation.
> >>>>>>>      	return;
> >>>>>>>      }
> >>>>>>>


* Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-04 11:33               ` Ye, MingjinX
@ 2022-11-06 10:32                 ` Andrew Rybchenko
  2022-11-07  7:18                   ` Ye, MingjinX
  0 siblings, 1 reply; 36+ messages in thread
From: Andrew Rybchenko @ 2022-11-06 10:32 UTC (permalink / raw)
  To: Ye, MingjinX, lihuisong (C), dev
  Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying

On 11/4/22 14:33, Ye, MingjinX wrote:
> 
> 
>> -----Original Message-----
>> From: lihuisong (C) <lihuisong@huawei.com>
>> Sent: 2022年11月4日 18:18
>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
>> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>> <yuying.zhang@intel.com>
>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>
>>
>> 在 2022/11/4 16:21, Ye, MingjinX 写道:
>>>
>>>> -----Original Message-----
>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>> Sent: 2022年11月3日 15:01
>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
>>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>> <yuying.zhang@intel.com>
>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>
>>>>
>>>> 在 2022/11/3 9:28, Ye, MingjinX 写道:
>>>>>> -----Original Message-----
>>>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>>>> Sent: 2022年10月28日 10:09
>>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
>>>>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>>>> <yuying.zhang@intel.com>
>>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>>>
>>>>>>
>>>>>> 在 2022/10/27 19:02, Ye, MingjinX 写道:
>>>>>>> Hi lihuisong,
>>>>>>>
>>>>>>> This means that queue offloads need to update by recalling
>>>>>>> dev_configure and setup target queues.
>>>>>> Why not update queue offloads in PMD?
>>>>>>> Can you tell me, where is the limitation?
>>>>>> According to other Rx/Tx offload configurations, this may not be a
>>>> limitation.
>>>>>> But it seems to create a dependency on user usage.
>>>>>>
>>>>>> Port VLAN releated offloads are set by ethdev ops. There is no
>>>>>> requirement in ehedev layer that this port needs to stopped when
>>>>>> set
>>>> these offloads.
>>>>>> Now it depends on user does recall dev_configure and setup queues
>>>>>> to update queue offloads because of setting these offloads.
>>>>>>> Thanks,
>>>>>>> Mingjin
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: lihuisong (C) <lihuisong@huawei.com>
>>>>>>>> Sent: 2022年10月26日 17:53
>>>>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
>>>>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
>>>>>>>> Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
>>>>>>>> <yuying.zhang@intel.com>
>>>>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
>>>>>>>>
>>>>>>>>
>>>>>>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
>>>>>>>>> After setting vlan offload in testpmd, the result is not updated
>>>>>>>>> to rxq. Therefore, the queue needs to be reconfigured after
>>>>>>>>> executing the "vlan offload" related commands.
>>>>>>>>>
>>>>>>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
>>>>>>>>> Cc: stable@dpdk.org
>>>>>>>>>
>>>>>>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
>>>>>>>>> ---
>>>>>>>>>       app/test-pmd/cmdline.c | 1 +
>>>>>>>>>       1 file changed, 1 insertion(+)
>>>>>>>>>
>>>>>>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
>>>>>>>>> index 17be2de402..ce125f549f 100644
>>>>>>>>> --- a/app/test-pmd/cmdline.c
>>>>>>>>> +++ b/app/test-pmd/cmdline.c
>>>>>>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void
>>>> *parsed_result,
>>>>>>>>>       	else
>>>>>>>>>       		vlan_extend_set(port_id, on);
>>>>>>>>>
>>>>>>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
>>>>>> In addition, I have some comments:
>>>>>> 1) Normally, the parsed function of testpmd command needed to
>>>>>> re-config port and queue needs to check if port status is STOPED.
>>>>>> Why don't you add this check?
>>>>> The check is exist.
>>>> Where is the check? Currently, it seems that this check does not
>>>> exist in the this command parsed function.
>>> Check if the port is forwarded, in the source file test-pmd.c:2835.
>> I don't understand why you mention the check in start_port(). It should be
>> done in command parsed link other command.
> 
> The command types include configuration and show these two types of commands.
> show commands: There is no need to judge whether the port has stopped working.
> configuration commands: Not all commands need to stop the port, there will be judgment if necessary.
> 
> Hi, @andrew.rybchenko@oktetlabs.ru can you please help review this patch? Thanks.

cmd_vlan_offload_parsed() goes down to the following ethdev API
functions:
 - rte_eth_dev_set_vlan_strip_on_queue()
 - rte_eth_dev_set_vlan_offload()
These functions may change settings while the port is started, running
and processing traffic. And, as far as I know, that is the primary goal
of these functions. So, we should not require the application to stop
and reconfigure the port to apply these settings.

It is an interesting question what should happen if the application
stops and starts the port back. As far as I can see it is not
documented and it should be improved. I'd say that a typical
application would expect that dynamically made changes still apply.
So, it should not be an application headache. The patch tries to take
care of it on the application side, and that is wrong from my point
of view.

Moreover, if we finally decide that the application must take care of
it itself, the second argument should be 1 in the stripq case only,
since the other cases do not reconfigure the Rx queue. However, it
will not help anyway, since rx_vlan_strip_set_on_queue() does not save
configuration changes in the testpmd structures.
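
For reference, a minimal sketch of that runtime usage (illustrative
only, not part of the patch): port_id and queue_id are placeholders,
and error handling is reduced to returning the ethdev status.

#include <rte_ethdev.h>

/* Illustrative sketch: enable VLAN stripping at run time through the
 * ethdev API, without stopping or reconfiguring the port. */
static int
enable_vlan_strip_at_runtime(uint16_t port_id, uint16_t queue_id)
{
	int mask, ret;

	mask = rte_eth_dev_get_vlan_offload(port_id);
	if (mask < 0)
		return mask;

	ret = rte_eth_dev_set_vlan_offload(port_id,
			mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
	if (ret != 0)
		return ret;

	/* per-queue variant, where the PMD supports it */
	return rte_eth_dev_set_vlan_strip_on_queue(port_id, queue_id, 1);
}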

> 
>>>>>> If the check is not exist, queue offloads are not updated until the
>>>>>> next port stop/start command is executed. Right?
>>>>> yes
>>>>>> 2) Why is the queue-based VLAN offload API not used?
>>>>> VLAN offload is a port-related configuration. If a single port is
>>>>> changed, the associated queue needs to be all updated in
>>>>> configuration. Therefore, there will be no additional api to configure.
>>>>>>        Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that this
>>>>>> kind of API is
>>>>>>        dedicated to do this.
>>>>>>>> This means that queue offloads need to upadte by re-calling
>>>>>>>> dev_configure and setup all queues, right?
>>>>>>>> If it is, this adds a usage limitation.
>>>>>>>>>       	return;
>>>>>>>>>       }
>>>>>>>>>



^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
  2022-11-06 10:32                 ` Andrew Rybchenko
@ 2022-11-07  7:18                   ` Ye, MingjinX
  0 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-07  7:18 UTC (permalink / raw)
  To: Andrew Rybchenko, lihuisong (C), dev
  Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: 2022年11月6日 18:33
> To: Ye, MingjinX <mingjinx.ye@intel.com>; lihuisong (C)
> <lihuisong@huawei.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh, Aman
> Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> <yuying.zhang@intel.com>
> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> On 11/4/22 14:33, Ye, MingjinX wrote:
> >
> >
> >> -----Original Message-----
> >> From: lihuisong (C) <lihuisong@huawei.com>
> >> Sent: 2022年11月4日 18:18
> >> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >> <yuying.zhang@intel.com>
> >> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>
> >>
> >> 在 2022/11/4 16:21, Ye, MingjinX 写道:
> >>>
> >>>> -----Original Message-----
> >>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>> Sent: 2022年11月3日 15:01
> >>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Singh,
> >>>> Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>> <yuying.zhang@intel.com>
> >>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>>>
> >>>>
> >>>> 在 2022/11/3 9:28, Ye, MingjinX 写道:
> >>>>>> -----Original Message-----
> >>>>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>>>> Sent: 2022年10月28日 10:09
> >>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
> >>>>>> Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>>>> <yuying.zhang@intel.com>
> >>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> >>>>>>
> >>>>>>
> >>>>>> 在 2022/10/27 19:02, Ye, MingjinX 写道:
> >>>>>>> Hi lihuisong,
> >>>>>>>
> >>>>>>> This means that queue offloads need to update by recalling
> >>>>>>> dev_configure and setup target queues.
> >>>>>> Why not update queue offloads in PMD?
> >>>>>>> Can you tell me, where is the limitation?
> >>>>>> According to other Rx/Tx offload configurations, this may not be
> >>>>>> a
> >>>> limitation.
> >>>>>> But it seems to create a dependency on user usage.
> >>>>>>
> >>>>>> Port VLAN releated offloads are set by ethdev ops. There is no
> >>>>>> requirement in ehedev layer that this port needs to stopped when
> >>>>>> set
> >>>> these offloads.
> >>>>>> Now it depends on user does recall dev_configure and setup queues
> >>>>>> to update queue offloads because of setting these offloads.
> >>>>>>> Thanks,
> >>>>>>> Mingjin
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: lihuisong (C) <lihuisong@huawei.com>
> >>>>>>>> Sent: 2022年10月26日 17:53
> >>>>>>>> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> >>>>>>>> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
> >>>>>>>> Singh, Aman Deep <aman.deep.singh@intel.com>; Zhang, Yuying
> >>>>>>>> <yuying.zhang@intel.com>
> >>>>>>>> Subject: Re: [PATCH v4 1/2] app/testpmd: fix vlan offload of
> >>>>>>>> rxq
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> 在 2022/10/27 1:10, Mingjin Ye 写道:
> >>>>>>>>> After setting vlan offload in testpmd, the result is not
> >>>>>>>>> updated to rxq. Therefore, the queue needs to be reconfigured
> >>>>>>>>> after executing the "vlan offload" related commands.
> >>>>>>>>>
> >>>>>>>>> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> >>>>>>>>> Cc: stable@dpdk.org
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> >>>>>>>>> ---
> >>>>>>>>>       app/test-pmd/cmdline.c | 1 +
> >>>>>>>>>       1 file changed, 1 insertion(+)
> >>>>>>>>>
> >>>>>>>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> >>>>>>>>> index 17be2de402..ce125f549f 100644
> >>>>>>>>> --- a/app/test-pmd/cmdline.c
> >>>>>>>>> +++ b/app/test-pmd/cmdline.c
> >>>>>>>>> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void
> >>>> *parsed_result,
> >>>>>>>>>       	else
> >>>>>>>>>       		vlan_extend_set(port_id, on);
> >>>>>>>>>
> >>>>>>>>> +	cmd_reconfig_device_queue(port_id, 1, 1);
> >>>>>> In addition, I have some comments:
> >>>>>> 1) Normally, the parsed function of testpmd command needed to
> >>>>>> re-config port and queue needs to check if port status is STOPED.
> >>>>>> Why don't you add this check?
> >>>>> The check is exist.
> >>>> Where is the check? Currently, it seems that this check does not
> >>>> exist in the this command parsed function.
> >>> Check if the port is forwarded, in the source file test-pmd.c:2835.
> >> I don't understand why you mention the check in start_port(). It
> >> should be done in command parsed link other command.
> >
> > The command types include configuration and show these two types of
> commands.
> > show commands: There is no need to judge whether the port has stopped
> working.
> > configuration commands: Not all commands need to stop the port, there
> will be judgment if necessary.
> >
> > Hi, @andrew.rybchenko@oktetlabs.ru can you please help review this
> patch? Thanks.
> 
> cmd_vlan_offload_parsed() goes down to the follow ethdev API
> functions:
>   - rte_eth_dev_set_vlan_strip_on_queue()
>   - rte_eth_dev_set_vlan_offload()
> There functions may change settings when port is started, running and
> processing traffic. And, as far as I know, it is the primary goal of these
> functions. So, we should not require application to do stop and reconfigure
> to apply these settings.
> 
> It is an interesting question what should happen if application stops and
> starts the port back. As far as I can see it is not documented and it should be
> improved. I'd say that typical application would expect that dynamically done
> changes still apply. So, it should not be an application headache. The patch
> tries to care about it on application side and it is wrong from my point of view.
> 
> Moreover, if we finally decide that application must care itself, the second
> argument should be 1 in stripq case only since other cases do not reconfigure
> Rx queue. However, it will not help anyway since
> rx_vlan_strip_set_on_queue() do not save configuration changes in testpmd
> structures.
Thanks for your answer. I will flush the queue offloads in the PMD as you suggested; a new patch will be provided later.
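
A rough sketch of that idea, mirroring what the v5 2/2 patch later in
this thread does; the helper name ice_sync_rxq_offloads() is
illustrative, not an existing function:

/* Propagate the port-level rxmode offloads to every Rx queue at the end
 * of the driver's vlan_offload_set callback, so the Rx path sees the
 * updated configuration. */
static void
ice_sync_rxq_offloads(struct rte_eth_dev *dev)
{
	struct ice_rx_queue *rxq;
	uint16_t i;

	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		rxq = dev->data->rx_queues[i];
		rxq->offloads = dev->data->dev_conf.rxmode.offloads;
	}
}
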
> 
> >
> >>>>>> If the check is not exist, queue offloads are not updated until
> >>>>>> the next port stop/start command is executed. Right?
> >>>>> yes
> >>>>>> 2) Why is the queue-based VLAN offload API not used?
> >>>>> VLAN offload is a port-related configuration. If a single port is
> >>>>> changed, the associated queue needs to be all updated in
> >>>>> configuration. Therefore, there will be no additional api to configure.
> >>>>>>        Like, rte_eth_dev_set_vlan_strip_on_queue. It seems that
> >>>>>> this kind of API is
> >>>>>>        dedicated to do this.
> >>>>>>>> This means that queue offloads need to upadte by re-calling
> >>>>>>>> dev_configure and setup all queues, right?
> >>>>>>>> If it is, this adds a usage limitation.
> >>>>>>>>>       	return;
> >>>>>>>>>       }
> >>>>>>>>>
> 


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v5 1/2] net/ice: fix vlan offload
  2022-10-26 17:10 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Mingjin Ye
                   ` (2 preceding siblings ...)
  2022-10-27  8:36 ` [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Huang, ZhiminX
@ 2022-11-08 13:28 ` Mingjin Ye
  2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
                     ` (2 more replies)
  3 siblings, 3 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-11-08 13:28 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Bruce Richardson,
	Konstantin Ananyev, Qi Zhang, Wenzhuo Lu, Junyu Jiang, Leyi Rong,
	Ajit Khaparde, Jerin Jacob, Rosen Xu, Hemant Agrawal,
	Wisam Jaddo

The VLAN tag and flag in the Rx descriptor are not processed on the
vector path, so the upper application can't fetch the TCI from the mbuf.

This patch adds handling of VLAN Rx offloading.
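
For context, a minimal sketch of how an application would read the
stripped tag after rte_eth_rx_burst(); port_id, queue_id and the burst
size are placeholders, not taken from the patch:

/* Illustrative sketch: the stripped VLAN tag is expected in
 * mbuf->vlan_tci when RTE_MBUF_F_RX_VLAN_STRIPPED is set in ol_flags. */
struct rte_mbuf *pkts[32];
uint16_t i, nb;

nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
for (i = 0; i < nb; i++) {
	if (pkts[i]->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
		printf("VLAN tci 0x%04x\n", pkts[i]->vlan_tci);
	rte_pktmbuf_free(pkts[i]);
}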

Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
Fixes: 295968d17407 ("ethdev: add namespace")
Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>

v3:
	* Fix macros in ice_rxtx_vec_sse.c source file.
v4:
	* Fix ice_rx_desc_to_olflags_v define in ice_rxtx_vec_sse.c source file.
---
 drivers/net/ice/ice_rxtx_vec_avx2.c   | 135 +++++++++++++++++-----
 drivers/net/ice/ice_rxtx_vec_avx512.c | 154 +++++++++++++++++++++-----
 drivers/net/ice/ice_rxtx_vec_sse.c    | 132 ++++++++++++++++------
 3 files changed, 332 insertions(+), 89 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 31d6af42fd..bddfd6cf65 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -529,33 +529,112 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
 				 */
-				__m256i rss_hash6_7 =
-					_mm256_slli_epi64(raw_desc_bh6_7, 32);
-				__m256i rss_hash4_5 =
-					_mm256_slli_epi64(raw_desc_bh4_5, 32);
-				__m256i rss_hash2_3 =
-					_mm256_slli_epi64(raw_desc_bh2_3, 32);
-				__m256i rss_hash0_1 =
-					_mm256_slli_epi64(raw_desc_bh0_1, 32);
-
-				__m256i rss_hash_msk =
-					_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
-							 0xFFFFFFFF, 0, 0, 0);
-
-				rss_hash6_7 = _mm256_and_si256
-						(rss_hash6_7, rss_hash_msk);
-				rss_hash4_5 = _mm256_and_si256
-						(rss_hash4_5, rss_hash_msk);
-				rss_hash2_3 = _mm256_and_si256
-						(rss_hash2_3, rss_hash_msk);
-				rss_hash0_1 = _mm256_and_si256
-						(rss_hash0_1, rss_hash_msk);
-
-				mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
-				mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
-				mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
-				mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
-			} /* if() on RSS hash parsing */
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					__m256i rss_hash6_7 =
+						_mm256_slli_epi64(raw_desc_bh6_7, 32);
+					__m256i rss_hash4_5 =
+						_mm256_slli_epi64(raw_desc_bh4_5, 32);
+					__m256i rss_hash2_3 =
+						_mm256_slli_epi64(raw_desc_bh2_3, 32);
+					__m256i rss_hash0_1 =
+						_mm256_slli_epi64(raw_desc_bh0_1, 32);
+
+					__m256i rss_hash_msk =
+						_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
+								0xFFFFFFFF, 0, 0, 0);
+
+					rss_hash6_7 = _mm256_and_si256
+							(rss_hash6_7, rss_hash_msk);
+					rss_hash4_5 = _mm256_and_si256
+							(rss_hash4_5, rss_hash_msk);
+					rss_hash2_3 = _mm256_and_si256
+							(rss_hash2_3, rss_hash_msk);
+					rss_hash0_1 = _mm256_and_si256
+							(rss_hash0_1, rss_hash_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
+				} /* if() on RSS hash parsing */
+
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_VLAN) {
+					/* merge the status/error-1 bits into one register */
+					const __m256i status1_4_7 =
+							_mm256_unpacklo_epi32(raw_desc_bh6_7,
+							raw_desc_bh4_5);
+					const __m256i status1_0_3 =
+							_mm256_unpacklo_epi32(raw_desc_bh2_3,
+							raw_desc_bh0_1);
+
+					const __m256i status1_0_7 =
+							_mm256_unpacklo_epi64(status1_4_7,
+							status1_0_3);
+
+					const __m256i l2tag2p_flag_mask =
+							_mm256_set1_epi32(1 << 11);
+
+					__m256i l2tag2p_flag_bits =
+							_mm256_and_si256
+							(status1_0_7, l2tag2p_flag_mask);
+
+					l2tag2p_flag_bits =
+							_mm256_srli_epi32(l2tag2p_flag_bits,
+									11);
+
+					__m256i vlan_flags = _mm256_setzero_si256();
+					const __m256i l2tag2_flags_shuf =
+							_mm256_set_epi8(0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									/* end up 128-bits */
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0, 0, 0,
+									0, 0,
+									RTE_MBUF_F_RX_VLAN |
+									RTE_MBUF_F_RX_VLAN_STRIPPED,
+									0);
+					vlan_flags =
+							_mm256_shuffle_epi8(l2tag2_flags_shuf,
+							l2tag2p_flag_bits);
+
+					/* merge with vlan_flags */
+					mbuf_flags = _mm256_or_si256
+							(mbuf_flags, vlan_flags);
+
+					/* L2TAG2_2 */
+					__m256i vlan_tci6_7 =
+							_mm256_slli_si256(raw_desc_bh6_7, 4);
+					__m256i vlan_tci4_5 =
+							_mm256_slli_si256(raw_desc_bh4_5, 4);
+					__m256i vlan_tci2_3 =
+							_mm256_slli_si256(raw_desc_bh2_3, 4);
+					__m256i vlan_tci0_1 =
+							_mm256_slli_si256(raw_desc_bh0_1, 4);
+
+					const __m256i vlan_tci_msk =
+							_mm256_set_epi32(0, 0xFFFF0000, 0, 0,
+							0, 0xFFFF0000, 0, 0);
+
+					vlan_tci6_7 = _mm256_and_si256
+									(vlan_tci6_7, vlan_tci_msk);
+					vlan_tci4_5 = _mm256_and_si256
+									(vlan_tci4_5, vlan_tci_msk);
+					vlan_tci2_3 = _mm256_and_si256
+									(vlan_tci2_3, vlan_tci_msk);
+					vlan_tci0_1 = _mm256_and_si256
+									(vlan_tci0_1, vlan_tci_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, vlan_tci6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, vlan_tci4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, vlan_tci2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, vlan_tci0_1);
+				}
+			}
 #endif
 		}
 
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bfd5152df..5d5e4bf3cd 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -640,33 +640,131 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
 				 */
-				__m256i rss_hash6_7 =
-					_mm256_slli_epi64(raw_desc_bh6_7, 32);
-				__m256i rss_hash4_5 =
-					_mm256_slli_epi64(raw_desc_bh4_5, 32);
-				__m256i rss_hash2_3 =
-					_mm256_slli_epi64(raw_desc_bh2_3, 32);
-				__m256i rss_hash0_1 =
-					_mm256_slli_epi64(raw_desc_bh0_1, 32);
-
-				__m256i rss_hash_msk =
-					_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
-							 0xFFFFFFFF, 0, 0, 0);
-
-				rss_hash6_7 = _mm256_and_si256
-						(rss_hash6_7, rss_hash_msk);
-				rss_hash4_5 = _mm256_and_si256
-						(rss_hash4_5, rss_hash_msk);
-				rss_hash2_3 = _mm256_and_si256
-						(rss_hash2_3, rss_hash_msk);
-				rss_hash0_1 = _mm256_and_si256
-						(rss_hash0_1, rss_hash_msk);
-
-				mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
-				mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
-				mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
-				mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
-			} /* if() on RSS hash parsing */
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					__m256i rss_hash6_7 =
+						_mm256_slli_epi64(raw_desc_bh6_7, 32);
+					__m256i rss_hash4_5 =
+						_mm256_slli_epi64(raw_desc_bh4_5, 32);
+					__m256i rss_hash2_3 =
+						_mm256_slli_epi64(raw_desc_bh2_3, 32);
+					__m256i rss_hash0_1 =
+						_mm256_slli_epi64(raw_desc_bh0_1, 32);
+
+					__m256i rss_hash_msk =
+						_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
+								0xFFFFFFFF, 0, 0, 0);
+
+					rss_hash6_7 = _mm256_and_si256
+							(rss_hash6_7, rss_hash_msk);
+					rss_hash4_5 = _mm256_and_si256
+							(rss_hash4_5, rss_hash_msk);
+					rss_hash2_3 = _mm256_and_si256
+							(rss_hash2_3, rss_hash_msk);
+					rss_hash0_1 = _mm256_and_si256
+							(rss_hash0_1, rss_hash_msk);
+
+					mb6_7 = _mm256_or_si256(mb6_7, rss_hash6_7);
+					mb4_5 = _mm256_or_si256(mb4_5, rss_hash4_5);
+					mb2_3 = _mm256_or_si256(mb2_3, rss_hash2_3);
+					mb0_1 = _mm256_or_si256(mb0_1, rss_hash0_1);
+				} /* if() on RSS hash parsing */
+
+				if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+					/* merge the status/error-1 bits into one register */
+					const __m256i status1_4_7 =
+						_mm256_unpacklo_epi32
+						(raw_desc_bh6_7,
+						raw_desc_bh4_5);
+					const __m256i status1_0_3 =
+						_mm256_unpacklo_epi32
+						(raw_desc_bh2_3,
+						raw_desc_bh0_1);
+
+					const __m256i status1_0_7 =
+						_mm256_unpacklo_epi64
+						(status1_4_7, status1_0_3);
+
+					const __m256i l2tag2p_flag_mask =
+						_mm256_set1_epi32
+						(1 << 11);
+
+					__m256i l2tag2p_flag_bits =
+						_mm256_and_si256
+						(status1_0_7,
+						l2tag2p_flag_mask);
+
+					l2tag2p_flag_bits =
+						_mm256_srli_epi32
+						(l2tag2p_flag_bits,
+						11);
+					const __m256i l2tag2_flags_shuf =
+						_mm256_set_epi8
+							(0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							/* end up 128-bits */
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0, 0, 0,
+							0, 0,
+							RTE_MBUF_F_RX_VLAN |
+							RTE_MBUF_F_RX_VLAN_STRIPPED,
+							0);
+					__m256i vlan_flags =
+						_mm256_shuffle_epi8
+							(l2tag2_flags_shuf,
+							l2tag2p_flag_bits);
+
+					/* merge with vlan_flags */
+					mbuf_flags = _mm256_or_si256
+							(mbuf_flags,
+							vlan_flags);
+
+					/* L2TAG2_2 */
+					__m256i vlan_tci6_7 =
+						_mm256_slli_si256
+							(raw_desc_bh6_7, 4);
+					__m256i vlan_tci4_5 =
+						_mm256_slli_si256
+							(raw_desc_bh4_5, 4);
+					__m256i vlan_tci2_3 =
+						_mm256_slli_si256
+							(raw_desc_bh2_3, 4);
+					__m256i vlan_tci0_1 =
+						_mm256_slli_si256
+							(raw_desc_bh0_1, 4);
+
+					const __m256i vlan_tci_msk =
+						_mm256_set_epi32
+						(0, 0xFFFF0000, 0, 0,
+						0, 0xFFFF0000, 0, 0);
+
+					vlan_tci6_7 = _mm256_and_si256
+							(vlan_tci6_7,
+							vlan_tci_msk);
+					vlan_tci4_5 = _mm256_and_si256
+							(vlan_tci4_5,
+							vlan_tci_msk);
+					vlan_tci2_3 = _mm256_and_si256
+							(vlan_tci2_3,
+							vlan_tci_msk);
+					vlan_tci0_1 = _mm256_and_si256
+							(vlan_tci0_1,
+							vlan_tci_msk);
+
+					mb6_7 = _mm256_or_si256
+							(mb6_7, vlan_tci6_7);
+					mb4_5 = _mm256_or_si256
+							(mb4_5, vlan_tci4_5);
+					mb2_3 = _mm256_or_si256
+							(mb2_3, vlan_tci2_3);
+					mb0_1 = _mm256_or_si256
+							(mb0_1, vlan_tci0_1);
+				}
+			}
 #endif
 		}
 
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index fd94cedde3..cc5b8510dc 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -100,9 +100,15 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
 	ICE_PCI_REG_WC_WRITE(rxq->qrx_tail, rx_id);
 }
 
+#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+static inline void
+ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4], __m128i descs_bh[4],
+			 struct rte_mbuf **rx_pkts)
+#else
 static inline void
 ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 			 struct rte_mbuf **rx_pkts)
+#endif
 {
 	const __m128i mbuf_init = _mm_set_epi64x(0, rxq->mbuf_initializer);
 	__m128i rearm0, rearm1, rearm2, rearm3;
@@ -214,6 +220,38 @@ ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
 	/* merge the flags */
 	flags = _mm_or_si128(flags, rss_vlan);
 
+ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+	if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+		const __m128i l2tag2_mask =
+			_mm_set1_epi32(1 << 11);
+		const __m128i vlan_tci0_1 =
+			_mm_unpacklo_epi32(descs_bh[0], descs_bh[1]);
+		const __m128i vlan_tci2_3 =
+			_mm_unpacklo_epi32(descs_bh[2], descs_bh[3]);
+		const __m128i vlan_tci0_3 =
+			_mm_unpacklo_epi64(vlan_tci0_1, vlan_tci2_3);
+
+		__m128i vlan_bits = _mm_and_si128(vlan_tci0_3, l2tag2_mask);
+
+		vlan_bits = _mm_srli_epi32(vlan_bits, 11);
+
+		const __m128i vlan_flags_shuf =
+			_mm_set_epi8(0, 0, 0, 0,
+					0, 0, 0, 0,
+					0, 0, 0, 0,
+					0, 0,
+					RTE_MBUF_F_RX_VLAN |
+					RTE_MBUF_F_RX_VLAN_STRIPPED,
+					0);
+
+		const __m128i vlan_flags = _mm_shuffle_epi8(vlan_flags_shuf, vlan_bits);
+
+		/* merge with vlan_flags */
+		flags = _mm_or_si128(flags, vlan_flags);
+	}
+#endif
+
 	if (rxq->fdir_enabled) {
 		const __m128i fdir_id0_1 =
 			_mm_unpackhi_epi32(descs[0], descs[1]);
@@ -405,6 +443,9 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 	     pos += ICE_DESCS_PER_LOOP,
 	     rxdp += ICE_DESCS_PER_LOOP) {
 		__m128i descs[ICE_DESCS_PER_LOOP];
+		#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
+		__m128i descs_bh[ICE_DESCS_PER_LOOP];
+		#endif
 		__m128i pkt_mb0, pkt_mb1, pkt_mb2, pkt_mb3;
 		__m128i staterr, sterr_tmp1, sterr_tmp2;
 		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
@@ -463,8 +504,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		/* C.1 4=>2 filter staterr info only */
 		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
 
-		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
-
 		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
 		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
 		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
@@ -479,21 +518,21 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+					(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
 			/* load bottom half of every 32B desc */
-			const __m128i raw_desc_bh3 =
+			descs_bh[3] =
 				_mm_load_si128
 					((void *)(&rxdp[3].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh2 =
+			descs_bh[2] =
 				_mm_load_si128
 					((void *)(&rxdp[2].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh1 =
+			descs_bh[1] =
 				_mm_load_si128
 					((void *)(&rxdp[1].wb.status_error1));
 			rte_compiler_barrier();
-			const __m128i raw_desc_bh0 =
+			descs_bh[0] =
 				_mm_load_si128
 					((void *)(&rxdp[0].wb.status_error1));
 
@@ -501,32 +540,59 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * to shift the 32b RSS hash value to the
 			 * highest 32b of each 128b before mask
 			 */
-			__m128i rss_hash3 =
-				_mm_slli_epi64(raw_desc_bh3, 32);
-			__m128i rss_hash2 =
-				_mm_slli_epi64(raw_desc_bh2, 32);
-			__m128i rss_hash1 =
-				_mm_slli_epi64(raw_desc_bh1, 32);
-			__m128i rss_hash0 =
-				_mm_slli_epi64(raw_desc_bh0, 32);
-
-			__m128i rss_hash_msk =
-				_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
-
-			rss_hash3 = _mm_and_si128
-					(rss_hash3, rss_hash_msk);
-			rss_hash2 = _mm_and_si128
-					(rss_hash2, rss_hash_msk);
-			rss_hash1 = _mm_and_si128
-					(rss_hash1, rss_hash_msk);
-			rss_hash0 = _mm_and_si128
-					(rss_hash0, rss_hash_msk);
-
-			pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
-			pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
-			pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
-			pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
-		} /* if() on RSS hash parsing */
+			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_RSS_HASH) {
+				__m128i rss_hash3 =
+					_mm_slli_epi64(descs_bh[3], 32);
+				__m128i rss_hash2 =
+					_mm_slli_epi64(descs_bh[2], 32);
+				__m128i rss_hash1 =
+					_mm_slli_epi64(descs_bh[1], 32);
+				__m128i rss_hash0 =
+					_mm_slli_epi64(descs_bh[0], 32);
+
+				__m128i rss_hash_msk =
+					_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
+
+				rss_hash3 = _mm_and_si128
+						(rss_hash3, rss_hash_msk);
+				rss_hash2 = _mm_and_si128
+						(rss_hash2, rss_hash_msk);
+				rss_hash1 = _mm_and_si128
+						(rss_hash1, rss_hash_msk);
+				rss_hash0 = _mm_and_si128
+						(rss_hash0, rss_hash_msk);
+
+				pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
+				pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
+				pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
+				pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
+			} /* if() on RSS hash parsing */
+
+			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
+							RTE_ETH_RX_OFFLOAD_VLAN) {
+									/* L2TAG2_2 */
+				__m128i vlan_tci3 = _mm_slli_si128(descs_bh[3], 4);
+				__m128i vlan_tci2 = _mm_slli_si128(descs_bh[2], 4);
+				__m128i vlan_tci1 = _mm_slli_si128(descs_bh[1], 4);
+				__m128i vlan_tci0 = _mm_slli_si128(descs_bh[0], 4);
+
+				const __m128i vlan_tci_msk = _mm_set_epi32(0, 0xFFFF0000, 0, 0);
+
+				vlan_tci3 = _mm_and_si128(vlan_tci3, vlan_tci_msk);
+				vlan_tci2 = _mm_and_si128(vlan_tci2, vlan_tci_msk);
+				vlan_tci1 = _mm_and_si128(vlan_tci1, vlan_tci_msk);
+				vlan_tci0 = _mm_and_si128(vlan_tci0, vlan_tci_msk);
+
+				pkt_mb3 = _mm_or_si128(pkt_mb3, vlan_tci3);
+				pkt_mb2 = _mm_or_si128(pkt_mb2, vlan_tci2);
+				pkt_mb1 = _mm_or_si128(pkt_mb1, vlan_tci1);
+				pkt_mb0 = _mm_or_si128(pkt_mb0, vlan_tci0);
+				}
+		ice_rx_desc_to_olflags_v(rxq, descs, descs_bh, &rx_pkts[pos]);
+		}
+#else
+		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
 #endif
 
 		/* C.2 get 4 pkts staterr value  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v5 2/2] net/ice: fix vlan offload of rxq
  2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
@ 2022-11-08 13:28   ` Mingjin Ye
  2022-11-09  1:52     ` Huang, ZhiminX
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
  2022-11-09  1:51   ` [PATCH v5 1/2] net/ice: fix vlan offload Huang, ZhiminX
  2022-11-11  3:34   ` Ye, MingjinX
  2 siblings, 2 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-11-08 13:28 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang,
	Ferruh Yigit, Jingjing Wu, Xiaoyun Li, Wenzhuo Lu

After setting "vlan offload" in the PMD, the configuration of the Rx
queues is not updated.

This patch syncs the rxmode offload configuration to the Rx queues.

Fixes: e0dcf94a0d7f ("net/ice: support VLAN ops")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 drivers/net/ice/ice_dcf_ethdev.c | 15 +++++++++++++++
 drivers/net/ice/ice_ethdev.c     |  7 +++++++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index dcbf2af5b0..c32bf4ec03 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1227,6 +1227,8 @@ dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	struct ice_dcf_hw *hw = &adapter->real_hw;
 	bool enable;
 	int err;
+	size_t queue_idx;
+	struct ice_rx_queue *rxq;
 
 	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
@@ -1245,6 +1247,11 @@ dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 			return -EIO;
 	}
 
+	for (queue_idx = 0; queue_idx < dev->data->nb_rx_queues; queue_idx++) {
+		rxq = dev->data->rx_queues[queue_idx];
+		rxq->offloads = rxmode->offloads;
+	}
+
 	return 0;
 }
 
@@ -1287,6 +1294,8 @@ dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct ice_dcf_adapter *adapter = dev->data->dev_private;
 	struct ice_dcf_hw *hw = &adapter->real_hw;
 	int err;
+	size_t queue_idx;
+	struct ice_rx_queue *rxq;
 
 	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2)
 		return dcf_dev_vlan_offload_set_v2(dev, mask);
@@ -1305,6 +1314,12 @@ dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		if (err)
 			return -EIO;
 	}
+
+	for (queue_idx = 0; queue_idx < dev->data->nb_rx_queues; queue_idx++) {
+		rxq = dev->data->rx_queues[queue_idx];
+		rxq->offloads = dev_conf->rxmode.offloads;
+	}
+
 	return 0;
 }
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 8618a3e6b7..5562ceb671 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -4501,6 +4501,8 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_vsi *vsi = pf->main_vsi;
 	struct rte_eth_rxmode *rxmode;
+	size_t queue_idx;
+	struct ice_rx_queue *rxq;
 
 	rxmode = &dev->data->dev_conf.rxmode;
 	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
@@ -4517,6 +4519,11 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			ice_vsi_config_vlan_stripping(vsi, false);
 	}
 
+	for (queue_idx = 0; queue_idx < dev->data->nb_rx_queues; queue_idx++) {
+		rxq = dev->data->rx_queues[queue_idx];
+		rxq->offloads = rxmode->offloads;
+	}
+
 	return 0;
 }
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v5 1/2] net/ice: fix vlan offload
  2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
  2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
@ 2022-11-09  1:51   ` Huang, ZhiminX
  2022-11-11  3:34   ` Ye, MingjinX
  2 siblings, 0 replies; 36+ messages in thread
From: Huang, ZhiminX @ 2022-11-09  1:51 UTC (permalink / raw)
  To: Ye, MingjinX, dev
  Cc: Yang, Qiming, stable, Zhou, YidingX, Ye, MingjinX, Richardson,
	Bruce, Konstantin Ananyev, Zhang, Qi Z, Lu, Wenzhuo, Junyu Jiang,
	Rong, Leyi, Ajit Khaparde, Jerin Jacob, Xu, Rosen,
	Hemant Agrawal, Wisam Jaddo

> -----Original Message-----
> From: Mingjin Ye <mingjinx.ye@intel.com>
> Sent: Tuesday, November 8, 2022 9:28 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Richardson,
> Bruce <bruce.richardson@intel.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Junyu Jiang <junyux.jiang@intel.com>;
> Rong, Leyi <leyi.rong@intel.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Jerin Jacob <jerinj@marvell.com>; Xu, Rosen
> <rosen.xu@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> Wisam Jaddo <wisamm@nvidia.com>
> Subject: [PATCH v5 1/2] net/ice: fix vlan offload
> 
> The vlan tag and flag in Rx descriptor are not processed on vector path, then
> the upper application can't fetch the tci from mbuf.
> 
> This patch is to add handling of vlan RX offloading.
> 
> Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
> Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
> Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
> Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
> Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
> Fixes: 295968d17407 ("ethdev: add namespace")
> Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> 
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>



^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v5 2/2] net/ice: fix vlan offload of rxq
  2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
@ 2022-11-09  1:52     ` Huang, ZhiminX
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
  1 sibling, 0 replies; 36+ messages in thread
From: Huang, ZhiminX @ 2022-11-09  1:52 UTC (permalink / raw)
  To: Ye, MingjinX, dev
  Cc: Yang, Qiming, stable, Zhou, YidingX, Ye, MingjinX, Zhang, Qi Z,
	Ferruh Yigit, Wu, Jingjing, Li, Xiaoyun, Lu, Wenzhuo

> -----Original Message-----
> From: Mingjin Ye <mingjinx.ye@intel.com>
> Sent: Tuesday, November 8, 2022 9:28 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Li, Xiaoyun <xiaoyun.li@intel.com>; Lu, Wenzhuo
> <wenzhuo.lu@intel.com>
> Subject: [PATCH v5 2/2] net/ice: fix vlan offload of rxq
> 
> After setting "vlan offload" in pmd, the configuration of rxq is not updated.
> 
> This patch is to sync the rxmode offload config with rxq.
> 
> Fixes: e0dcf94a0d7f ("net/ice: support VLAN ops")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
Tested-by: Zhimin Huang <zhiminx.huang@intel.com>



^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v5 1/2] net/ice: fix vlan offload
  2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
  2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
  2022-11-09  1:51   ` [PATCH v5 1/2] net/ice: fix vlan offload Huang, ZhiminX
@ 2022-11-11  3:34   ` Ye, MingjinX
  2 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-11  3:34 UTC (permalink / raw)
  To: thomas, Zhang, Qi Z, Yang, Qiming
  Cc: dev, stable, Zhou,  YidingX, Richardson, Bruce,
	Konstantin Ananyev, Lu, Wenzhuo, Junyu Jiang, Rong, Leyi,
	Ajit Khaparde, Jerin Jacob, Xu, Rosen, Hemant Agrawal,
	Wisam Jaddo

Hi ALL,

Could you please review and provide suggestions if any.

Thanks,
Mingjin

> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: 2022年11月8日 21:28
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>;
> Richardson, Bruce <bruce.richardson@intel.com>; Konstantin Ananyev
> <konstantin.v.ananyev@yandex.ru>; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu,
> Wenzhuo <wenzhuo.lu@intel.com>; Junyu Jiang <junyux.jiang@intel.com>;
> Rong, Leyi <leyi.rong@intel.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Jerin Jacob <jerinj@marvell.com>; Xu,
> Rosen <rosen.xu@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Wisam Jaddo <wisamm@nvidia.com>
> Subject: [PATCH v5 1/2] net/ice: fix vlan offload
> 
> The vlan tag and flag in Rx descriptor are not processed on vector path, then
> the upper application can't fetch the tci from mbuf.
> 
> This patch is to add handling of vlan RX offloading.
> 
> Fixes: c68a52b8b38c ("net/ice: support vector SSE in Rx")
> Fixes: ece1f8a8f1c8 ("net/ice: switch to flexible descriptor in SSE path")
> Fixes: 12443386a0b0 ("net/ice: support flex Rx descriptor RxDID22")
> Fixes: 214f452f7d5f ("net/ice: add AVX2 offload Rx")
> Fixes: 7f85d5ebcfe1 ("net/ice: add AVX512 vector path")
> Fixes: 295968d17407 ("ethdev: add namespace")
> Fixes: 808a17b3c1e6 ("net/ice: add Rx AVX512 offload path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> 
> v3:
> 	* Fix macros in ice_rxtx_vec_sse.c source file.
> v4:
> 	* Fix ice_rx_desc_to_olflags_v define in ice_rxtx_vec_sse.c source
> file.
> ---
>  drivers/net/ice/ice_rxtx_vec_avx2.c   | 135 +++++++++++++++++-----
>  drivers/net/ice/ice_rxtx_vec_avx512.c | 154 +++++++++++++++++++++-----
>  drivers/net/ice/ice_rxtx_vec_sse.c    | 132 ++++++++++++++++------
>  3 files changed, 332 insertions(+), 89 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c
> b/drivers/net/ice/ice_rxtx_vec_avx2.c
> index 31d6af42fd..bddfd6cf65 100644
> --- a/drivers/net/ice/ice_rxtx_vec_avx2.c
> +++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
> @@ -474,7 +474,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue
> *rxq, struct rte_mbuf **rx_pkts,
>  			 * will cause performance drop to get into this
> context.
>  			 */
>  			if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> -					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +					(RTE_ETH_RX_OFFLOAD_RSS_HASH |
> RTE_ETH_RX_OFFLOAD_VLAN)) {
>  				/* load bottom half of every 32B desc */
>  				const __m128i raw_desc_bh7 =
>  					_mm_load_si128
> @@ -529,33 +529,112 @@ _ice_recv_raw_pkts_vec_avx2(struct
> ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  				 * to shift the 32b RSS hash value to the
>  				 * highest 32b of each 128b before mask
>  				 */
> -				__m256i rss_hash6_7 =
> -					_mm256_slli_epi64(raw_desc_bh6_7,
> 32);
> -				__m256i rss_hash4_5 =
> -					_mm256_slli_epi64(raw_desc_bh4_5,
> 32);
> -				__m256i rss_hash2_3 =
> -					_mm256_slli_epi64(raw_desc_bh2_3,
> 32);
> -				__m256i rss_hash0_1 =
> -					_mm256_slli_epi64(raw_desc_bh0_1,
> 32);
> -
> -				__m256i rss_hash_msk =
> -					_mm256_set_epi32(0xFFFFFFFF, 0, 0,
> 0,
> -							 0xFFFFFFFF, 0, 0, 0);
> -
> -				rss_hash6_7 = _mm256_and_si256
> -						(rss_hash6_7, rss_hash_msk);
> -				rss_hash4_5 = _mm256_and_si256
> -						(rss_hash4_5, rss_hash_msk);
> -				rss_hash2_3 = _mm256_and_si256
> -						(rss_hash2_3, rss_hash_msk);
> -				rss_hash0_1 = _mm256_and_si256
> -						(rss_hash0_1, rss_hash_msk);
> -
> -				mb6_7 = _mm256_or_si256(mb6_7,
> rss_hash6_7);
> -				mb4_5 = _mm256_or_si256(mb4_5,
> rss_hash4_5);
> -				mb2_3 = _mm256_or_si256(mb2_3,
> rss_hash2_3);
> -				mb0_1 = _mm256_or_si256(mb0_1,
> rss_hash0_1);
> -			} /* if() on RSS hash parsing */
> +				if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> +
> 	RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +					__m256i rss_hash6_7 =
> +
> 	_mm256_slli_epi64(raw_desc_bh6_7, 32);
> +					__m256i rss_hash4_5 =
> +
> 	_mm256_slli_epi64(raw_desc_bh4_5, 32);
> +					__m256i rss_hash2_3 =
> +
> 	_mm256_slli_epi64(raw_desc_bh2_3, 32);
> +					__m256i rss_hash0_1 =
> +
> 	_mm256_slli_epi64(raw_desc_bh0_1, 32);
> +
> +					__m256i rss_hash_msk =
> +
> 	_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
> +								0xFFFFFFFF, 0,
> 0, 0);
> +
> +					rss_hash6_7 = _mm256_and_si256
> +							(rss_hash6_7,
> rss_hash_msk);
> +					rss_hash4_5 = _mm256_and_si256
> +							(rss_hash4_5,
> rss_hash_msk);
> +					rss_hash2_3 = _mm256_and_si256
> +							(rss_hash2_3,
> rss_hash_msk);
> +					rss_hash0_1 = _mm256_and_si256
> +							(rss_hash0_1,
> rss_hash_msk);
> +
> +					mb6_7 = _mm256_or_si256(mb6_7,
> rss_hash6_7);
> +					mb4_5 = _mm256_or_si256(mb4_5,
> rss_hash4_5);
> +					mb2_3 = _mm256_or_si256(mb2_3,
> rss_hash2_3);
> +					mb0_1 = _mm256_or_si256(mb0_1,
> rss_hash0_1);
> +				} /* if() on RSS hash parsing */
> +
> +				if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> +
> 	RTE_ETH_RX_OFFLOAD_VLAN) {
> +					/* merge the status/error-1 bits into
> one register */
> +					const __m256i status1_4_7 =
> +
> 	_mm256_unpacklo_epi32(raw_desc_bh6_7,
> +							raw_desc_bh4_5);
> +					const __m256i status1_0_3 =
> +
> 	_mm256_unpacklo_epi32(raw_desc_bh2_3,
> +							raw_desc_bh0_1);
> +
> +					const __m256i status1_0_7 =
> +
> 	_mm256_unpacklo_epi64(status1_4_7,
> +							status1_0_3);
> +
> +					const __m256i l2tag2p_flag_mask =
> +
> 	_mm256_set1_epi32(1 << 11);
> +
> +					__m256i l2tag2p_flag_bits =
> +							_mm256_and_si256
> +							(status1_0_7,
> l2tag2p_flag_mask);
> +
> +					l2tag2p_flag_bits =
> +
> 	_mm256_srli_epi32(l2tag2p_flag_bits,
> +									11);
> +
> +					__m256i vlan_flags =
> _mm256_setzero_si256();
> +					const __m256i l2tag2_flags_shuf =
> +							_mm256_set_epi8(0,
> 0, 0, 0,
> +									0, 0, 0,
> 0,
> +									0, 0, 0,
> 0,
> +									0, 0, 0,
> 0,
> +									/*
> end up 128-bits */
> +									0, 0, 0,
> 0,
> +									0, 0, 0,
> 0,
> +									0, 0, 0,
> 0,
> +									0, 0,
> +
> 	RTE_MBUF_F_RX_VLAN |
> +
> 	RTE_MBUF_F_RX_VLAN_STRIPPED,
> +									0);
> +					vlan_flags =
> +
> 	_mm256_shuffle_epi8(l2tag2_flags_shuf,
> +							l2tag2p_flag_bits);
> +
> +					/* merge with vlan_flags */
> +					mbuf_flags = _mm256_or_si256
> +							(mbuf_flags,
> vlan_flags);
> +
> +					/* L2TAG2_2 */
> +					__m256i vlan_tci6_7 =
> +
> 	_mm256_slli_si256(raw_desc_bh6_7, 4);
> +					__m256i vlan_tci4_5 =
> +
> 	_mm256_slli_si256(raw_desc_bh4_5, 4);
> +					__m256i vlan_tci2_3 =
> +
> 	_mm256_slli_si256(raw_desc_bh2_3, 4);
> +					__m256i vlan_tci0_1 =
> +
> 	_mm256_slli_si256(raw_desc_bh0_1, 4);
> +
> +					const __m256i vlan_tci_msk =
> +							_mm256_set_epi32(0,
> 0xFFFF0000, 0, 0,
> +							0, 0xFFFF0000, 0, 0);
> +
> +					vlan_tci6_7 = _mm256_and_si256
> +
> 	(vlan_tci6_7, vlan_tci_msk);
> +					vlan_tci4_5 = _mm256_and_si256
> +
> 	(vlan_tci4_5, vlan_tci_msk);
> +					vlan_tci2_3 = _mm256_and_si256
> +
> 	(vlan_tci2_3, vlan_tci_msk);
> +					vlan_tci0_1 = _mm256_and_si256
> +
> 	(vlan_tci0_1, vlan_tci_msk);
> +
> +					mb6_7 = _mm256_or_si256(mb6_7,
> vlan_tci6_7);
> +					mb4_5 = _mm256_or_si256(mb4_5,
> vlan_tci4_5);
> +					mb2_3 = _mm256_or_si256(mb2_3,
> vlan_tci2_3);
> +					mb0_1 = _mm256_or_si256(mb0_1,
> vlan_tci0_1);
> +				}
> +			}
>  #endif
>  		}
> 
> diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c
> b/drivers/net/ice/ice_rxtx_vec_avx512.c
> index 5bfd5152df..5d5e4bf3cd 100644
> --- a/drivers/net/ice/ice_rxtx_vec_avx512.c
> +++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
> @@ -585,7 +585,7 @@ _ice_recv_raw_pkts_vec_avx512(struct
> ice_rx_queue *rxq,
>  			 * will cause performance drop to get into this
> context.
>  			 */
>  			if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> -					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +					(RTE_ETH_RX_OFFLOAD_RSS_HASH |
> RTE_ETH_RX_OFFLOAD_VLAN)) {
>  				/* load bottom half of every 32B desc */
>  				const __m128i raw_desc_bh7 =
>  					_mm_load_si128
> @@ -640,33 +640,131 @@ _ice_recv_raw_pkts_vec_avx512(struct
> ice_rx_queue *rxq,
>  				 * to shift the 32b RSS hash value to the
>  				 * highest 32b of each 128b before mask
>  				 */
> -				__m256i rss_hash6_7 =
> -					_mm256_slli_epi64(raw_desc_bh6_7,
> 32);
> -				__m256i rss_hash4_5 =
> -					_mm256_slli_epi64(raw_desc_bh4_5,
> 32);
> -				__m256i rss_hash2_3 =
> -					_mm256_slli_epi64(raw_desc_bh2_3,
> 32);
> -				__m256i rss_hash0_1 =
> -					_mm256_slli_epi64(raw_desc_bh0_1,
> 32);
> -
> -				__m256i rss_hash_msk =
> -					_mm256_set_epi32(0xFFFFFFFF, 0, 0,
> 0,
> -							 0xFFFFFFFF, 0, 0, 0);
> -
> -				rss_hash6_7 = _mm256_and_si256
> -						(rss_hash6_7, rss_hash_msk);
> -				rss_hash4_5 = _mm256_and_si256
> -						(rss_hash4_5, rss_hash_msk);
> -				rss_hash2_3 = _mm256_and_si256
> -						(rss_hash2_3, rss_hash_msk);
> -				rss_hash0_1 = _mm256_and_si256
> -						(rss_hash0_1, rss_hash_msk);
> -
> -				mb6_7 = _mm256_or_si256(mb6_7,
> rss_hash6_7);
> -				mb4_5 = _mm256_or_si256(mb4_5,
> rss_hash4_5);
> -				mb2_3 = _mm256_or_si256(mb2_3,
> rss_hash2_3);
> -				mb0_1 = _mm256_or_si256(mb0_1,
> rss_hash0_1);
> -			} /* if() on RSS hash parsing */
> +				if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> +
> 	RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +					__m256i rss_hash6_7 =
> +
> 	_mm256_slli_epi64(raw_desc_bh6_7, 32);
> +					__m256i rss_hash4_5 =
> +
> 	_mm256_slli_epi64(raw_desc_bh4_5, 32);
> +					__m256i rss_hash2_3 =
> +
> 	_mm256_slli_epi64(raw_desc_bh2_3, 32);
> +					__m256i rss_hash0_1 =
> +
> 	_mm256_slli_epi64(raw_desc_bh0_1, 32);
> +
> +					__m256i rss_hash_msk =
> +
> 	_mm256_set_epi32(0xFFFFFFFF, 0, 0, 0,
> +								0xFFFFFFFF, 0,
> 0, 0);
> +
> +					rss_hash6_7 = _mm256_and_si256
> +							(rss_hash6_7,
> rss_hash_msk);
> +					rss_hash4_5 = _mm256_and_si256
> +							(rss_hash4_5,
> rss_hash_msk);
> +					rss_hash2_3 = _mm256_and_si256
> +							(rss_hash2_3,
> rss_hash_msk);
> +					rss_hash0_1 = _mm256_and_si256
> +							(rss_hash0_1,
> rss_hash_msk);
> +
> +					mb6_7 = _mm256_or_si256(mb6_7,
> rss_hash6_7);
> +					mb4_5 = _mm256_or_si256(mb4_5,
> rss_hash4_5);
> +					mb2_3 = _mm256_or_si256(mb2_3,
> rss_hash2_3);
> +					mb0_1 = _mm256_or_si256(mb0_1,
> rss_hash0_1);
> +				} /* if() on RSS hash parsing */
> +
> +				if (rxq->vsi->adapter->pf.dev_data-
> >dev_conf.rxmode.offloads &
> +
> 	RTE_ETH_RX_OFFLOAD_VLAN) {
> +					/* merge the status/error-1 bits into
> one register */
> +					const __m256i status1_4_7 =
> +						_mm256_unpacklo_epi32
> +						(raw_desc_bh6_7,
> +						raw_desc_bh4_5);
> +					const __m256i status1_0_3 =
> +						_mm256_unpacklo_epi32
> +						(raw_desc_bh2_3,
> +						raw_desc_bh0_1);
> +
> +					const __m256i status1_0_7 =
> +						_mm256_unpacklo_epi64
> +						(status1_4_7, status1_0_3);
> +
> +					const __m256i l2tag2p_flag_mask =
> +						_mm256_set1_epi32
> +						(1 << 11);
> +
> +					__m256i l2tag2p_flag_bits =
> +						_mm256_and_si256
> +						(status1_0_7,
> +						l2tag2p_flag_mask);
> +
> +					l2tag2p_flag_bits =
> +						_mm256_srli_epi32
> +						(l2tag2p_flag_bits,
> +						11);
> +					const __m256i l2tag2_flags_shuf =
> +						_mm256_set_epi8
> +							(0, 0, 0, 0,
> +							0, 0, 0, 0,
> +							0, 0, 0, 0,
> +							0, 0, 0, 0,
> +							/* end up 128-bits */
> +							0, 0, 0, 0,
> +							0, 0, 0, 0,
> +							0, 0, 0, 0,
> +							0, 0,
> +
> 	RTE_MBUF_F_RX_VLAN |
> +
> 	RTE_MBUF_F_RX_VLAN_STRIPPED,
> +							0);
> +					__m256i vlan_flags =
> +						_mm256_shuffle_epi8
> +							(l2tag2_flags_shuf,
> +							l2tag2p_flag_bits);
> +
> +					/* merge with vlan_flags */
> +					mbuf_flags = _mm256_or_si256
> +							(mbuf_flags,
> +							vlan_flags);
> +
> +					/* L2TAG2_2 */
> +					__m256i vlan_tci6_7 =
> +						_mm256_slli_si256
> +							(raw_desc_bh6_7, 4);
> +					__m256i vlan_tci4_5 =
> +						_mm256_slli_si256
> +							(raw_desc_bh4_5, 4);
> +					__m256i vlan_tci2_3 =
> +						_mm256_slli_si256
> +							(raw_desc_bh2_3, 4);
> +					__m256i vlan_tci0_1 =
> +						_mm256_slli_si256
> +							(raw_desc_bh0_1, 4);
> +
> +					const __m256i vlan_tci_msk =
> +						_mm256_set_epi32
> +						(0, 0xFFFF0000, 0, 0,
> +						0, 0xFFFF0000, 0, 0);
> +
> +					vlan_tci6_7 = _mm256_and_si256
> +							(vlan_tci6_7,
> +							vlan_tci_msk);
> +					vlan_tci4_5 = _mm256_and_si256
> +							(vlan_tci4_5,
> +							vlan_tci_msk);
> +					vlan_tci2_3 = _mm256_and_si256
> +							(vlan_tci2_3,
> +							vlan_tci_msk);
> +					vlan_tci0_1 = _mm256_and_si256
> +							(vlan_tci0_1,
> +							vlan_tci_msk);
> +
> +					mb6_7 = _mm256_or_si256
> +							(mb6_7, vlan_tci6_7);
> +					mb4_5 = _mm256_or_si256
> +							(mb4_5, vlan_tci4_5);
> +					mb2_3 = _mm256_or_si256
> +							(mb2_3, vlan_tci2_3);
> +					mb0_1 = _mm256_or_si256
> +							(mb0_1, vlan_tci0_1);
> +				}
> +			}
>  #endif
>  		}
> 
> diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c
> b/drivers/net/ice/ice_rxtx_vec_sse.c
> index fd94cedde3..cc5b8510dc 100644
> --- a/drivers/net/ice/ice_rxtx_vec_sse.c
> +++ b/drivers/net/ice/ice_rxtx_vec_sse.c
> @@ -100,9 +100,15 @@ ice_rxq_rearm(struct ice_rx_queue *rxq)
>  	ICE_PCI_REG_WC_WRITE(rxq->qrx_tail, rx_id);  }
> 
> +#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
> +static inline void
> +ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
> __m128i descs_bh[4],
> +			 struct rte_mbuf **rx_pkts)
> +#else
>  static inline void
>  ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
>  			 struct rte_mbuf **rx_pkts)
> +#endif
>  {
>  	const __m128i mbuf_init = _mm_set_epi64x(0, rxq-
> >mbuf_initializer);
>  	__m128i rearm0, rearm1, rearm2, rearm3; @@ -214,6 +220,38 @@
> ice_rx_desc_to_olflags_v(struct ice_rx_queue *rxq, __m128i descs[4],
>  	/* merge the flags */
>  	flags = _mm_or_si128(flags, rss_vlan);
> 
> + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
> +	if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
> +
> 	RTE_ETH_RX_OFFLOAD_VLAN) {
> +		const __m128i l2tag2_mask =
> +			_mm_set1_epi32(1 << 11);
> +		const __m128i vlan_tci0_1 =
> +			_mm_unpacklo_epi32(descs_bh[0], descs_bh[1]);
> +		const __m128i vlan_tci2_3 =
> +			_mm_unpacklo_epi32(descs_bh[2], descs_bh[3]);
> +		const __m128i vlan_tci0_3 =
> +			_mm_unpacklo_epi64(vlan_tci0_1, vlan_tci2_3);
> +
> +		__m128i vlan_bits = _mm_and_si128(vlan_tci0_3,
> l2tag2_mask);
> +
> +		vlan_bits = _mm_srli_epi32(vlan_bits, 11);
> +
> +		const __m128i vlan_flags_shuf =
> +			_mm_set_epi8(0, 0, 0, 0,
> +					0, 0, 0, 0,
> +					0, 0, 0, 0,
> +					0, 0,
> +					RTE_MBUF_F_RX_VLAN |
> +					RTE_MBUF_F_RX_VLAN_STRIPPED,
> +					0);
> +
> +		const __m128i vlan_flags =
> _mm_shuffle_epi8(vlan_flags_shuf,
> +vlan_bits);
> +
> +		/* merge with vlan_flags */
> +		flags = _mm_or_si128(flags, vlan_flags);
> +	}
> +#endif
> +
>  	if (rxq->fdir_enabled) {
>  		const __m128i fdir_id0_1 =
>  			_mm_unpackhi_epi32(descs[0], descs[1]);
> @@ -405,6 +443,9 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	     pos += ICE_DESCS_PER_LOOP,
>  	     rxdp += ICE_DESCS_PER_LOOP) {
>  		__m128i descs[ICE_DESCS_PER_LOOP];
> +		#ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC
> +		__m128i descs_bh[ICE_DESCS_PER_LOOP];
> +		#endif
>  		__m128i pkt_mb0, pkt_mb1, pkt_mb2, pkt_mb3;
>  		__m128i staterr, sterr_tmp1, sterr_tmp2;
>  		/* 2 64 bit or 4 32 bit mbuf pointers in one XMM reg. */
> @@ -463,8 +504,6 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		/* C.1 4=>2 filter staterr info only */
>  		sterr_tmp1 = _mm_unpackhi_epi32(descs[1], descs[0]);
> 
> -		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
> -
>  		/* D.2 pkt 3,4 set in_port/nb_seg and remove crc */
>  		pkt_mb3 = _mm_add_epi16(pkt_mb3, crc_adjust);
>  		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
> @@ -479,21 +518,21 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		 * will cause performance drop to get into this context.
>  		 */
>  		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
> -				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +				(RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_VLAN)) {
>  			/* load bottom half of every 32B desc */
> -			const __m128i raw_desc_bh3 =
> +			descs_bh[3] =
>  				_mm_load_si128
>  					((void *)(&rxdp[3].wb.status_error1));
>  			rte_compiler_barrier();
> -			const __m128i raw_desc_bh2 =
> +			descs_bh[2] =
>  				_mm_load_si128
>  					((void *)(&rxdp[2].wb.status_error1));
>  			rte_compiler_barrier();
> -			const __m128i raw_desc_bh1 =
> +			descs_bh[1] =
>  				_mm_load_si128
>  					((void *)(&rxdp[1].wb.status_error1));
>  			rte_compiler_barrier();
> -			const __m128i raw_desc_bh0 =
> +			descs_bh[0] =
>  				_mm_load_si128
>  					((void *)(&rxdp[0].wb.status_error1));
> 
> @@ -501,32 +540,59 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  			 * to shift the 32b RSS hash value to the
>  			 * highest 32b of each 128b before mask
>  			 */
> -			__m128i rss_hash3 =
> -				_mm_slli_epi64(raw_desc_bh3, 32);
> -			__m128i rss_hash2 =
> -				_mm_slli_epi64(raw_desc_bh2, 32);
> -			__m128i rss_hash1 =
> -				_mm_slli_epi64(raw_desc_bh1, 32);
> -			__m128i rss_hash0 =
> -				_mm_slli_epi64(raw_desc_bh0, 32);
> -
> -			__m128i rss_hash_msk =
> -				_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
> -
> -			rss_hash3 = _mm_and_si128
> -					(rss_hash3, rss_hash_msk);
> -			rss_hash2 = _mm_and_si128
> -					(rss_hash2, rss_hash_msk);
> -			rss_hash1 = _mm_and_si128
> -					(rss_hash1, rss_hash_msk);
> -			rss_hash0 = _mm_and_si128
> -					(rss_hash0, rss_hash_msk);
> -
> -			pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
> -			pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
> -			pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
> -			pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
> -		} /* if() on RSS hash parsing */
> +			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
> +					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
> +				__m128i rss_hash3 =
> +					_mm_slli_epi64(descs_bh[3], 32);
> +				__m128i rss_hash2 =
> +					_mm_slli_epi64(descs_bh[2], 32);
> +				__m128i rss_hash1 =
> +					_mm_slli_epi64(descs_bh[1], 32);
> +				__m128i rss_hash0 =
> +					_mm_slli_epi64(descs_bh[0], 32);
> +
> +				__m128i rss_hash_msk =
> +					_mm_set_epi32(0xFFFFFFFF, 0, 0, 0);
> +
> +				rss_hash3 = _mm_and_si128
> +						(rss_hash3, rss_hash_msk);
> +				rss_hash2 = _mm_and_si128
> +						(rss_hash2, rss_hash_msk);
> +				rss_hash1 = _mm_and_si128
> +						(rss_hash1, rss_hash_msk);
> +				rss_hash0 = _mm_and_si128
> +						(rss_hash0, rss_hash_msk);
> +
> +				pkt_mb3 = _mm_or_si128(pkt_mb3, rss_hash3);
> +				pkt_mb2 = _mm_or_si128(pkt_mb2, rss_hash2);
> +				pkt_mb1 = _mm_or_si128(pkt_mb1, rss_hash1);
> +				pkt_mb0 = _mm_or_si128(pkt_mb0, rss_hash0);
> +			} /* if() on RSS hash parsing */
> +
> +			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
> +					RTE_ETH_RX_OFFLOAD_VLAN) {
> +				/* L2TAG2_2 */
> +				__m128i vlan_tci3 = _mm_slli_si128(descs_bh[3], 4);
> +				__m128i vlan_tci2 = _mm_slli_si128(descs_bh[2], 4);
> +				__m128i vlan_tci1 = _mm_slli_si128(descs_bh[1], 4);
> +				__m128i vlan_tci0 = _mm_slli_si128(descs_bh[0], 4);
> +
> +				const __m128i vlan_tci_msk = _mm_set_epi32(0, 0xFFFF0000, 0, 0);
> +
> +				vlan_tci3 = _mm_and_si128(vlan_tci3, vlan_tci_msk);
> +				vlan_tci2 = _mm_and_si128(vlan_tci2, vlan_tci_msk);
> +				vlan_tci1 = _mm_and_si128(vlan_tci1, vlan_tci_msk);
> +				vlan_tci0 = _mm_and_si128(vlan_tci0, vlan_tci_msk);
> +
> +				pkt_mb3 = _mm_or_si128(pkt_mb3, vlan_tci3);
> +				pkt_mb2 = _mm_or_si128(pkt_mb2, vlan_tci2);
> +				pkt_mb1 = _mm_or_si128(pkt_mb1, vlan_tci1);
> +				pkt_mb0 = _mm_or_si128(pkt_mb0, vlan_tci0);
> +			}
> +		ice_rx_desc_to_olflags_v(rxq, descs, descs_bh, &rx_pkts[pos]);
> +		}
> +#else
> +		ice_rx_desc_to_olflags_v(rxq, descs, &rx_pkts[pos]);
>  #endif
> 
>  		/* C.2 get 4 pkts staterr value  */
> --
> 2.34.1
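
For reference, a minimal sketch of how an application would consume the VLAN
information this vector path is expected to deliver, using only generic
ethdev/mbuf APIs; the burst size and the printf reporting are illustrative
placeholders, not part of the patch:

    #include <stdio.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Poll one Rx queue and report the stripped VLAN tag, if any. */
    static void
    poll_vlan(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[32];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

        for (uint16_t i = 0; i < nb_rx; i++) {
            struct rte_mbuf *m = pkts[i];

            /* Filled in by the Rx path when VLAN stripping is active. */
            if (m->ol_flags & RTE_MBUF_F_RX_VLAN_STRIPPED)
                printf("port %u: vlan_tci=0x%04x\n",
                       port_id, m->vlan_tci);
            rte_pktmbuf_free(m);
        }
    }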


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v6] doc: add PMD known issue
  2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
  2022-11-09  1:52     ` Huang, ZhiminX
@ 2022-11-21  2:54     ` Mingjin Ye
  2022-11-25  1:55       ` Ye, MingjinX
                         ` (4 more replies)
  1 sibling, 5 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-11-21  2:54 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang,
	Jie Zhou, Ranjit Menon, Ferruh Yigit, Pallavi Kadam

Add a known issue: Rx path dynamic routing is not supported for PMD.

Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 doc/guides/nics/ice.rst | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ce075e067c..60fd7834ed 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -395,3 +395,20 @@ file is used by both the kernel driver and the DPDK PMD.
 
       Windows support: The DDP package is not supported on Windows so,
       loading of the package is disabled on Windows.
+
+ice: Rx path is not supported after PF or DCF add vlan offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the PMD does not enable VLAN offload during initialization, it
+automatically selects an Rx path that does not support offload. Even
+if VLAN offload is enabled later through the API, it does not take
+effect, because the previously selected Rx path does not support
+VLAN offload.
+
+Dynamic switching of the Rx path is not supported. When an offload
+feature is toggled, the queues must be reconfigured before the change
+takes effect, which is extra work for the application to deal with.
+
+When VLAN offload is required on the PF or DCF, it must be configured
+up front through the startup parameters.
+
-- 
2.34.1
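
For illustration, a minimal sketch of what configuring the offload "up front
through the startup parameters" corresponds to at the ethdev level, assuming a
single port and that the device actually advertises the offload: the VLAN
offload is requested in rte_eth_conf before rte_eth_dev_configure(), so the PMD
can select an offload-capable Rx path when the queues are set up. The
descriptor count and NULL rxconf are placeholders.

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Request VLAN stripping before configure/queue setup so the PMD
     * picks an Rx path that supports it from the start.
     */
    static int
    configure_with_vlan_strip(uint16_t port_id, uint16_t nb_rxq,
                              uint16_t nb_txq, struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf conf;
        int ret;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

        ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
        if (ret < 0)
            return ret;

        for (uint16_t q = 0; q < nb_rxq; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, 1024,
                                         rte_eth_dev_socket_id(port_id),
                                         NULL, mb_pool);
            if (ret < 0)
                return ret;
        }
        /* Tx queue setup and rte_eth_dev_start() would follow. */
        return 0;
    }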


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v6] doc: add PMD known issue
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
@ 2022-11-25  1:55       ` Ye, MingjinX
  2022-12-09 10:20         ` Ye, MingjinX
  2022-12-13  1:41       ` Zhang, Qi Z
                         ` (3 subsequent siblings)
  4 siblings, 1 reply; 36+ messages in thread
From: Ye, MingjinX @ 2022-11-25  1:55 UTC (permalink / raw)
  To: dev, Yang, Qiming
  Cc: stable, Zhou, YidingX, Zhang, Qi Z, Jie Zhou, Menon, Ranjit,
	Ferruh Yigit, Kadam, Pallavi

Hi All,

Could you please review and provide suggestions if any.

Thanks,
Mingjin

> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: 2022年11月21日 10:55
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang, Qi
> Z <qi.z.zhang@intel.com>; Jie Zhou <jizh@microsoft.com>; Menon, Ranjit
> <ranjit.menon@intel.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Kadam,
> Pallavi <pallavi.kadam@intel.com>
> Subject: [PATCH v6] doc: add PMD known issue
> 
> Add a known issue: Rx path dynamic routing is not supported for PMD.
> 
> Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
>  doc/guides/nics/ice.rst | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index
> ce075e067c..60fd7834ed 100644
> --- a/doc/guides/nics/ice.rst
> +++ b/doc/guides/nics/ice.rst
> @@ -395,3 +395,20 @@ file is used by both the kernel driver and the DPDK
> PMD.
> 
>        Windows support: The DDP package is not supported on Windows so,
>        loading of the package is disabled on Windows.
> +
> +ice: Rx path is not supported after PF or DCF add vlan offload
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +If pmd does not enable Vlan offload during initialization, it will
> +automatically select Rx paths that do not support offload. Even if Vlan
> +offload is subsequently enabled through the API, Vlan offload will not
> +work because the selected Rx path does not support Vlan offload.
> +
> +Rx path dynamic routing is not supported. When the offload features is
> +switched, the queue needs to be reconfigured, then takes effect. It
> +would take additional workload for the network card to deal with.
> +
> +When applying VLAN offload on the PF or DCF, it must be configured
> +firstly by the startup parameters.
> +
> --
> 2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v6] doc: add PMD known issue
  2022-11-25  1:55       ` Ye, MingjinX
@ 2022-12-09 10:20         ` Ye, MingjinX
  0 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2022-12-09 10:20 UTC (permalink / raw)
  To: dev, Zhang, Qi Z
  Cc: stable, Zhou, YidingX, Jie Zhou, Menon, Ranjit, Ferruh Yigit,
	Kadam, Pallavi

Hi All,

Could you please review and provide suggestions if any.

Thanks,
Mingjin

> > -----Original Message-----
> > From: Ye, MingjinX <mingjinx.ye@intel.com>
> > Sent: 2022年11月21日 10:55
> > To: dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou,
> > YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
> > <mingjinx.ye@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Jie Zhou
> > <jizh@microsoft.com>; Menon, Ranjit <ranjit.menon@intel.com>; Ferruh
> > Yigit <ferruh.yigit@intel.com>; Kadam, Pallavi
> > <pallavi.kadam@intel.com>
> > Subject: [PATCH v6] doc: add PMD known issue
> >
> > Add a known issue: Rx path dynamic routing is not supported for PMD.
> >
> > Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> > ---
> >  doc/guides/nics/ice.rst | 17 +++++++++++++++++
> >  1 file changed, 17 insertions(+)
> >
> > diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index
> > ce075e067c..60fd7834ed 100644
> > --- a/doc/guides/nics/ice.rst
> > +++ b/doc/guides/nics/ice.rst
> > @@ -395,3 +395,20 @@ file is used by both the kernel driver and the
> > DPDK PMD.
> >
> >        Windows support: The DDP package is not supported on Windows so,
> >        loading of the package is disabled on Windows.
> > +
> > +ice: Rx path is not supported after PF or DCF add vlan offload
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +If pmd does not enable Vlan offload during initialization, it will
> > +automatically select Rx paths that do not support offload. Even if
> > +Vlan offload is subsequently enabled through the API, Vlan offload
> > +will not work because the selected Rx path does not support Vlan offload.
> > +
> > +Rx path dynamic routing is not supported. When the offload features
> > +is switched, the queue needs to be reconfigured, then takes effect.
> > +It would take additional workload for the network card to deal with.
> > +
> > +When applying VLAN offload on the PF or DCF, it must be configured
> > +firstly by the startup parameters.
> > +
> > --
> > 2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v6] doc: add PMD known issue
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
  2022-11-25  1:55       ` Ye, MingjinX
@ 2022-12-13  1:41       ` Zhang, Qi Z
  2022-12-13  4:25         ` Ye, MingjinX
  2022-12-23  7:32       ` [PATCH v7] " Mingjin Ye
                         ` (2 subsequent siblings)
  4 siblings, 1 reply; 36+ messages in thread
From: Zhang, Qi Z @ 2022-12-13  1:41 UTC (permalink / raw)
  To: Ye, MingjinX, dev
  Cc: Yang, Qiming, stable, Zhou, YidingX, Jie Zhou, Menon, Ranjit,
	Ferruh Yigit, Kadam, Pallavi



> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: Monday, November 21, 2022 10:55 AM
> To: dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Ye, MingjinX <mingjinx.ye@intel.com>; Zhang, Qi
> Z <qi.z.zhang@intel.com>; Jie Zhou <jizh@microsoft.com>; Menon, Ranjit
> <ranjit.menon@intel.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Kadam,
> Pallavi <pallavi.kadam@intel.com>
> Subject: [PATCH v6] doc: add PMD known issue
> 
> Add a known issue: Rx path dynamic routing is not supported for PMD.
> 
> Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
>  doc/guides/nics/ice.rst | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index
> ce075e067c..60fd7834ed 100644
> --- a/doc/guides/nics/ice.rst
> +++ b/doc/guides/nics/ice.rst
> @@ -395,3 +395,20 @@ file is used by both the kernel driver and the DPDK
> PMD.
> 
>        Windows support: The DDP package is not supported on Windows so,
>        loading of the package is disabled on Windows.
> +
> +ice: Rx path is not supported after PF or DCF add vlan offload
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +If pmd does not enable Vlan offload during initialization, it will
> +automatically select Rx paths that do not support offload. Even if Vlan
> +offload is subsequently enabled through the API, Vlan offload will not
> +work because the selected Rx path does not support Vlan offload.
> +
> +Rx path dynamic routing is not supported. When the offload features is
> +switched, the queue needs to be reconfigured, then takes effect. It
> +would take additional workload for the network card to deal with.

If the offload feature is switched, we need to re-configure the queue; this does not look like a limitation.
Better to describe this from the API calls.

> +
> +When applying VLAN offload on the PF or DCF, it must be configured
> +firstly by the startup parameters.

Better to explain precisely what the startup parameter is.

> +
> --
> 2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v6] doc: add PMD known issue
  2022-12-13  1:41       ` Zhang, Qi Z
@ 2022-12-13  4:25         ` Ye, MingjinX
  0 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2022-12-13  4:25 UTC (permalink / raw)
  To: Zhang, Qi Z, dev
  Cc: Yang, Qiming, stable, Zhou, YidingX, Jie Zhou, Menon, Ranjit,
	Ferruh Yigit, Kadam, Pallavi



> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: 2022年12月13日 9:41
> To: Ye, MingjinX <mingjinx.ye@intel.com>; dev@dpdk.org
> Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou, YidingX
> <yidingx.zhou@intel.com>; Jie Zhou <jizh@microsoft.com>; Menon, Ranjit
> <ranjit.menon@intel.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Kadam,
> Pallavi <pallavi.kadam@intel.com>
> Subject: RE: [PATCH v6] doc: add PMD known issue
> 
> 
> 
> > -----Original Message-----
> > From: Ye, MingjinX <mingjinx.ye@intel.com>
> > Sent: Monday, November 21, 2022 10:55 AM
> > To: dev@dpdk.org
> > Cc: Yang, Qiming <qiming.yang@intel.com>; stable@dpdk.org; Zhou,
> > YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
> > <mingjinx.ye@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Jie Zhou
> > <jizh@microsoft.com>; Menon, Ranjit <ranjit.menon@intel.com>; Ferruh
> > Yigit <ferruh.yigit@intel.com>; Kadam, Pallavi
> > <pallavi.kadam@intel.com>
> > Subject: [PATCH v6] doc: add PMD known issue
> >
> > Add a known issue: Rx path dynamic routing is not supported for PMD.
> >
> > Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> > ---
> >  doc/guides/nics/ice.rst | 17 +++++++++++++++++
> >  1 file changed, 17 insertions(+)
> >
> > diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index
> > ce075e067c..60fd7834ed 100644
> > --- a/doc/guides/nics/ice.rst
> > +++ b/doc/guides/nics/ice.rst
> > @@ -395,3 +395,20 @@ file is used by both the kernel driver and the
> > DPDK PMD.
> >
> >        Windows support: The DDP package is not supported on Windows so,
> >        loading of the package is disabled on Windows.
> > +
> > +ice: Rx path is not supported after PF or DCF add vlan offload
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +If pmd does not enable Vlan offload during initialization, it will
> > +automatically select Rx paths that do not support offload. Even if
> > +Vlan offload is subsequently enabled through the API, Vlan offload
> > +will not work because the selected Rx path does not support Vlan offload.
> > +
> > +Rx path dynamic routing is not supported. When the offload features
> > +is switched, the queue needs to be reconfigured, then takes effect.
> > +It would take additional workload for the network card to deal with.
> 
> If offload features is switched, we need to re-configure the queue, this does
> not looks like a limitation.

Lihuisong <lihuisong@huawei.com> thinks this is a limitation, and Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> agrees.


> Better to describe this from APIs calls

I will add a description.
> 
> > +
> > +When applying VLAN offload on the PF or DCF, it must be configured
> > +firstly by the startup parameters.
> 
> Better to explain what is the startup parameter precisely.

I will add a description of the startup parameters.
> 
> > +
> > --
> > 2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v7] doc: add PMD known issue
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
  2022-11-25  1:55       ` Ye, MingjinX
  2022-12-13  1:41       ` Zhang, Qi Z
@ 2022-12-23  7:32       ` Mingjin Ye
  2022-12-26  2:52       ` Mingjin Ye
  2022-12-27  9:00       ` Mingjin Ye
  4 siblings, 0 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-12-23  7:32 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang,
	Ranjit Menon, Jie Zhou, Pallavi Kadam, Ferruh Yigit

Add a known issue: Rx path dynamic routing is not supported for PMD.

Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 doc/guides/nics/ice.rst | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ce075e067c..53f2e9cb71 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -395,3 +395,26 @@ file is used by both the kernel driver and the DPDK PMD.
 
       Windows support: The DDP package is not supported on Windows so,
       loading of the package is disabled on Windows.
+
+ice: Rx path is not supported after PF or DCF add vlan offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the PMD does not enable VLAN offload during initialization, it
+automatically selects an Rx path that does not support offload. Even
+if VLAN offload is enabled later through the API, it does not take
+effect, because the previously selected Rx path does not support
+VLAN offload.
+
+cmd_vlan_offload_parsed() goes down to the following ethdev API functions:
+    - rte_eth_dev_set_vlan_strip_on_queue()
+    - rte_eth_dev_set_vlan_offload()
+These functions change the offload settings while the port is started,
+running and processing traffic. At that point ``rte_eth_rx_queue_setup``
+would be needed to re-route the Rx queue to an Rx path that supports the
+offload, but the original Rx path may still be handling packets, so the
+switch is not thread-safe.
+
+When offload is required on the PF or DCF, start the ``testpmd``
+application with the ``--rx-offloads`` startup parameter to force the
+DPDK library to choose the offload-capable Rx path by default.
+
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v7] doc: add PMD known issue
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
                         ` (2 preceding siblings ...)
  2022-12-23  7:32       ` [PATCH v7] " Mingjin Ye
@ 2022-12-26  2:52       ` Mingjin Ye
  2022-12-27  9:00       ` Mingjin Ye
  4 siblings, 0 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-12-26  2:52 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang,
	Jie Zhou, Ranjit Menon, Ferruh Yigit, Pallavi Kadam

Add a known issue: Rx path dynamic routing is not supported for PMD.

Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 doc/guides/nics/ice.rst | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ce075e067c..01fb96101f 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -395,3 +395,26 @@ file is used by both the kernel driver and the DPDK PMD.
 
       Windows support: The DDP package is not supported on Windows so,
       loading of the package is disabled on Windows.
+
+ice: Rx path is not supported after PF or DCF add vlan offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the PMD does not enable VLAN offload during initialization, it
+automatically selects an Rx path that does not support offload. Even
+if VLAN offload is enabled later through the API, it does not take
+effect, because the previously selected Rx path does not support
+VLAN offload.
+
+cmd_vlan_offload_parsed() goes down to the following ethdev API functions:
+    - rte_eth_dev_set_vlan_strip_on_queue()
+    - rte_eth_dev_set_vlan_offload()
+These functions change the offload settings while the port is started,
+running and processing traffic. At that point ``rte_eth_rx_queue_setup``
+would be needed to re-route the Rx queue to an Rx path that supports the
+offload, but the original Rx path may still be handling packets, so the
+switch is not thread-safe.
+
+When offload is required on the PF or DCF, start the ``testpmd``
+application with the ``--rx-offloads`` startup parameter to force the
+DPDK library to choose the offload-capable Rx path by default.
+
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v7] doc: add PMD known issue
  2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
                         ` (3 preceding siblings ...)
  2022-12-26  2:52       ` Mingjin Ye
@ 2022-12-27  9:00       ` Mingjin Ye
  2022-12-27 16:40         ` Stephen Hemminger
  2023-01-28  6:01         ` [PATCH v8] " Mingjin Ye
  4 siblings, 2 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-12-27  9:00 UTC (permalink / raw)
  To: dev
  Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang,
	Ranjit Menon, Ferruh Yigit, Jie Zhou, Pallavi Kadam

Add a known issue: Rx path dynamic routing is not supported for PMD.

Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 doc/guides/nics/ice.rst | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ce075e067c..a0739d81b1 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -395,3 +395,27 @@ file is used by both the kernel driver and the DPDK PMD.
 
       Windows support: The DDP package is not supported on Windows so,
       loading of the package is disabled on Windows.
+
+ice: Rx path is not supported after PF or DCF add vlan offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If the PMD does not enable VLAN offload during initialization, it
+automatically selects an Rx path that does not support offload. Even
+if VLAN offload is enabled later through the API, it does not take
+effect, because the previously selected Rx path does not support
+VLAN offload.
+
+cmd_vlan_offload_parsed() goes down to the following ethdev API functions:
+    - rte_eth_dev_set_vlan_strip_on_queue()
+    - rte_eth_dev_set_vlan_offload()
+
+These functions change the offload settings while the port is started,
+running and processing traffic. At that point ``rte_eth_rx_queue_setup``
+would be needed to re-route the Rx queue to an Rx path that supports the
+offload, but the original Rx path may still be handling packets, so the
+switch is not thread-safe.
+
+When offload is required on the PF or DCF, start the ``testpmd``
+application with the ``--rx-offloads`` startup parameter to force the
+DPDK library to choose the offload-capable Rx path by default.
+
-- 
2.34.1
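
As an illustration of the sequencing concern above, a sketch of the safer
alternative: stop the port, change the VLAN offload, set the Rx queue up again
and restart, so no Rx path switch happens under traffic. All calls are standard
ethdev APIs; the descriptor count and NULL rxconf are placeholders.

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Enable VLAN stripping with the port stopped, then rebuild the Rx
     * queue so the PMD can re-select an Rx path matching the new offloads.
     */
    static int
    enable_vlan_strip_stopped(uint16_t port_id, uint16_t queue_id,
                              struct rte_mempool *mb_pool)
    {
        int ret, vlan_mask;

        ret = rte_eth_dev_stop(port_id);
        if (ret < 0)
            return ret;

        vlan_mask = rte_eth_dev_get_vlan_offload(port_id);
        if (vlan_mask < 0)
            return vlan_mask;

        ret = rte_eth_dev_set_vlan_offload(port_id,
                                           vlan_mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
        if (ret < 0)
            return ret;

        ret = rte_eth_rx_queue_setup(port_id, queue_id, 1024,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mb_pool);
        if (ret < 0)
            return ret;

        return rte_eth_dev_start(port_id);
    }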


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v7] doc: add PMD known issue
  2022-12-27  9:00       ` Mingjin Ye
@ 2022-12-27 16:40         ` Stephen Hemminger
  2023-01-28  6:01         ` [PATCH v8] " Mingjin Ye
  1 sibling, 0 replies; 36+ messages in thread
From: Stephen Hemminger @ 2022-12-27 16:40 UTC (permalink / raw)
  To: Mingjin Ye
  Cc: dev, qiming.yang, stable, yidingx.zhou, Qi Zhang, Ranjit Menon,
	Ferruh Yigit, Jie Zhou, Pallavi Kadam

On Tue, 27 Dec 2022 17:00:40 +0800
Mingjin Ye <mingjinx.ye@intel.com> wrote:

> +
> +ice: Rx path is not supported after PF or DCF add vlan offload
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +If pmd does not enable Vlan offload during initialization, it will
> +automatically select Rx paths that do not support offload. Even if
> +Vlan offload is subsequently enabled through the API, Vlan offload
> +will not work because the selected Rx path does not support Vlan
> +offload.
> +
> +cmd_vlan_offload_parsed() goes down to the follow ethdev API functions:
> +    - rte_eth_dev_set_vlan_strip_on_queue()
> +    - rte_eth_dev_set_vlan_offload()
> +
> +These functions add offload settings when the port is started, running
> +and processing traffic. At this time, ``rte_eth_rx_queue_setup`` api is
> +needed to reroute rxq to the RX path with offload function. But at this
> +time, it is possible that the original Rx path is handling packages, so
> +this is not thread-safe.
> +
> +When applying offload on the PF or DCF, starting the ``testpmd``
> +application, use the ``--rx-offloads`` startup parameter to force the
> +dpdk lib to choose the Rx path with the offload function by default.
> +

This seems like just making excuses in the documentation for something
that should be fixed instead.

This situation is probably common to many PMD's.
Ideally, the drivers should reject changes to settings after device
is started if they can not support it.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v8] doc: add PMD known issue
  2022-12-27  9:00       ` Mingjin Ye
  2022-12-27 16:40         ` Stephen Hemminger
@ 2023-01-28  6:01         ` Mingjin Ye
  2023-01-28 17:17           ` Stephen Hemminger
  1 sibling, 1 reply; 36+ messages in thread
From: Mingjin Ye @ 2023-01-28  6:01 UTC (permalink / raw)
  To: dev; +Cc: qiming.yang, stable, yidingx.zhou, Mingjin Ye, Qi Zhang

Add a known issue: Rx path dynamic change is not supported for PMD.

Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 doc/guides/nics/ice.rst | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ce075e067c..115625523e 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -395,3 +395,15 @@ file is used by both the kernel driver and the DPDK PMD.
 
       Windows support: The DDP package is not supported on Windows so,
       loading of the package is disabled on Windows.
+
+ice: Rx path does not support dynamic change
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ice driver supports a fast Rx path and an offload Rx path. When the
+PMD is initialized, the fast Rx path is selected by default. If offload
+is enabled later through the API, it will not work because the fast Rx
+path is still in use.
+
+The ice driver does not support changing the Rx path after the
+application is initialized. If HW offload is required, the ``--rx-offloads``
+parameter should be used to select the offload Rx path from the start.
-- 
2.34.1
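
As a usage note, one possible way to start testpmd so that the offload Rx path
is selected up front; the EAL core/memory options and the device address are
placeholders, and 0x1 is assumed here to be the bit for
RTE_ETH_RX_OFFLOAD_VLAN_STRIP:

    ./dpdk-testpmd -l 0-3 -n 4 -a 0000:18:00.0 -- -i --rx-offloads=0x1
    testpmd> show port 0 rx_offload configuration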


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8] doc: add PMD known issue
  2023-01-28  6:01         ` [PATCH v8] " Mingjin Ye
@ 2023-01-28 17:17           ` Stephen Hemminger
  2023-02-02  2:30             ` Ye, MingjinX
  0 siblings, 1 reply; 36+ messages in thread
From: Stephen Hemminger @ 2023-01-28 17:17 UTC (permalink / raw)
  To: Mingjin Ye; +Cc: dev, qiming.yang, stable, yidingx.zhou, Qi Zhang

On Sat, 28 Jan 2023 06:01:38 +0000
Mingjin Ye <mingjinx.ye@intel.com> wrote:

> Add a known issue: Rx path dynamic change is not supported for PMD.
> 
> Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
>  doc/guides/nics/ice.rst | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
> index ce075e067c..115625523e 100644
> --- a/doc/guides/nics/ice.rst
> +++ b/doc/guides/nics/ice.rst
> @@ -395,3 +395,15 @@ file is used by both the kernel driver and the DPDK PMD.
>  
>        Windows support: The DDP package is not supported on Windows so,
>        loading of the package is disabled on Windows.
> +
> +ice: Rx path does not support dynamic change
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +The ice driver supports fast and offload rx path. When pmd is initialized,
> +the fast rx path is selected by default. Even if offload is subsequently
> +enabled through the API, which will not work because the past rx path is
> +still used.
> +
> +The ice driver does not support to change the rx path after application
> +is initialized. If HW offload is required, the ``--rx-offloads`` parameter
> +should be used to choose the offload Rx path by default.

Is this when the device is stopped, or running?
Dynamic configuration of offload parameters is not safe on many devices.
Usually the device driver requires the device not be started to change offloads.

The driver should reject in the API things it does not support.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v8] doc: add PMD known issue
  2023-01-28 17:17           ` Stephen Hemminger
@ 2023-02-02  2:30             ` Ye, MingjinX
  0 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2023-02-02  2:30 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Yang, Qiming, stable, Zhou, YidingX, Zhang, Qi Z



> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: 2023年1月29日 1:18
> To: Ye, MingjinX <mingjinx.ye@intel.com>
> Cc: dev@dpdk.org; Yang, Qiming <qiming.yang@intel.com>;
> stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>
> Subject: Re: [PATCH v8] doc: add PMD known issue
> 
> On Sat, 28 Jan 2023 06:01:38 +0000
> Mingjin Ye <mingjinx.ye@intel.com> wrote:
> 
> > Add a known issue: Rx path dynamic change is not supported for PMD.
> >
> > Fixes: de853a3bb151 ("net/ice: disable DDP package on Windows")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> > ---
> >  doc/guides/nics/ice.rst | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst index
> > ce075e067c..115625523e 100644
> > --- a/doc/guides/nics/ice.rst
> > +++ b/doc/guides/nics/ice.rst
> > @@ -395,3 +395,15 @@ file is used by both the kernel driver and the DPDK
> PMD.
> >
> >        Windows support: The DDP package is not supported on Windows so,
> >        loading of the package is disabled on Windows.
> > +
> > +ice: Rx path does not support dynamic change
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +The ice driver supports fast and offload rx path. When pmd is
> > +initialized, the fast rx path is selected by default. Even if offload
> > +is subsequently enabled through the API, which will not work because
> > +the past rx path is still used.
> > +
> > +The ice driver does not support to change the rx path after
> > +application is initialized. If HW offload is required, the
> > +``--rx-offloads`` parameter should be used to choose the offload Rx path
> by default.
> 
> Is this when the device is stopped, or running.
> Dynamic configuration of offload parameters is not safe on many devices.
> Usually the device driver requires the device not be started to change
> offloads.
> 
> The driver should reject in the API things it does not support.

Thank you for your suggestion. I will review this issue with the reporter.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
@ 2022-10-27 11:09 Ye, MingjinX
  0 siblings, 0 replies; 36+ messages in thread
From: Ye, MingjinX @ 2022-10-27 11:09 UTC (permalink / raw)
  To: dev, Yang, Qiming, Zhang, Qi Z, Lu, Wenzhuo, Rong, Leyi
  Cc: stable, Zhou, YidingX, Singh, Aman Deep, Zhang, Yuying

Hi All,

Could you please review and provide suggestions if any.

Thanks,
Mingjin

> -----Original Message-----
> From: Ye, MingjinX <mingjinx.ye@intel.com>
> Sent: 2022年10月25日 1:27
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>; Ye, MingjinX
> <mingjinx.ye@intel.com>; Singh, Aman Deep
> <aman.deep.singh@intel.com>; Zhang, Yuying <yuying.zhang@intel.com>
> Subject: [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
> 
> After setting vlan offload in testpmd, the result is not updated to rxq.
> Therefore, the queue needs to be reconfigured after executing the "vlan
> offload" related commands.
> 
> Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
> ---
>  app/test-pmd/cmdline.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> 17be2de402..ce125f549f 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
>  	else
>  		vlan_extend_set(port_id, on);
> 
> +	cmd_reconfig_device_queue(port_id, 1, 1);
>  	return;
>  }
> 
> --
> 2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
@ 2022-10-24 17:27 Mingjin Ye
  0 siblings, 0 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-10-24 17:27 UTC (permalink / raw)
  To: dev; +Cc: stable, yidingx.zhou, Mingjin Ye, Aman Singh, Yuying Zhang

After setting vlan offload in testpmd, the result is not updated
to rxq. Therefore, the queue needs to be reconfigured after
executing the "vlan offload" related commands.

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 app/test-pmd/cmdline.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
 	else
 		vlan_extend_set(port_id, on);
 
+	cmd_reconfig_device_queue(port_id, 1, 1);
 	return;
 }
 
-- 
2.34.1
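
For context, a possible interactive sequence exercising the command this patch
touches; port 0 is a placeholder, and stopping the port first is the
conservative choice given the queue reconfiguration the patch requests:

    testpmd> port stop 0
    testpmd> vlan set strip on 0
    testpmd> port start 0
    testpmd> show port 0 rx_offload configuration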


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
@ 2022-10-24 16:24 Mingjin Ye
  0 siblings, 0 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-10-24 16:24 UTC (permalink / raw)
  To: dev; +Cc: stable, yidingx.zhou, Mingjin Ye, Aman Singh, Yuying Zhang

After setting vlan offload in testpmd, the result is not updated
to rxq. Therefore, the queue needs to be reconfigured after
executing the "vlan offload" related commands.

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 app/test-pmd/cmdline.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
 	else
 		vlan_extend_set(port_id, on);
 
+	cmd_reconfig_device_queue(port_id, 1, 1);
 	return;
 }
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq
@ 2022-10-24 15:44 Mingjin Ye
  0 siblings, 0 replies; 36+ messages in thread
From: Mingjin Ye @ 2022-10-24 15:44 UTC (permalink / raw)
  To: dev; +Cc: stable, yidingx.zhou, Mingjin Ye, Aman Singh, Yuying Zhang

After setting vlan offload in testpmd, the result is not updated
to rxq. Therefore, the queue needs to be reconfigured after
executing the "vlan offload" related commands.

Fixes: a47aa8b97afe ("app/testpmd: add vlan offload support")
Cc: stable@dpdk.org

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
 app/test-pmd/cmdline.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 17be2de402..ce125f549f 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -4133,6 +4133,7 @@ cmd_vlan_offload_parsed(void *parsed_result,
 	else
 		vlan_extend_set(port_id, on);
 
+	cmd_reconfig_device_queue(port_id, 1, 1);
 	return;
 }
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2023-02-02  2:30 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-26 17:10 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Mingjin Ye
2022-10-26  9:52 ` lihuisong (C)
2022-10-27 11:02   ` Ye, MingjinX
2022-10-28  2:09     ` lihuisong (C)
2022-11-03  1:28       ` Ye, MingjinX
2022-11-03  7:01         ` lihuisong (C)
2022-11-04  8:21           ` Ye, MingjinX
2022-11-04 10:17             ` lihuisong (C)
2022-11-04 11:33               ` Ye, MingjinX
2022-11-06 10:32                 ` Andrew Rybchenko
2022-11-07  7:18                   ` Ye, MingjinX
2022-10-26 17:10 ` [PATCH v4 2/2] net/ice: fix vlan offload Mingjin Ye
2022-10-27  8:36   ` Huang, ZhiminX
2022-10-27  8:36 ` [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Huang, ZhiminX
2022-10-27 13:16   ` Singh, Aman Deep
2022-11-08 13:28 ` [PATCH v5 1/2] net/ice: fix vlan offload Mingjin Ye
2022-11-08 13:28   ` [PATCH v5 2/2] net/ice: fix vlan offload of rxq Mingjin Ye
2022-11-09  1:52     ` Huang, ZhiminX
2022-11-21  2:54     ` [PATCH v6] doc: add PMD known issue Mingjin Ye
2022-11-25  1:55       ` Ye, MingjinX
2022-12-09 10:20         ` Ye, MingjinX
2022-12-13  1:41       ` Zhang, Qi Z
2022-12-13  4:25         ` Ye, MingjinX
2022-12-23  7:32       ` [PATCH v7] " Mingjin Ye
2022-12-26  2:52       ` Mingjin Ye
2022-12-27  9:00       ` Mingjin Ye
2022-12-27 16:40         ` Stephen Hemminger
2023-01-28  6:01         ` [PATCH v8] " Mingjin Ye
2023-01-28 17:17           ` Stephen Hemminger
2023-02-02  2:30             ` Ye, MingjinX
2022-11-09  1:51   ` [PATCH v5 1/2] net/ice: fix vlan offload Huang, ZhiminX
2022-11-11  3:34   ` Ye, MingjinX
  -- strict thread matches above, loose matches on Subject: below --
2022-10-27 11:09 [PATCH v4 1/2] app/testpmd: fix vlan offload of rxq Ye, MingjinX
2022-10-24 17:27 Mingjin Ye
2022-10-24 16:24 Mingjin Ye
2022-10-24 15:44 Mingjin Ye

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).