* |WARNING| pw123067 [PATCH v9 1/1] common/idpf: add AVX512 data path for split queue model
[not found] <20230206034124.1915756-2-wenjun1.wu@intel.com>
@ 2023-02-06 3:20 ` qemudev
2023-02-06 3:34 ` checkpatch
1 sibling, 0 replies; 3+ messages in thread
From: qemudev @ 2023-02-06 3:20 UTC (permalink / raw)
To: test-report; +Cc: Wenjun Wu, zhoumin
Test-Label: loongarch-compilation
Test-Status: WARNING
http://dpdk.org/patch/123067
_apply patch failure_
Submitter: Wenjun Wu <wenjun1.wu@intel.com>
Date: Mon, 6 Feb 2023 03:41:24 +0000
DPDK git baseline: Repo:dpdk-next-net-intel
Branch: main
CommitID: 78431f756522c72be13770aab815079671767355
Apply patch set 123067 failed:
Checking patch drivers/common/idpf/idpf_common_rxtx.c...
error: drivers/common/idpf/idpf_common_rxtx.c: No such file or directory
Checking patch drivers/common/idpf/idpf_common_rxtx.h...
error: drivers/common/idpf/idpf_common_rxtx.h: No such file or directory
Checking patch drivers/common/idpf/idpf_common_rxtx_avx512.c...
error: drivers/common/idpf/idpf_common_rxtx_avx512.c: No such file or directory
Checking patch drivers/common/idpf/version.map...
error: while searching for:
idpf_dp_singleq_xmit_pkts;
idpf_dp_singleq_xmit_pkts_avx512;
idpf_dp_splitq_recv_pkts;
idpf_dp_splitq_xmit_pkts;
idpf_qc_rx_thresh_check;
idpf_qc_rx_queue_release;
error: patch failed: drivers/common/idpf/version.map:10
error: drivers/common/idpf/version.map: patch does not apply
Checking patch drivers/net/idpf/idpf_rxtx.c...
error: while searching for:
if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
#ifdef CC_AVX512_SUPPORT
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
vport->rx_use_avx512 = true;
#else
PMD_DRV_LOG(NOTICE,
error: patch failed: drivers/net/idpf/idpf_rxtx.c:758
error: drivers/net/idpf/idpf_rxtx.c: patch does not apply
Checking patch drivers/net/idpf/idpf_rxtx_vec_common.h...
* |WARNING| pw123067 [PATCH v9 1/1] common/idpf: add AVX512 data path for split queue model
[not found] <20230206034124.1915756-2-wenjun1.wu@intel.com>
2023-02-06 3:20 ` |WARNING| pw123067 [PATCH v9 1/1] common/idpf: add AVX512 data path for split queue model qemudev
@ 2023-02-06 3:34 ` checkpatch
1 sibling, 0 replies; 3+ messages in thread
From: checkpatch @ 2023-02-06 3:34 UTC (permalink / raw)
To: test-report; +Cc: Wenjun Wu
Test-Label: checkpatch
Test-Status: WARNING
http://dpdk.org/patch/123067
_coding style issues_
Warning in drivers/common/idpf/idpf_common_rxtx_avx512.c:
Using rte_atomicNN_xxx
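For context, the warning above comes from the checkpatches.sh rule that flags new additions of the legacy rte_atomicNN_*() API. The sketch below is illustrative only and is not taken from the patch (the counter names are hypothetical); it contrasts the pattern the rule discourages with the GCC __atomic builtin form that DPDK guidance at the time of this report prefers:

/*
 * Illustrative sketch, not driver code. checkpatch flags new uses of the
 * legacy rte_atomicNN_*() helpers; the suggested replacement is a compiler
 * __atomic builtin with an explicit memory order.
 */
#include <stdint.h>
#include <rte_atomic.h>

static uint64_t completed;                   /* hypothetical counter */

static inline void
count_completed_legacy(rte_atomic64_t *cnt)
{
	rte_atomic64_inc(cnt);               /* pattern checkpatch warns about */
}

static inline void
count_completed_preferred(void)
{
	/* preferred: builtin atomic with explicit (here relaxed) ordering */
	__atomic_fetch_add(&completed, 1, __ATOMIC_RELAXED);
}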
* |WARNING| pw123067 [PATCH] [v9, 1/1] common/idpf: add AVX512 data path for split queue model
@ 2023-02-06 3:58 dpdklab
0 siblings, 0 replies; 3+ messages in thread
From: dpdklab @ 2023-02-06 3:58 UTC (permalink / raw)
To: test-report; +Cc: dpdk-test-reports
Test-Label: iol-testing
Test-Status: WARNING
http://dpdk.org/patch/123067
_apply patch failure_
Submitter: Wenjun Wu <wenjun1.wu@intel.com>
Date: Monday, February 06 2023 03:41:24
Applied on: CommitID:796b031608d8d52e040f83304c676d3cda5af617
Apply patch set 123067 failed:
Checking patch drivers/common/idpf/idpf_common_rxtx.c...
error: drivers/common/idpf/idpf_common_rxtx.c: does not exist in index
Checking patch drivers/common/idpf/idpf_common_rxtx.h...
error: drivers/common/idpf/idpf_common_rxtx.h: does not exist in index
Checking patch drivers/common/idpf/idpf_common_rxtx_avx512.c...
error: drivers/common/idpf/idpf_common_rxtx_avx512.c: does not exist in index
Checking patch drivers/common/idpf/version.map...
error: while searching for:
idpf_dp_singleq_xmit_pkts;
idpf_dp_singleq_xmit_pkts_avx512;
idpf_dp_splitq_recv_pkts;
idpf_dp_splitq_xmit_pkts;
idpf_qc_rx_thresh_check;
idpf_qc_rx_queue_release;
error: patch failed: drivers/common/idpf/version.map:10
error: while searching for:
idpf_qc_single_rxq_mbufs_alloc;
idpf_qc_single_tx_queue_reset;
idpf_qc_singleq_rx_vec_setup;
idpf_qc_singleq_tx_vec_avx512_setup;
idpf_qc_split_rx_bufq_reset;
idpf_qc_split_rx_descq_reset;
idpf_qc_split_rx_queue_reset;
error: patch failed: drivers/common/idpf/version.map:19
Checking patch drivers/net/idpf/idpf_rxtx.c...
error: while searching for:
if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
#ifdef CC_AVX512_SUPPORT
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
vport->rx_use_avx512 = true;
#else
PMD_DRV_LOG(NOTICE,
error: patch failed: drivers/net/idpf/idpf_rxtx.c:758
error: while searching for:
#ifdef RTE_ARCH_X86
if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
} else {
if (vport->rx_vec_allowed) {
error: patch failed: drivers/net/idpf/idpf_rxtx.c:771
error: while searching for:
}
#ifdef CC_AVX512_SUPPORT
if (vport->rx_use_avx512) {
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
return;
}
#endif /* CC_AVX512_SUPPORT */
}
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
}
#else
if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
else
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
#endif /* RTE_ARCH_X86 */
}
error: patch failed: drivers/net/idpf/idpf_rxtx.c:780
error: while searching for:
int i;
#endif /* CC_AVX512_SUPPORT */
if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
vport->tx_vec_allowed = true;
if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
#ifdef CC_AVX512_SUPPORT
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
vport->tx_use_avx512 = true;
#else
PMD_DRV_LOG(NOTICE,
"AVX512 is not supported in build env");
error: patch failed: drivers/net/idpf/idpf_rxtx.c:806
error: while searching for:
}
#endif /* RTE_ARCH_X86 */
if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
} else {
#ifdef RTE_ARCH_X86
if (vport->tx_vec_allowed) {
#ifdef CC_AVX512_SUPPORT
if (vport->tx_use_avx512) {
error: patch failed: drivers/net/idpf/idpf_rxtx.c:823
error: while searching for:
txq = dev->data->tx_queues[i];
if (txq == NULL)
continue;
idpf_qc_singleq_tx_vec_avx512_setup(txq);
}
dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
return;
}
#endif /* CC_AVX512_SUPPORT */
}
#endif /* RTE_ARCH_X86 */
dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
}
}
error: patch failed: drivers/net/idpf/idpf_rxtx.c:835
Checking patch drivers/net/idpf/idpf_rxtx_vec_common.h...
Applying patch drivers/common/idpf/version.map with 2 rejects...
Rejected hunk #1.
Rejected hunk #2.
Applying patch drivers/net/idpf/idpf_rxtx.c with 6 rejects...
Rejected hunk #1.
Rejected hunk #2.
Rejected hunk #3.
Rejected hunk #4.
Rejected hunk #5.
Rejected hunk #6.
Applied patch drivers/net/idpf/idpf_rxtx_vec_common.h cleanly.
diff a/drivers/common/idpf/version.map b/drivers/common/idpf/version.map (rejected hunks)
@@ -10,7 +10,9 @@ INTERNAL {
idpf_dp_singleq_xmit_pkts;
idpf_dp_singleq_xmit_pkts_avx512;
idpf_dp_splitq_recv_pkts;
+ idpf_dp_splitq_recv_pkts_avx512;
idpf_dp_splitq_xmit_pkts;
+ idpf_dp_splitq_xmit_pkts_avx512;
idpf_qc_rx_thresh_check;
idpf_qc_rx_queue_release;
@@ -19,7 +21,8 @@ INTERNAL {
idpf_qc_single_rxq_mbufs_alloc;
idpf_qc_single_tx_queue_reset;
idpf_qc_singleq_rx_vec_setup;
- idpf_qc_singleq_tx_vec_avx512_setup;
+ idpf_qc_splitq_rx_vec_setup;
+ idpf_qc_tx_vec_avx512_setup;
idpf_qc_split_rx_bufq_reset;
idpf_qc_split_rx_descq_reset;
idpf_qc_split_rx_queue_reset;
diff a/drivers/net/idpf/idpf_rxtx.c b/drivers/net/idpf/idpf_rxtx.c (rejected hunks)
@@ -758,7 +758,8 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
#ifdef CC_AVX512_SUPPORT
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
- rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ))
vport->rx_use_avx512 = true;
#else
PMD_DRV_LOG(NOTICE,
@@ -771,6 +772,24 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
#ifdef RTE_ARCH_X86
if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ if (vport->rx_vec_allowed) {
+ for (i = 0; i < dev->data->nb_rx_queues; i++) {
+ rxq = dev->data->rx_queues[i];
+ (void)idpf_qc_splitq_rx_vec_setup(rxq);
+ }
+#ifdef CC_AVX512_SUPPORT
+ if (vport->rx_use_avx512) {
+ PMD_DRV_LOG(NOTICE,
+ "Using Split AVX512 Vector Rx (port %d).",
+ dev->data->port_id);
+ dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts_avx512;
+ return;
+ }
+#endif /* CC_AVX512_SUPPORT */
+ }
+ PMD_DRV_LOG(NOTICE,
+ "Using Split Scalar Rx (port %d).",
+ dev->data->port_id);
dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
} else {
if (vport->rx_vec_allowed) {
@@ -780,19 +799,31 @@ idpf_set_rx_function(struct rte_eth_dev *dev)
}
#ifdef CC_AVX512_SUPPORT
if (vport->rx_use_avx512) {
+ PMD_DRV_LOG(NOTICE,
+ "Using Single AVX512 Vector Rx (port %d).",
+ dev->data->port_id);
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts_avx512;
return;
}
#endif /* CC_AVX512_SUPPORT */
}
-
+ PMD_DRV_LOG(NOTICE,
+ "Using Single Scalar Rx (port %d).",
+ dev->data->port_id);
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
}
#else
- if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ PMD_DRV_LOG(NOTICE,
+ "Using Split Scalar Rx (port %d).",
+ dev->data->port_id);
dev->rx_pkt_burst = idpf_dp_splitq_recv_pkts;
- else
+ } else {
+ PMD_DRV_LOG(NOTICE,
+ "Using Single Scalar Rx (port %d).",
+ dev->data->port_id);
dev->rx_pkt_burst = idpf_dp_singleq_recv_pkts;
+ }
#endif /* RTE_ARCH_X86 */
}
@@ -806,14 +837,22 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
int i;
#endif /* CC_AVX512_SUPPORT */
- if (idpf_rx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
+ if (idpf_tx_vec_dev_check_default(dev) == IDPF_VECTOR_PATH &&
rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
vport->tx_vec_allowed = true;
if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512)
#ifdef CC_AVX512_SUPPORT
+ {
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1)
vport->tx_use_avx512 = true;
+ if (vport->tx_use_avx512) {
+ for (i = 0; i < dev->data->nb_tx_queues; i++) {
+ txq = dev->data->tx_queues[i];
+ idpf_qc_tx_vec_avx512_setup(txq);
+ }
+ }
+ }
#else
PMD_DRV_LOG(NOTICE,
"AVX512 is not supported in build env");
@@ -823,11 +862,26 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
}
#endif /* RTE_ARCH_X86 */
+#ifdef RTE_ARCH_X86
if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+ if (vport->tx_use_avx512) {
+ PMD_DRV_LOG(NOTICE,
+ "Using Split AVX512 Vector Tx (port %d).",
+ dev->data->port_id);
+ dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts_avx512;
+ dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+ return;
+ }
+#endif /* CC_AVX512_SUPPORT */
+ }
+ PMD_DRV_LOG(NOTICE,
+ "Using Split Scalar Tx (port %d).",
+ dev->data->port_id);
dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
} else {
-#ifdef RTE_ARCH_X86
if (vport->tx_vec_allowed) {
#ifdef CC_AVX512_SUPPORT
if (vport->tx_use_avx512) {
@@ -835,16 +889,36 @@ idpf_set_tx_function(struct rte_eth_dev *dev)
txq = dev->data->tx_queues[i];
if (txq == NULL)
continue;
- idpf_qc_singleq_tx_vec_avx512_setup(txq);
+ idpf_qc_tx_vec_avx512_setup(txq);
}
+ PMD_DRV_LOG(NOTICE,
+ "Using Single AVX512 Vector Tx (port %d).",
+ dev->data->port_id);
dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts_avx512;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
return;
}
#endif /* CC_AVX512_SUPPORT */
}
-#endif /* RTE_ARCH_X86 */
+ PMD_DRV_LOG(NOTICE,
+ "Using Single Scalar Tx (port %d).",
+ dev->data->port_id);
+ dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+ }
+#else
+ if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ PMD_DRV_LOG(NOTICE,
+ "Using Split Scalar Tx (port %d).",
+ dev->data->port_id);
+ dev->tx_pkt_burst = idpf_dp_splitq_xmit_pkts;
+ dev->tx_pkt_prepare = idpf_dp_prep_pkts;
+ } else {
+ PMD_DRV_LOG(NOTICE,
+ "Using Single Scalar Tx (port %d).",
+ dev->data->port_id);
dev->tx_pkt_burst = idpf_dp_singleq_xmit_pkts;
dev->tx_pkt_prepare = idpf_dp_prep_pkts;
}
+#endif /* RTE_ARCH_X86 */
}
https://lab.dpdk.org/results/dashboard/patchsets/25215/
UNH-IOL DPDK Community Lab
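For readers skimming the rejected hunks above: the common thread in the idpf_rxtx.c changes is a combined build-time and run-time capability check before the AVX512 split-queue burst functions are selected. The sketch below is a minimal, stand-alone restatement of that check, not the driver code itself; the function name is hypothetical and CC_AVX512_SUPPORT is assumed to be the driver's build-time flag from its meson setup:

/*
 * Minimal sketch of the capability gate used by the patch: build support
 * (CC_AVX512_SUPPORT) plus the runtime SIMD width and the AVX512F/BW/DQ
 * CPU flags (v9 adds the DQ check). Illustrative only.
 */
#include <stdbool.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

static bool
can_use_avx512_path(void)
{
#ifdef CC_AVX512_SUPPORT
	return rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) == 1 &&
	       rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ) == 1;
#else
	return false;
#endif
}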