From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 833D245710; Thu, 1 Aug 2024 13:15:22 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6171942F27; Thu, 1 Aug 2024 13:15:22 +0200 (CEST)
Received: from szxga05-in.huawei.com (szxga05-in.huawei.com [45.249.212.191]) by mails.dpdk.org (Postfix) with ESMTP id 62C5A40DF8 for ; Thu, 1 Aug 2024 13:15:20 +0200 (CEST)
Received: from mail.maildlp.com (unknown [172.19.88.214]) by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4WZRBh1gGlz1HFnk; Thu, 1 Aug 2024 19:12:28 +0800 (CST)
Received: from dggpeml500024.china.huawei.com (unknown [7.185.36.10]) by mail.maildlp.com (Postfix) with ESMTPS id 501A01A016C; Thu, 1 Aug 2024 19:15:18 +0800 (CST)
Received: from localhost.localdomain (10.50.165.33) by dggpeml500024.china.huawei.com (7.185.36.10) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.1.2507.39; Thu, 1 Aug 2024 19:15:18 +0800
From: Chengwen Feng
To: ,
CC: ,
Subject: [PATCH] examples/eventdev: fix segment fault with generic pipeline
Date: Thu, 1 Aug 2024 11:11:20 +0000
Message-ID: <20240801111120.5380-1-fengchengwen@huawei.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [10.50.165.33]
X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To dggpeml500024.china.huawei.com (7.185.36.10)
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org

There was a segmentation fault when executing eventdev_pipeline with
command [1] with a ConnectX-5 NIC card:

0x000000000079208c in rte_eth_tx_buffer (tx_pkt=0x16f8ed300, buffer=0x100, queue_id=11, port_id=0)
    at ../lib/ethdev/rte_ethdev.h:6636
txa_service_tx
(txa=0x17b19d080, ev=0xffffffffe500, n=4)
    at ../lib/eventdev/rte_event_eth_tx_adapter.c:631
0x0000000000792234 in txa_service_func (args=0x17b19d080)
    at ../lib/eventdev/rte_event_eth_tx_adapter.c:666
0x00000000008b0784 in service_runner_do_callback (s=0x17fffe100, cs=0x17ffb5f80, service_idx=2)
    at ../lib/eal/common/rte_service.c:405
0x00000000008b0ad8 in service_run (i=2, cs=0x17ffb5f80, service_mask=18446744073709551615, s=0x17fffe100, serialize_mt_unsafe=0)
    at ../lib/eal/common/rte_service.c:441
0x00000000008b0c68 in rte_service_run_iter_on_app_lcore (id=2, serialize_mt_unsafe=0)
    at ../lib/eal/common/rte_service.c:477
0x000000000057bcc4 in schedule_devices (lcore_id=0)
    at ../examples/eventdev_pipeline/pipeline_common.h:138
0x000000000057ca94 in worker_generic_burst (arg=0x17b131e80)
    at ../examples/eventdev_pipeline/pipeline_worker_generic.c:83
0x00000000005794a8 in main (argc=11, argv=0xfffffffff470)
    at ../examples/eventdev_pipeline/main.c:449

The root cause is that the queue_id (11) is invalid. The queue_id comes
from mbuf.hash.txadapter.txq, which may be pre-written by the NIC driver
when receiving packets (e.g. the driver may pre-write the overlapping
mbuf.hash.fdir.hi field). Because this example only enables one ethdev
queue, fix it by resetting txq to zero in the first worker stage.
[1] dpdk-eventdev_pipeline -l 0-48 --vdev event_sw0 -- -r1 -t1 -e1 -w ff0 -s5 -n0 -c32 -W1000 -D

Fixes: 81fb40f95c82 ("examples/eventdev: add generic worker pipeline")
Cc: stable@dpdk.org

Signed-off-by: Chengwen Feng
Reported-by: Chenxingyu Wang
---
 examples/eventdev_pipeline/pipeline_worker_generic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 783f68c91e..831d7fd53d 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -38,10 +38,12 @@ worker_generic(void *arg)
 		}
 
 		received++;
 
-		/* The first worker stage does classification */
-		if (ev.queue_id == cdata.qid[0])
+		/* The first worker stage does classification and sets txq. */
+		if (ev.queue_id == cdata.qid[0]) {
 			ev.flow_id = ev.mbuf->hash.rss % cdata.num_fids;
+			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+		}
 
 		ev.queue_id = cdata.next_qid[ev.queue_id];
 		ev.op = RTE_EVENT_OP_FORWARD;
@@ -96,10 +98,12 @@ worker_generic_burst(void *arg)
 
 		for (i = 0; i < nb_rx; i++) {
-			/* The first worker stage does classification */
-			if (events[i].queue_id == cdata.qid[0])
+			/* The first worker stage does classification and sets txq. */
+			if (events[i].queue_id == cdata.qid[0]) {
 				events[i].flow_id = events[i].mbuf->hash.rss
 						% cdata.num_fids;
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf, 0);
+			}
 
 			events[i].queue_id = cdata.next_qid[events[i].queue_id];
 			events[i].op = RTE_EVENT_OP_FORWARD;
-- 
2.17.1