Subject: Re: [PATCH] app/testpmd: fix crash in multi-process packet forwarding
Date: Tue, 30 Jan 2024 09:32:47 +0800
Message-ID: <5e096b2c-a026-4fda-b70f-c8e598b8d898@huawei.com>
References: <20240126024110.2671570-1-huangdengdui@huawei.com>
From: huangdengdui
To: fengchengwen
List-Id: DPDK patches and discussions

On 2024/1/26 14:23, fengchengwen wrote:
> Hi Dengdui,
>
> On 2024/1/26 10:41, Dengdui Huang
> wrote:
>> In a multi-process scenario, each process creates flows based on the
>> number of queues. When nb-cores is greater than 1, multiple cores may
>> use the same queue to forward packets, like:
>>   dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4 \
>>       --nb-cores=2 --num-procs=2 --proc-id=0
>>   testpmd> start
>>   mac packet forwarding - ports=1 - cores=2 - streams=4 - NUMA support
>>   enabled, MP allocation mode: native
>>   Logical Core 2 (socket 0) forwards packets on 2 streams:
>>     RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>     RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
>>   Logical Core 3 (socket 0) forwards packets on 2 streams:
>>     RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>>     RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
>
> tip: it would be more readable with an indent, just like the example below.
>
> Acked-by: Chengwen Feng
>
> Thanks

OK, Thanks

>>
>> After this commit, the result will be:
>>   dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4 \
>>       --nb-cores=2 --num-procs=2 --proc-id=0
>>   testpmd> start
>>   io packet forwarding - ports=1 - cores=2 - streams=2 - NUMA support
>>   enabled, MP allocation mode: native
>>   Logical Core 2 (socket 0) forwards packets on 1 streams:
>>     RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
>>   Logical Core 3 (socket 0) forwards packets on 1 streams:
>>     RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
>>
>> Fixes: a550baf24af9 ("app/testpmd: support multi-process")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Dengdui Huang
>> ---
>>  app/test-pmd/config.c | 6 +-----
>>  1 file changed, 1 insertion(+), 5 deletions(-)
>>
>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>> index cad7537bc6..2c4dedd603 100644
>> --- a/app/test-pmd/config.c
>> +++ b/app/test-pmd/config.c
>> @@ -4794,7 +4794,6 @@ rss_fwd_config_setup(void)
>>  	queueid_t nb_q;
>>  	streamid_t sm_id;
>>  	int start;
>> -	int end;
>>
>>  	nb_q = nb_rxq;
>>  	if (nb_q > nb_txq)
>> @@ -4802,7 +4801,7 @@ rss_fwd_config_setup(void)
>>  	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
>>  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
>>  	cur_fwd_config.nb_fwd_streams =
>> -		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
>> +		(streamid_t) (nb_q / num_procs * cur_fwd_config.nb_fwd_ports);
>>
>>  	if (cur_fwd_config.nb_fwd_streams < cur_fwd_config.nb_fwd_lcores)
>>  		cur_fwd_config.nb_fwd_lcores =
>> @@ -4824,7 +4823,6 @@ rss_fwd_config_setup(void)
>>  	 * the 2~3 queue for secondary process.
>>  	 */
>>  	start = proc_id * nb_q / num_procs;
>> -	end = start + nb_q / num_procs;
>>  	rxp = 0;
>>  	rxq = start;
>>  	for (sm_id = 0; sm_id < cur_fwd_config.nb_fwd_streams; sm_id++) {
>> @@ -4843,8 +4841,6 @@
>>  		continue;
>>  	rxp = 0;
>>  	rxq++;
>> -	if (rxq >= end)
>> -		rxq = start;
>>  	}
>>  }