Subject: Re: [PATCH] app/testpmd: fix crash in multi-process packet forwarding
From: fengchengwen
To: Dengdui Huang, dev@dpdk.org
Date: Fri, 26 Jan 2024 14:23:16 +0800
In-Reply-To: <20240126024110.2671570-1-huangdengdui@huawei.com>

Hi Dengdui,

On 2024/1/26 10:41, Dengdui Huang wrote:
> In a multi-process scenario, each process creates flows based on the
> number of queues. When nb-cores is greater than 1, multiple cores may
> use the same queue to forward packets, like:
> dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4
> --nb-cores=2 --num-procs=2 --proc-id=0
> testpmd> start
> mac packet forwarding - ports=1 - cores=2 - streams=4 - NUMA support
> enabled, MP allocation mode: native
> Logical Core 2 (socket 0) forwards packets on 2 streams:
> RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> Logical Core 3 (socket 0) forwards packets on 2 streams:
> RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00

Tip: the example in the commit log would be more readable with an indent.
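To make the overlap concrete, here is a minimal standalone sketch of the
pre-fix stream-to-queue mapping (plain C; the variable names and values are
illustrative, not testpmd's actual code, and it assumes one forwarding
port):

#include <stdio.h>

int main(void)
{
	unsigned int nb_q = 4;       /* min(rxq, txq) in the example above */
	unsigned int num_procs = 2;
	unsigned int proc_id = 0;    /* primary process */
	/* Pre-fix: every process set up nb_q * nb_fwd_ports streams. */
	unsigned int nb_streams = nb_q * 1;
	unsigned int start = proc_id * nb_q / num_procs;  /* 0 */
	unsigned int end = start + nb_q / num_procs;      /* 2 */
	unsigned int rxq = start;
	unsigned int sm_id;

	for (sm_id = 0; sm_id < nb_streams; sm_id++) {
		printf("stream %u -> rxq %u\n", sm_id, rxq);
		rxq++;
		if (rxq >= end)  /* the wrap-around removed by the patch */
			rxq = start;
	}
	return 0;
}

This prints streams 0..3 mapped to queues 0, 1, 0, 1; with two forwarding
cores splitting those four streams, each core ends up forwarding on both
queues, exactly the duplicated Q=0/Q=1 assignment shown above.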
Acked-by: Chengwen Feng

Thanks

>
> After this commit, the result will be:
> dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4
> --nb-cores=2 --num-procs=2 --proc-id=0
> testpmd> start
> io packet forwarding - ports=1 - cores=2 - streams=2 - NUMA support
> enabled, MP allocation mode: native
> Logical Core 2 (socket 0) forwards packets on 1 streams:
> RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
> Logical Core 3 (socket 0) forwards packets on 1 streams:
> RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
>
> Fixes: a550baf24af9 ("app/testpmd: support multi-process")
> Cc: stable@dpdk.org
>
> Signed-off-by: Dengdui Huang
> ---
>  app/test-pmd/config.c | 6 +-----
>  1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index cad7537bc6..2c4dedd603 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -4794,7 +4794,6 @@ rss_fwd_config_setup(void)
>  	queueid_t nb_q;
>  	streamid_t sm_id;
>  	int start;
> -	int end;
>
>  	nb_q = nb_rxq;
>  	if (nb_q > nb_txq)
> @@ -4802,7 +4801,7 @@ rss_fwd_config_setup(void)
>  	cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
>  	cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
>  	cur_fwd_config.nb_fwd_streams =
> -		(streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
> +		(streamid_t) (nb_q / num_procs * cur_fwd_config.nb_fwd_ports);
>
>  	if (cur_fwd_config.nb_fwd_streams < cur_fwd_config.nb_fwd_lcores)
>  		cur_fwd_config.nb_fwd_lcores =
> @@ -4824,7 +4823,6 @@ rss_fwd_config_setup(void)
>  	 * the 2~3 queue for secondary process.
>  	 */
>  	start = proc_id * nb_q / num_procs;
> -	end = start + nb_q / num_procs;
>  	rxp = 0;
>  	rxq = start;
>  	for (sm_id = 0; sm_id < cur_fwd_config.nb_fwd_streams; sm_id++) {
> @@ -4843,8 +4841,6 @@
>  			continue;
>  		rxp = 0;
>  		rxq++;
> -		if (rxq >= end)
> -			rxq = start;
>  	}
>  }
>
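For comparison, the same sketch with the post-fix arithmetic (again an
illustrative standalone C program under the same assumptions, not
testpmd's actual code): each process now gets nb_q / num_procs streams
over a disjoint queue range, so the wrap-around is unnecessary and no
queue is shared between processes or cores:

#include <stdio.h>

int main(void)
{
	unsigned int nb_q = 4;         /* min(rxq, txq) */
	unsigned int num_procs = 2;
	unsigned int nb_fwd_ports = 1;
	unsigned int proc_id;

	for (proc_id = 0; proc_id < num_procs; proc_id++) {
		/* Post-fix: per-process stream count and a disjoint start. */
		unsigned int nb_streams = nb_q / num_procs * nb_fwd_ports;
		unsigned int start = proc_id * nb_q / num_procs;
		unsigned int rxq = start;
		unsigned int sm_id;

		for (sm_id = 0; sm_id < nb_streams; sm_id++, rxq++)
			printf("proc %u: stream %u -> rxq %u\n",
			       proc_id, sm_id, rxq);
	}
	return 0;
}

This yields queues 0..1 for proc-id 0 and queues 2..3 for proc-id 1,
matching the "After this commit" output above.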