From: "Zhang, Qi Z"
To: "Ye, MingjinX", "dev@dpdk.org"
CC: "Yang, Qiming", "stable@dpdk.org", "Zhou, YidingX", "Zhang, Ke1X"
Subject: RE: [PATCH v5] net/ice: fix ice dcf control thread crash
Date: Mon, 20 Mar 2023 12:52:50 +0000
References: <20230317050936.5513-1-mingjinx.ye@intel.com> <20230320094030.18949-1-mingjinx.ye@intel.com>
In-Reply-To: <20230320094030.18949-1-mingjinx.ye@intel.com>

> -----Original Message-----
> From: Ye, MingjinX
> Sent: Monday, March 20, 2023 5:41 PM
> To: dev@dpdk.org
> Cc: Yang, Qiming; stable@dpdk.org; Zhou, YidingX; Ye, MingjinX; Zhang, Ke1X; Zhang, Qi Z
> Subject: [PATCH v5] net/ice: fix ice dcf control thread crash
>
> The control thread accesses the hardware resources after the resources
> were released, which results in a segmentation fault.
>
> The 'ice-reset' threads are detached, so thread resources cannot be
> reclaimed by `pthread_join` calls.
>
> This commit synchronizes the number of 'ice-reset' threads by adding two
> fields (the 'vsi_update_thread_num' counter and the 'vsi_thread_lock'
> spinlock) to 'struct ice_dcf_hw'. When releasing HW resources, we clear
> the event callback function, which makes these threads exit quickly.
> Once the number of 'ice-reset' threads has dropped to 0, we release the
> resources.
>
> Fixes: 3b3757bda3c3 ("net/ice: get VF hardware index in DCF")
> Fixes: 931ee54072b1 ("net/ice: support QoS bandwidth config after VF reset in DCF")
> Fixes: c7e1a1a3bfeb ("net/ice: refactor DCF VLAN handling")
> Fixes: 0b02c9519432 ("net/ice: handle PF initialization by DCF")
> Fixes: b71573ec2fc2 ("net/ice: retry getting VF VSI map after failure")
> Fixes: 7564d5509611 ("net/ice: add DCF hardware initialization")
> Cc: stable@dpdk.org
>
> Signed-off-by: Ke Zhang
> Signed-off-by: Mingjin Ye
> ---
> v2: add pthread_exit() for Windows
> ---
> v3: Optimization. It is unsafe for a thread to exit forcibly, as the
> spinlock may not be released correctly.
> ---
> v4: Safely wait for all event threads to end.
> ---
> v5: Spinlock moved to struct ice_dcf_hw.
> ---
>  drivers/net/ice/ice_dcf.c        | 21 +++++++++++++++++++--
>  drivers/net/ice/ice_dcf.h        |  3 +++
>  drivers/net/ice/ice_dcf_parent.c | 23 +++++++++++++++++++++++
>  3 files changed, 45 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
> index 1c3d22ae0f..53f62a06f4 100644
> --- a/drivers/net/ice/ice_dcf.c
> +++ b/drivers/net/ice/ice_dcf.c
> @@ -543,6 +543,8 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
>  	ice_dcf_disable_irq0(hw);
>
>  	for (;;) {
> +		if (hw->vc_event_msg_cb == NULL)
> +			break;

Can you explain why this is required? It does not seem related to your commit log.

>  		if (ice_dcf_get_vf_resource(hw) == 0 &&
>  		    ice_dcf_get_vf_vsi_map(hw) >= 0) {
>  			err = 0;
> @@ -555,8 +557,10 @@ ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw)
>  		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
>  	}
>
> -	rte_intr_enable(pci_dev->intr_handle);
> -	ice_dcf_enable_irq0(hw);
> +	if (hw->vc_event_msg_cb != NULL) {
> +		rte_intr_enable(pci_dev->intr_handle);
> +		ice_dcf_enable_irq0(hw);

Same question as above.

> +	}
>
>  	rte_spinlock_unlock(&hw->vc_cmd_send_lock);
>
> @@ -639,6 +643,9 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
>  	rte_spinlock_init(&hw->vc_cmd_queue_lock);
>  	TAILQ_INIT(&hw->vc_cmd_queue);
>
> +	rte_spinlock_init(&hw->vsi_thread_lock);
> +	hw->vsi_update_thread_num = 0;
> +
>  	hw->arq_buf = rte_zmalloc("arq_buf", ICE_DCF_AQ_BUF_SZ, 0);
>  	if (hw->arq_buf == NULL) {
>  		PMD_INIT_LOG(ERR, "unable to allocate AdminQ buffer memory");
> @@ -749,6 +756,12 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
>  	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
>  	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
>
> +	/* Clear event callbacks, `VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE`
> +	 * event will be ignored and all running `ice-thread` threads
> +	 * will exit quickly.
> +	 */
> +	hw->vc_event_msg_cb = NULL;
> +
>  	if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_QOS)
>  		if (hw->tm_conf.committed) {
>  			ice_dcf_clear_bw(hw);
> @@ -760,6 +773,10 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw)
>  	rte_intr_callback_unregister(intr_handle,
>  				     ice_dcf_dev_interrupt_handler, hw);
>
> +	/* Wait for all `ice-thread` threads to exit.
> +	 */
> +	while (hw->vsi_update_thread_num != 0)
> +		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);
> +
>  	ice_dcf_mode_disable(hw);
>  	iavf_shutdown_adminq(&hw->avf);
>
> diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h
> index 7f42ebabe9..f95ef2794c 100644
> --- a/drivers/net/ice/ice_dcf.h
> +++ b/drivers/net/ice/ice_dcf.h
> @@ -105,6 +105,9 @@ struct ice_dcf_hw {
>  	void (*vc_event_msg_cb)(struct ice_dcf_hw *dcf_hw,
>  				uint8_t *msg, uint16_t msglen);
>
> +	rte_spinlock_t vsi_thread_lock;
> +	int vsi_update_thread_num;
> +
>  	uint8_t *arq_buf;
>
>  	uint16_t num_vfs;
> diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
> index 01e390ddda..e48eb69c1a 100644
> --- a/drivers/net/ice/ice_dcf_parent.c
> +++ b/drivers/net/ice/ice_dcf_parent.c
> @@ -130,6 +130,9 @@ ice_dcf_vsi_update_service_handler(void *param)
>
>  	rte_spinlock_lock(&vsi_update_lock);
>
> +	if (hw->vc_event_msg_cb == NULL)
> +		goto update_end;
> +
>  	if (!ice_dcf_handle_vsi_update_event(hw)) {
>  		__atomic_store_n(&parent_adapter->dcf_state_on, true,
>  				 __ATOMIC_RELAXED);
> @@ -150,10 +153,14 @@ ice_dcf_vsi_update_service_handler(void *param)
>  	if (hw->tm_conf.committed)
>  		ice_dcf_replay_vf_bw(hw, reset_param->vf_id);
>
> +update_end:
>  	rte_spinlock_unlock(&vsi_update_lock);
>
>  	free(param);
>
> +	rte_spinlock_lock(&hw->vsi_thread_lock);
> +	hw->vsi_update_thread_num--;
> +	rte_spinlock_unlock(&hw->vsi_thread_lock);
>  	return NULL;
>  }
>
> @@ -183,6 +190,10 @@ start_vsi_reset_thread(struct ice_dcf_hw *dcf_hw, bool vfr, uint16_t vf_id)
>  		PMD_DRV_LOG(ERR, "Failed to start the thread for reset handling");
>  		free(param);
>  	}
> +
> +	rte_spinlock_lock(&dcf_hw->vsi_thread_lock);
> +	dcf_hw->vsi_update_thread_num++;
> +	rte_spinlock_unlock(&dcf_hw->vsi_thread_lock);

I think you can define vsi_update_thread_num as rte_atomic32_t and use rte_atomic32_add/sub instead; see the rough sketch at the bottom of this mail.

>  }
>
>  static uint32_t
> @@ -262,6 +273,18 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
>  		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_PF_DRIVER_CLOSE event");
>  		break;
>  	case VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE:
> +		/* If the event handling callback is empty, the event cannot
> +		 * be handled. Therefore we ignore this event.
> +		 */
> +		if (dcf_hw->vc_event_msg_cb == NULL) {
> +			PMD_DRV_LOG(DEBUG,
> +				    "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event "
> +				    "received: VF%u with VSI num %u, ignore processing",
> +				    pf_msg->event_data.vf_vsi_map.vf_id,
> +				    pf_msg->event_data.vf_vsi_map.vsi_id);
> +			break;
> +		}
> +
>  		PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_DCF_VSI_MAP_UPDATE event : VF%u with VSI num %u",
>  			    pf_msg->event_data.vf_vsi_map.vf_id,
>  			    pf_msg->event_data.vf_vsi_map.vsi_id);
> --
> 2.25.1
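
For reference, a rough, uncompiled sketch of the rte_atomic32_t idea mentioned above, using the helpers from <rte_atomic.h>. The placement comments only mirror the spots your patch already touches; field and variable names are taken from the patch, nothing here is tested:

/* ice_dcf.h: in struct ice_dcf_hw, replacing the spinlock + int pair */
	rte_atomic32_t vsi_update_thread_num;

/* ice_dcf_init_hw(): initialize the counter to zero */
	rte_atomic32_init(&hw->vsi_update_thread_num);

/* start_vsi_reset_thread(): account for the new 'ice-reset' thread */
	rte_atomic32_inc(&dcf_hw->vsi_update_thread_num);

/* ice_dcf_vsi_update_service_handler(): just before returning */
	rte_atomic32_dec(&hw->vsi_update_thread_num);

/* ice_dcf_uninit_hw(): wait for the detached threads to drain */
	while (rte_atomic32_read(&hw->vsi_update_thread_num) != 0)
		rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME);

This would make vsi_thread_lock unnecessary, since the increments, decrements and the read in the wait loop are each atomic on their own. If checkpatch complains about the legacy rte_atomic API, the same thing can be expressed with __atomic_fetch_add/__atomic_fetch_sub/__atomic_load_n on a plain int.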