From mboxrd@z Thu Jan 1 00:00:00 1970
From: zhichaox.zeng@intel.com
To: dev@dpdk.org
Cc: qiming.yang@intel.com, Zhichao Zeng, Qi Zhang
Subject: [PATCH v2] net/ice: support disabling ACL engine in DCF via devargs
Date: Wed, 17 Aug 2022 16:21:17 +0800
Message-Id: <20220817082117.176980-1-zhichaox.zeng@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220725031524.4063028-1-zhichaox.zeng@intel.com>
References: <20220725031524.4063028-1-zhichaox.zeng@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Zhichao Zeng <zhichaox.zeng@intel.com>

Support disabling the DCF ACL engine via the devarg "acl=off" on the
command line, aiming to shorten the DCF startup time.

Signed-off-by: Zhichao Zeng <zhichaox.zeng@intel.com>

---
v2: add documentation for the new devarg
---
 doc/guides/nics/ice.rst            | 11 ++++++
 drivers/net/ice/ice_dcf_ethdev.c   | 58 +++++++++++++++++++++++-------
 drivers/net/ice/ice_dcf_ethdev.h   |  6 ++++
 drivers/net/ice/ice_dcf_parent.c   |  3 ++
 drivers/net/ice/ice_ethdev.h       |  2 ++
 drivers/net/ice/ice_generic_flow.c | 12 +++++++
 6 files changed, 79 insertions(+), 13 deletions(-)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 6b903b9bbc..3aa58d3f2c 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -296,6 +296,17 @@ The DCF PMD needs to advertise and acquire DCF capability which allows DCF to
 send AdminQ commands that it would like to execute over to the PF and receive
 responses for the same from PF.
 
+Additional Options
+++++++++++++++++++
+
+- ``Disable ACL Engine`` (default ``enabled``)
+
+  By default, all flow engines are enabled. But if the user does not need the
+  ACL engine related functions, the ``devargs`` parameter ``acl=off`` can be
+  set to disable the ACL engine and shorten the startup time.
+
+    -a 18:01.0,cap=dcf,acl=off
+
 .. _figure_ice_dcf:
 
 .. figure:: img/ice_dcf.*
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 0da267db1f..a51e404e64 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -45,6 +45,26 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev);
 static int
 ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev);
 
+static int
+ice_dcf_cap_check_handler(__rte_unused const char *key,
+			  const char *value, __rte_unused void *opaque);
+
+static int
+ice_dcf_engine_disabled_handler(__rte_unused const char *key,
+				const char *value, __rte_unused void *opaque);
+
+struct ice_devarg {
+	enum ice_dcf_devrarg type;
+	const char *key;
+	int (*handler)(__rte_unused const char *key,
+		       const char *value, __rte_unused void *opaque);
+};
+
+static const struct ice_devarg ice_devargs_table[] = {
+	{ICE_DCF_DEVARG_CAP, "cap", ice_dcf_cap_check_handler},
+	{ICE_DCF_DEVARG_ACL, "acl", ice_dcf_engine_disabled_handler},
+};
+
 struct rte_ice_dcf_xstats_name_off {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	unsigned int offset;
@@ -1909,6 +1929,16 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static int
+ice_dcf_engine_disabled_handler(__rte_unused const char *key,
+				const char *value, __rte_unused void *opaque)
+{
+	if (strcmp(value, "off"))
+		return -1;
+
+	return 0;
+}
+
 static int
 ice_dcf_cap_check_handler(__rte_unused const char *key,
 			  const char *value, __rte_unused void *opaque)
@@ -1919,11 +1949,11 @@ ice_dcf_cap_check_handler(__rte_unused const char *key,
 	return 0;
 }
 
-static int
-ice_dcf_cap_selected(struct rte_devargs *devargs)
+int
+ice_devargs_check(struct rte_devargs *devargs, enum ice_dcf_devrarg devarg_type)
 {
 	struct rte_kvargs *kvlist;
-	const char *key = "cap";
+	unsigned int i = 0;
 	int ret = 0;
 
 	if (devargs == NULL)
@@ -1933,16 +1963,18 @@ ice_dcf_cap_check_handler(__rte_unused const char *key,
 	if (kvlist == NULL)
 		return 0;
 
-	if (!rte_kvargs_count(kvlist, key))
-		goto exit;
-
-	/* dcf capability selected when there's a key-value pair: cap=dcf */
-	if (rte_kvargs_process(kvlist, key,
-			       ice_dcf_cap_check_handler, NULL) < 0)
-		goto exit;
-
-	ret = 1;
+	for (i = 0; i < ARRAY_SIZE(ice_devargs_table); i++) {
+		if (devarg_type == ice_devargs_table[i].type) {
+			if (!rte_kvargs_count(kvlist, ice_devargs_table[i].key))
+				goto exit;
+			if (rte_kvargs_process(kvlist, ice_devargs_table[i].key,
+					       ice_devargs_table[i].handler, NULL) < 0)
+				goto exit;
+			ret = 1;
+			break;
+		}
+	}
 
 exit:
 	rte_kvargs_free(kvlist);
 	return ret;
@@ -1960,7 +1992,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
 	uint16_t dcf_vsi_id;
 	int i, ret;
 
-	if (!ice_dcf_cap_selected(pci_dev->device.devargs))
+	if (!ice_devargs_check(pci_dev->device.devargs, ICE_DCF_DEVARG_CAP))
 		return 1;
 
 	ret = rte_eth_devargs_parse(pci_dev->device.devargs->args, &eth_da);
diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h
index 27f6402786..4baaec4b8b 100644
--- a/drivers/net/ice/ice_dcf_ethdev.h
+++ b/drivers/net/ice/ice_dcf_ethdev.h
@@ -64,12 +64,18 @@ struct ice_dcf_vf_repr {
 	struct ice_dcf_vlan outer_vlan_info; /* DCF always handle outer VLAN */
 };
 
+enum ice_dcf_devrarg {
+	ICE_DCF_DEVARG_CAP,
+	ICE_DCF_DEVARG_ACL,
+};
+
 extern const struct rte_tm_ops ice_dcf_tm_ops;
 
 void ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw,
 				 uint8_t *msg, uint16_t msglen);
 int ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev);
 void ice_dcf_uninit_parent_adapter(struct rte_eth_dev *eth_dev);
+int ice_devargs_check(struct rte_devargs *devargs, enum ice_dcf_devrarg devarg_type);
 
 int ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param);
 int ice_dcf_vf_repr_uninit(struct rte_eth_dev *vf_rep_eth_dev);
 int ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev);
diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c
index 2f96dedcce..c67c865d8e 100644
--- a/drivers/net/ice/ice_dcf_parent.c
+++ b/drivers/net/ice/ice_dcf_parent.c
@@ -466,6 +466,9 @@ ice_dcf_init_parent_adapter(struct rte_eth_dev *eth_dev)
 	ice_dcf_update_vf_vsi_map(parent_hw, hw->num_vfs,
 				  hw->vf_vsi_map);
 
+	if (ice_devargs_check(eth_dev->device->devargs, ICE_DCF_DEVARG_ACL))
+		parent_adapter->disabled_engine_mask |= BIT(ICE_FLOW_ENGINE_ACL);
+
 	err = ice_flow_init(parent_adapter);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Failed to initialize flow");
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index ec23dae665..5bd5ead0e6 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -610,6 +610,8 @@ struct ice_adapter {
 	struct ice_rss_prof_info rss_prof_info[ICE_MAX_PTGS];
 	/* True if DCF state of the associated PF is on */
 	bool dcf_state_on;
+	/* Set bit if the engine is disabled */
+	unsigned long disabled_engine_mask;
 	struct ice_parser *psr;
 #ifdef RTE_ARCH_X86
 	bool rx_use_avx2;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 57eb002bde..d496c28dec 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -28,6 +28,8 @@
 /*Pipeline mode, fdir used at distributor stage*/
 #define ICE_FLOW_CLASSIFY_STAGE_DISTRIBUTOR 2
 
+#define ICE_FLOW_ENGINE_DISABLED(mask, type) ((mask) & BIT(type))
+
 static struct ice_engine_list engine_list =
 		TAILQ_HEAD_INITIALIZER(engine_list);
 
@@ -1841,6 +1843,11 @@ ice_flow_init(struct ice_adapter *ad)
 			return -ENOTSUP;
 		}
 
+		if (ICE_FLOW_ENGINE_DISABLED(ad->disabled_engine_mask, engine->type)) {
+			PMD_INIT_LOG(INFO, "Engine %d disabled", engine->type);
+			continue;
+		}
+
 		ret = engine->init(ad);
 		if (ret) {
 			PMD_INIT_LOG(ERR, "Failed to initialize engine %d",
@@ -1861,6 +1868,11 @@ ice_flow_uninit(struct ice_adapter *ad)
 	void *temp;
 
 	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		if (ICE_FLOW_ENGINE_DISABLED(ad->disabled_engine_mask, engine->type)) {
+			PMD_DRV_LOG(DEBUG, "Engine %d disabled, skip it", engine->type);
+			continue;
+		}
+
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
-- 
2.25.1
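
For reference, a minimal usage sketch (not part of the patch): assuming a
DCF-capable VF at PCI address 18:01.0 -- the address used in the
documentation example above -- that has already been set up as a trusted VF
and bound to a DPDK-compatible driver, the new devarg is simply appended to
the existing cap=dcf devargs in the EAL allow list:

    # core list is illustrative; only the devargs string matters here
    dpdk-testpmd -l 0-3 -a 18:01.0,cap=dcf,acl=off -- -i

With acl=off present, ice_devargs_check() returns 1 for ICE_DCF_DEVARG_ACL,
ice_dcf_init_parent_adapter() sets the ACL bit in disabled_engine_mask, and
ice_flow_init()/ice_flow_uninit() skip the ACL engine, which is what shortens
the DCF startup time.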