From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Yang, Qiming"
To: "Zhang, Qi Z"
Cc: "dev@dpdk.org", "Temerkhanov, Sergey", "Drewek, Wojciech", "Nowlin, Dan"
Subject: RE: [PATCH v2 20/70] net/ice/base: refactor DDP code
Date: Mon, 15 Aug 2022 06:44:51 +0000
References: <20220815071306.2910599-1-qi.z.zhang@intel.com> <20220815073206.2917968-1-qi.z.zhang@intel.com> <20220815073206.2917968-21-qi.z.zhang@intel.com>
In-Reply-To: <20220815073206.2917968-21-qi.z.zhang@intel.com>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Monday, August 15, 2022 3:31 PM
> To: Yang, Qiming
> Cc: dev@dpdk.org; Zhang, Qi Z; Temerkhanov, Sergey; Drewek, Wojciech; Nowlin, Dan
> Subject: [PATCH v2 20/70] net/ice/base: refactor DDP code
>
> Move DDP related code into ice_ddp.c.
> Refactor status flow for DDP load.
> Also add support for DDP signature segments.
>
> Signed-off-by: Sergey Temerkhanov
> Signed-off-by: Wojciech Drewek
> Signed-off-by: Dan Nowlin
> Signed-off-by: Qi Zhang
> ---

Does this patch also cover some new adminq commands for QoS?

> drivers/net/ice/base/ice_adminq_cmd.h | 32 +
> drivers/net/ice/base/ice_bitops.h | 5 +-
> drivers/net/ice/base/ice_ddp.c | 2475 +++++++++++++++++++++++++
> drivers/net/ice/base/ice_ddp.h | 466 +++++
> drivers/net/ice/base/ice_defs.h | 49 +
> drivers/net/ice/base/ice_flex_pipe.c | 2175 ++--------------------
> drivers/net/ice/base/ice_flex_pipe.h | 57 +-
> drivers/net/ice/base/ice_flex_type.h | 286 +--
> drivers/net/ice/base/ice_switch.c | 36 +-
> drivers/net/ice/base/ice_type.h | 54 +-
> drivers/net/ice/base/ice_vlan_mode.c | 1 +
> drivers/net/ice/base/meson.build | 1 +
> 12 files changed, 3233 insertions(+), 2404 deletions(-)
> create mode 100644 drivers/net/ice/base/ice_ddp.c
> create mode 100644 drivers/net/ice/base/ice_ddp.h
> create mode 100644 drivers/net/ice/base/ice_defs.h
>
> diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
> index 517af4b6ef..8f7e13096c 100644
> --- a/drivers/net/ice/base/ice_adminq_cmd.h
> +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> @@ -9,10 +9,19 @@
> * descriptor format. It is shared between Firmware and Software.
> */
>
> +#include "ice_osdep.h"
> +#include "ice_defs.h"
> +#include "ice_bitops.h"
> +
> #define ICE_MAX_VSI 768
> #define ICE_AQC_TOPO_MAX_LEVEL_NUM 0x9
> #define ICE_AQ_SET_MAC_FRAME_SIZE_MAX 9728
>
> +enum ice_aq_res_access_type {
> + ICE_RES_READ = 1,
> + ICE_RES_WRITE
> +};
> +
> struct ice_aqc_generic {
> __le32 param0;
> __le32 param1;
> @@ -1035,6 +1044,24 @@ struct ice_aqc_get_topo {
> __le32 addr_low;
> };
>
> +/* Get/Set Tx Topology (indirect 0x0418/0x0417) */
> +struct ice_aqc_get_set_tx_topo {
> + u8 set_flags;
> +#define ICE_AQC_TX_TOPO_FLAGS_CORRER BIT(0)
> +#define ICE_AQC_TX_TOPO_FLAGS_SRC_RAM BIT(1)
> +#define ICE_AQC_TX_TOPO_FLAGS_SET_PSM BIT(2)
> +#define ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW BIT(4)
> +#define ICE_AQC_TX_TOPO_FLAGS_ISSUED BIT(5)
> + u8 get_flags;
> +#define ICE_AQC_TX_TOPO_GET_NO_UPDATE 0
> +#define ICE_AQC_TX_TOPO_GET_PSM 1
> +#define ICE_AQC_TX_TOPO_GET_RAM 2
> + __le16 reserved1;
> + __le32 reserved2;
> + __le32 addr_high;
> + __le32 addr_low;
> +};
> +
> /* Update TSE (indirect 0x0403)
> * Get TSE (indirect 0x0404)
> * Add TSE (indirect 0x0401)
> @@ -3008,6 +3035,7 @@ struct ice_aq_desc {
> struct ice_aqc_clear_health_status clear_health_status;
> struct ice_aqc_prog_topo_dev_nvm prog_topo_dev_nvm;
> struct ice_aqc_read_topo_dev_nvm read_topo_dev_nvm;
> + struct ice_aqc_get_set_tx_topo get_set_tx_topo;
> } params;
> };
>
> @@ -3164,6 +3192,10 @@ enum ice_adminq_opc {
> ice_aqc_opc_query_node_to_root = 0x0413,
> ice_aqc_opc_cfg_l2_node_cgd = 0x0414,
> ice_aqc_opc_remove_rl_profiles = 0x0415,
> + ice_aqc_opc_set_tx_topo = 0x0417,
> + ice_aqc_opc_get_tx_topo = 0x0418,
> + ice_aqc_opc_cfg_node_attr = 0x0419,
> + ice_aqc_opc_query_node_attr = 0x041A,
>
> /* PHY commands */
> ice_aqc_opc_get_phy_caps = 0x0600,
> diff --git a/drivers/net/ice/base/ice_bitops.h
> b/drivers/net/ice/base/ice_bitops.h > index 21ec2014e1..8060c103fa 100644 > --- a/drivers/net/ice/base/ice_bitops.h > +++ b/drivers/net/ice/base/ice_bitops.h > @@ -5,6 +5,9 @@ > #ifndef _ICE_BITOPS_H_ > #define _ICE_BITOPS_H_ >=20 > +#include "ice_defs.h" > +#include "ice_osdep.h" > + > /* Define the size of the bitmap chunk */ > typedef u32 ice_bitmap_t; >=20 > @@ -13,7 +16,7 @@ typedef u32 ice_bitmap_t; > /* Determine which chunk a bit belongs in */ > #define BIT_CHUNK(nr) ((nr) / BITS_PER_CHUNK) > /* How many chunks are required to store this many bits */ > -#define BITS_TO_CHUNKS(sz) DIVIDE_AND_ROUND_UP((sz), > BITS_PER_CHUNK) > +#define BITS_TO_CHUNKS(sz) (((sz) + BITS_PER_CHUNK - 1) / > BITS_PER_CHUNK) > /* Which bit inside a chunk this bit corresponds to */ > #define BIT_IN_CHUNK(nr) ((nr) % BITS_PER_CHUNK) > /* How many bits are valid in the last chunk, assumes nr > 0 */ > diff --git a/drivers/net/ice/base/ice_ddp.c b/drivers/net/ice/base/ice_dd= p.c > new file mode 100644 > index 0000000000..d1cae48047 > --- /dev/null > +++ b/drivers/net/ice/base/ice_ddp.c > @@ -0,0 +1,2475 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2001-2022 Intel Corporation > + */ > + > +#include "ice_ddp.h" > +#include "ice_type.h" > +#include "ice_common.h" > +#include "ice_sched.h" > + > +/** > + * ice_aq_download_pkg > + * @hw: pointer to the hardware structure > + * @pkg_buf: the package buffer to transfer > + * @buf_size: the size of the package buffer > + * @last_buf: last buffer indicator > + * @error_offset: returns error offset > + * @error_info: returns error information > + * @cd: pointer to command details structure or NULL > + * > + * Download Package (0x0C40) > + */ > +static enum ice_status > +ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, > + u16 buf_size, bool last_buf, u32 *error_offset, > + u32 *error_info, struct ice_sq_cd *cd) > +{ > + struct ice_aqc_download_pkg *cmd; > + struct ice_aq_desc desc; > + enum ice_status status; > + > + if (error_offset) > + *error_offset =3D 0; > + if (error_info) > + *error_info =3D 0; > + > + cmd =3D &desc.params.download_pkg; > + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg); > + desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > + > + if (last_buf) > + cmd->flags |=3D ICE_AQC_DOWNLOAD_PKG_LAST_BUF; > + > + status =3D ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > + if (status =3D=3D ICE_ERR_AQ_ERROR) { > + /* Read error from buffer only when the FW returned an > error */ > + struct ice_aqc_download_pkg_resp *resp; > + > + resp =3D (struct ice_aqc_download_pkg_resp *)pkg_buf; > + if (error_offset) > + *error_offset =3D LE32_TO_CPU(resp->error_offset); > + if (error_info) > + *error_info =3D LE32_TO_CPU(resp->error_info); > + } > + > + return status; > +} > + > +/** > + * ice_aq_upload_section > + * @hw: pointer to the hardware structure > + * @pkg_buf: the package buffer which will receive the section > + * @buf_size: the size of the package buffer > + * @cd: pointer to command details structure or NULL > + * > + * Upload Section (0x0C41) > + */ > +enum ice_status > +ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, > + u16 buf_size, struct ice_sq_cd *cd) > +{ > + struct ice_aq_desc desc; > + > + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); > + desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > + > + return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > +} > + > +/** > + * ice_aq_update_pkg > + * @hw: pointer to the hardware structure > + * @pkg_buf: 
the package cmd buffer > + * @buf_size: the size of the package cmd buffer > + * @last_buf: last buffer indicator > + * @error_offset: returns error offset > + * @error_info: returns error information > + * @cd: pointer to command details structure or NULL > + * > + * Update Package (0x0C42) > + */ > +static enum ice_status > +ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 > buf_size, > + bool last_buf, u32 *error_offset, u32 *error_info, > + struct ice_sq_cd *cd) > +{ > + struct ice_aqc_download_pkg *cmd; > + struct ice_aq_desc desc; > + enum ice_status status; > + > + if (error_offset) > + *error_offset =3D 0; > + if (error_info) > + *error_info =3D 0; > + > + cmd =3D &desc.params.download_pkg; > + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_pkg); > + desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > + > + if (last_buf) > + cmd->flags |=3D ICE_AQC_DOWNLOAD_PKG_LAST_BUF; > + > + status =3D ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > + if (status =3D=3D ICE_ERR_AQ_ERROR) { > + /* Read error from buffer only when the FW returned an > error */ > + struct ice_aqc_download_pkg_resp *resp; > + > + resp =3D (struct ice_aqc_download_pkg_resp *)pkg_buf; > + if (error_offset) > + *error_offset =3D LE32_TO_CPU(resp->error_offset); > + if (error_info) > + *error_info =3D LE32_TO_CPU(resp->error_info); > + } > + > + return status; > +} > + > +/** > + * ice_find_seg_in_pkg > + * @hw: pointer to the hardware structure > + * @seg_type: the segment type to search for (i.e., SEGMENT_TYPE_CPK) > + * @pkg_hdr: pointer to the package header to be searched > + * > + * This function searches a package file for a particular segment type. = On > + * success it returns a pointer to the segment header, otherwise it will > + * return NULL. 
> + */ > +struct ice_generic_seg_hdr * > +ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, > + struct ice_pkg_hdr *pkg_hdr) > +{ > + u32 i; > + > + ice_debug(hw, ICE_DBG_PKG, "Package format > version: %d.%d.%d.%d\n", > + pkg_hdr->pkg_format_ver.major, pkg_hdr- > >pkg_format_ver.minor, > + pkg_hdr->pkg_format_ver.update, > + pkg_hdr->pkg_format_ver.draft); > + > + /* Search all package segments for the requested segment type */ > + for (i =3D 0; i < LE32_TO_CPU(pkg_hdr->seg_count); i++) { > + struct ice_generic_seg_hdr *seg; > + > + seg =3D (struct ice_generic_seg_hdr *) > + ((u8 *)pkg_hdr + LE32_TO_CPU(pkg_hdr- > >seg_offset[i])); > + > + if (LE32_TO_CPU(seg->seg_type) =3D=3D seg_type) > + return seg; > + } > + > + return NULL; > +} > + > +/** > + * ice_get_pkg_seg_by_idx > + * @pkg_hdr: pointer to the package header to be searched > + * @idx: index of segment > + */ > +static struct ice_generic_seg_hdr * > +ice_get_pkg_seg_by_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx) > +{ > + struct ice_generic_seg_hdr *seg =3D NULL; > + > + if (idx < LE32_TO_CPU(pkg_hdr->seg_count)) > + seg =3D (struct ice_generic_seg_hdr *) > + ((u8 *)pkg_hdr + > + LE32_TO_CPU(pkg_hdr->seg_offset[idx])); > + > + return seg; > +} > + > +/** > + * ice_is_signing_seg_at_idx - determine if segment is a signing segment > + * @pkg_hdr: pointer to package header > + * @idx: segment index > + */ > +static bool ice_is_signing_seg_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 i= dx) > +{ > + struct ice_generic_seg_hdr *seg; > + bool retval =3D false; > + > + seg =3D ice_get_pkg_seg_by_idx(pkg_hdr, idx); > + if (seg) > + retval =3D LE32_TO_CPU(seg->seg_type) =3D=3D > SEGMENT_TYPE_SIGNING; > + > + return retval; > +} > + > +/** > + * ice_is_signing_seg_type_at_idx > + * @pkg_hdr: pointer to package header > + * @idx: segment index > + * @seg_id: segment id that is expected > + * @sign_type: signing type > + * > + * Determine if a segment is a signing segment of the correct type > + */ > +static bool > +ice_is_signing_seg_type_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx, > + u32 seg_id, u32 sign_type) > +{ > + bool result =3D false; > + > + if (ice_is_signing_seg_at_idx(pkg_hdr, idx)) { > + struct ice_sign_seg *seg; > + > + seg =3D (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, > + idx); > + if (seg && LE32_TO_CPU(seg->seg_id) =3D=3D seg_id && > + LE32_TO_CPU(seg->sign_type) =3D=3D sign_type) > + result =3D true; > + } > + > + return result; > +} > + > +/** > + * ice_update_pkg_no_lock > + * @hw: pointer to the hardware structure > + * @bufs: pointer to an array of buffers > + * @count: the number of buffers in the array > + */ > +enum ice_status > +ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 > count) > +{ > + enum ice_status status =3D ICE_SUCCESS; > + u32 i; > + > + for (i =3D 0; i < count; i++) { > + struct ice_buf_hdr *bh =3D (struct ice_buf_hdr *)(bufs + i); > + bool last =3D ((i + 1) =3D=3D count); > + u32 offset, info; > + > + status =3D ice_aq_update_pkg(hw, bh, LE16_TO_CPU(bh- > >data_end), > + last, &offset, &info, NULL); > + > + if (status) { > + ice_debug(hw, ICE_DBG_PKG, "Update pkg failed: > err %d off %d inf %d\n", > + status, offset, info); > + break; > + } > + } > + > + return status; > +} > + > +/** > + * ice_update_pkg > + * @hw: pointer to the hardware structure > + * @bufs: pointer to an array of buffers > + * @count: the number of buffers in the array > + * > + * Obtains change lock and updates package. 
> + */ > +enum ice_status > +ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) > +{ > + enum ice_status status; > + > + status =3D ice_acquire_change_lock(hw, ICE_RES_WRITE); > + if (status) > + return status; > + > + status =3D ice_update_pkg_no_lock(hw, bufs, count); > + > + ice_release_change_lock(hw); > + > + return status; > +} > + > +static enum ice_ddp_state > +ice_map_aq_err_to_ddp_state(enum ice_aq_err aq_err) > +{ > + switch (aq_err) { > + case ICE_AQ_RC_ENOSEC: > + return ICE_DDP_PKG_NO_SEC_MANIFEST; > + case ICE_AQ_RC_EBADSIG: > + return ICE_DDP_PKG_FILE_SIGNATURE_INVALID; > + case ICE_AQ_RC_ESVN: > + return ICE_DDP_PKG_SECURE_VERSION_NBR_TOO_LOW; > + case ICE_AQ_RC_EBADMAN: > + return ICE_DDP_PKG_MANIFEST_INVALID; > + case ICE_AQ_RC_EBADBUF: > + return ICE_DDP_PKG_BUFFER_INVALID; > + default: > + return ICE_DDP_PKG_ERR; > + } > +} > + > +/** > + * ice_is_buffer_metadata - determine if package buffer is a metadata > buffer > + * @buf: pointer to buffer header > + */ > +static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf) > +{ > + bool metadata =3D false; > + > + if (LE32_TO_CPU(buf->section_entry[0].type) & ICE_METADATA_BUF) > + metadata =3D true; > + > + return metadata; > +} > + > +/** > + * ice_is_last_download_buffer > + * @buf: pointer to current buffer header > + * @idx: index of the buffer in the current sequence > + * @count: the buffer count in the current sequence > + * > + * Note: this routine should only be called if the buffer is not the las= t > buffer > + */ > +static bool > +ice_is_last_download_buffer(struct ice_buf_hdr *buf, u32 idx, u32 count) > +{ > + bool last =3D ((idx + 1) =3D=3D count); > + > + /* A set metadata flag in the next buffer will signal that the current > + * buffer will be the last buffer downloaded > + */ > + if (!last) { > + struct ice_buf *next_buf =3D ((struct ice_buf *)buf) + 1; > + > + last =3D ice_is_buffer_metadata((struct ice_buf_hdr > *)next_buf); > + } > + > + return last; > +} > + > +/** > + * ice_dwnld_cfg_bufs_no_lock > + * @hw: pointer to the hardware structure > + * @bufs: pointer to an array of buffers > + * @start: buffer index of first buffer to download > + * @count: the number of buffers to download > + * @indicate_last: if true, then set last buffer flag on last buffer dow= nload > + * > + * Downloads package configuration buffers to the firmware. Metadata > buffers > + * are skipped, and the first metadata buffer found indicates that the r= est > + * of the buffers are all metadata buffers. > + */ > +static enum ice_ddp_state > +ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 > start, > + u32 count, bool indicate_last) > +{ > + enum ice_ddp_state state =3D ICE_DDP_PKG_SUCCESS; > + struct ice_buf_hdr *bh; > + enum ice_aq_err err; > + u32 offset, info, i; > + > + if (!bufs || !count) > + return ICE_DDP_PKG_ERR; > + > + /* If the first buffer's first section has its metadata bit set > + * then there are no buffers to be downloaded, and the operation is > + * considered a success. 
> + */ > + bh =3D (struct ice_buf_hdr *)(bufs + start); > + if (LE32_TO_CPU(bh->section_entry[0].type) & ICE_METADATA_BUF) > + return ICE_DDP_PKG_SUCCESS; > + > + for (i =3D 0; i < count; i++) { > + enum ice_status status; > + bool last =3D false; > + > + bh =3D (struct ice_buf_hdr *)(bufs + start + i); > + > + if (indicate_last) > + last =3D ice_is_last_download_buffer(bh, i, count); > + > + status =3D ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, > last, > + &offset, &info, NULL); > + > + /* Save AQ status from download package */ > + if (status) { > + ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: > err %d off %d inf %d\n", > + status, offset, info); > + err =3D hw->adminq.sq_last_status; > + state =3D ice_map_aq_err_to_ddp_state(err); > + break; > + } > + > + if (last) > + break; > + } > + > + return state; > +} > + > +/** > + * ice_aq_get_pkg_info_list > + * @hw: pointer to the hardware structure > + * @pkg_info: the buffer which will receive the information list > + * @buf_size: the size of the pkg_info information buffer > + * @cd: pointer to command details structure or NULL > + * > + * Get Package Info List (0x0C43) > + */ > +static enum ice_status > +ice_aq_get_pkg_info_list(struct ice_hw *hw, > + struct ice_aqc_get_pkg_info_resp *pkg_info, > + u16 buf_size, struct ice_sq_cd *cd) > +{ > + struct ice_aq_desc desc; > + > + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list); > + > + return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd); > +} > + > +/** > + * ice_has_signing_seg - determine if package has a signing segment > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to the driver's package hdr > + */ > +static bool ice_has_signing_seg(struct ice_hw *hw, struct ice_pkg_hdr > *pkg_hdr) > +{ > + struct ice_generic_seg_hdr *seg_hdr; > + > + seg_hdr =3D (struct ice_generic_seg_hdr *) > + ice_find_seg_in_pkg(hw, SEGMENT_TYPE_SIGNING, pkg_hdr); > + > + return seg_hdr ? 
true : false; > +} > + > +/** > + * ice_get_pkg_segment_id - get correct package segment id, based on > device > + * @mac_type: MAC type of the device > + */ > +static u32 ice_get_pkg_segment_id(enum ice_mac_type mac_type) > +{ > + u32 seg_id; > + > + switch (mac_type) { > + case ICE_MAC_GENERIC: > + case ICE_MAC_GENERIC_3K: > + default: > + seg_id =3D SEGMENT_TYPE_ICE_E810; > + break; > + } > + > + return seg_id; > +} > + > +/** > + * ice_get_pkg_sign_type - get package segment sign type, based on devic= e > + * @mac_type: MAC type of the device > + */ > +static u32 ice_get_pkg_sign_type(enum ice_mac_type mac_type) > +{ > + u32 sign_type; > + > + switch (mac_type) { > + case ICE_MAC_GENERIC_3K: > + sign_type =3D SEGMENT_SIGN_TYPE_RSA3K; > + break; > + case ICE_MAC_GENERIC: > + default: > + sign_type =3D SEGMENT_SIGN_TYPE_RSA2K; > + break; > + } > + > + return sign_type; > +} > + > +/** > + * ice_get_signing_req - get correct package requirements, based on devi= ce > + * @hw: pointer to the hardware structure > + */ > +static void ice_get_signing_req(struct ice_hw *hw) > +{ > + hw->pkg_seg_id =3D ice_get_pkg_segment_id(hw->mac_type); > + hw->pkg_sign_type =3D ice_get_pkg_sign_type(hw->mac_type); > +} > + > +/** > + * ice_download_pkg_sig_seg - download a signature segment > + * @hw: pointer to the hardware structure > + * @seg: pointer to signature segment > + */ > +static enum ice_ddp_state > +ice_download_pkg_sig_seg(struct ice_hw *hw, struct ice_sign_seg *seg) > +{ > + enum ice_ddp_state state; > + > + state =3D ice_dwnld_cfg_bufs_no_lock(hw, seg->buf_tbl.buf_array, 0, > + LE32_TO_CPU(seg- > >buf_tbl.buf_count), > + false); > + > + return state; > +} > + > +/** > + * ice_download_pkg_config_seg - download a config segment > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to package header > + * @idx: segment index > + * @start: starting buffer > + * @count: buffer count > + * > + * Note: idx must reference a ICE segment > + */ > +static enum ice_ddp_state > +ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr > *pkg_hdr, > + u32 idx, u32 start, u32 count) > +{ > + struct ice_buf_table *bufs; > + enum ice_ddp_state state; > + struct ice_seg *seg; > + u32 buf_count; > + > + seg =3D (struct ice_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx); > + if (!seg) > + return ICE_DDP_PKG_ERR; > + > + bufs =3D ice_find_buf_table(seg); > + buf_count =3D LE32_TO_CPU(bufs->buf_count); > + > + if (start >=3D buf_count || start + count > buf_count) > + return ICE_DDP_PKG_ERR; > + > + state =3D ice_dwnld_cfg_bufs_no_lock(hw, bufs->buf_array, start, > count, > + true); > + > + return state; > +} > + > +/** > + * ice_dwnld_sign_and_cfg_segs - download a signing segment and config > segment > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to package header > + * @idx: segment index (must be a signature segment) > + * > + * Note: idx must reference a signature segment > + */ > +static enum ice_ddp_state > +ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr > *pkg_hdr, > + u32 idx) > +{ > + enum ice_ddp_state state; > + struct ice_sign_seg *seg; > + u32 conf_idx; > + u32 start; > + u32 count; > + > + seg =3D (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx); > + if (!seg) { > + state =3D ICE_DDP_PKG_ERR; > + goto exit; > + } > + > + conf_idx =3D LE32_TO_CPU(seg->signed_seg_idx); > + start =3D LE32_TO_CPU(seg->signed_buf_start); > + count =3D LE32_TO_CPU(seg->signed_buf_count); > + > + state =3D ice_download_pkg_sig_seg(hw, seg); > + if 
(state) > + goto exit; > + > + state =3D ice_download_pkg_config_seg(hw, pkg_hdr, conf_idx, start, > + count); > + > +exit: > + return state; > +} > + > +/** > + * ice_match_signing_seg - determine if a matching signing segment exist= s > + * @pkg_hdr: pointer to package header > + * @seg_id: segment id that is expected > + * @sign_type: signing type > + */ > +static bool > +ice_match_signing_seg(struct ice_pkg_hdr *pkg_hdr, u32 seg_id, u32 > sign_type) > +{ > + bool match =3D false; > + u32 i; > + > + for (i =3D 0; i < LE32_TO_CPU(pkg_hdr->seg_count); i++) { > + if (ice_is_signing_seg_type_at_idx(pkg_hdr, i, seg_id, > + sign_type)) { > + match =3D true; > + break; > + } > + } > + > + return match; > +} > + > +/** > + * ice_post_dwnld_pkg_actions - perform post download package actions > + * @hw: pointer to the hardware structure > + */ > +static enum ice_ddp_state > +ice_post_dwnld_pkg_actions(struct ice_hw *hw) > +{ > + enum ice_ddp_state state =3D ICE_DDP_PKG_SUCCESS; > + enum ice_status status; > + > + status =3D ice_set_vlan_mode(hw); > + if (status) { > + ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: > err %d\n", > + status); > + state =3D ICE_DDP_PKG_ERR; > + } > + > + return state; > +} > + > +/** > + * ice_download_pkg_with_sig_seg - download package using signature > segments > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to package header > + */ > +static enum ice_ddp_state > +ice_download_pkg_with_sig_seg(struct ice_hw *hw, struct ice_pkg_hdr > *pkg_hdr) > +{ > + enum ice_aq_err aq_err =3D hw->adminq.sq_last_status; > + enum ice_ddp_state state =3D ICE_DDP_PKG_ERR; > + enum ice_status status; > + u32 i; > + > + ice_debug(hw, ICE_DBG_INIT, "Segment ID %d\n", hw->pkg_seg_id); > + ice_debug(hw, ICE_DBG_INIT, "Signature type %d\n", hw- > >pkg_sign_type); > + > + status =3D ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); > + if (status) { > + if (status =3D=3D ICE_ERR_AQ_NO_WORK) > + state =3D ICE_DDP_PKG_ALREADY_LOADED; > + else > + state =3D ice_map_aq_err_to_ddp_state(aq_err); > + return state; > + } > + > + for (i =3D 0; i < LE32_TO_CPU(pkg_hdr->seg_count); i++) { > + if (!ice_is_signing_seg_type_at_idx(pkg_hdr, i, hw- > >pkg_seg_id, > + hw->pkg_sign_type)) > + continue; > + > + state =3D ice_dwnld_sign_and_cfg_segs(hw, pkg_hdr, i); > + if (state) > + break; > + } > + > + if (!state) > + state =3D ice_post_dwnld_pkg_actions(hw); > + > + ice_release_global_cfg_lock(hw); > + > + return state; > +} > + > +/** > + * ice_dwnld_cfg_bufs > + * @hw: pointer to the hardware structure > + * @bufs: pointer to an array of buffers > + * @count: the number of buffers in the array > + * > + * Obtains global config lock and downloads the package configuration > buffers > + * to the firmware. > + */ > +static enum ice_ddp_state > +ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) > +{ > + enum ice_ddp_state state =3D ICE_DDP_PKG_SUCCESS; > + enum ice_status status; > + struct ice_buf_hdr *bh; > + > + if (!bufs || !count) > + return ICE_DDP_PKG_ERR; > + > + /* If the first buffer's first section has its metadata bit set > + * then there are no buffers to be downloaded, and the operation is > + * considered a success. 
> + */ > + bh =3D (struct ice_buf_hdr *)bufs; > + if (LE32_TO_CPU(bh->section_entry[0].type) & ICE_METADATA_BUF) > + return ICE_DDP_PKG_SUCCESS; > + > + status =3D ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); > + if (status) { > + if (status =3D=3D ICE_ERR_AQ_NO_WORK) > + return ICE_DDP_PKG_ALREADY_LOADED; > + return ice_map_aq_err_to_ddp_state(hw- > >adminq.sq_last_status); > + } > + > + state =3D ice_dwnld_cfg_bufs_no_lock(hw, bufs, 0, count, true); > + if (!state) > + state =3D ice_post_dwnld_pkg_actions(hw); > + > + ice_release_global_cfg_lock(hw); > + > + return state; > +} > + > +/** > + * ice_download_pkg_without_sig_seg > + * @hw: pointer to the hardware structure > + * @ice_seg: pointer to the segment of the package to be downloaded > + * > + * Handles the download of a complete package without signature > segment. > + */ > +static enum ice_ddp_state > +ice_download_pkg_without_sig_seg(struct ice_hw *hw, struct ice_seg > *ice_seg) > +{ > + struct ice_buf_table *ice_buf_tbl; > + enum ice_ddp_state state; > + > + ice_debug(hw, ICE_DBG_PKG, "Segment format > version: %d.%d.%d.%d\n", > + ice_seg->hdr.seg_format_ver.major, > + ice_seg->hdr.seg_format_ver.minor, > + ice_seg->hdr.seg_format_ver.update, > + ice_seg->hdr.seg_format_ver.draft); > + > + ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n", > + LE32_TO_CPU(ice_seg->hdr.seg_type), > + LE32_TO_CPU(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id); > + > + ice_buf_tbl =3D ice_find_buf_table(ice_seg); > + > + ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", > + LE32_TO_CPU(ice_buf_tbl->buf_count)); > + > + state =3D ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, > + LE32_TO_CPU(ice_buf_tbl->buf_count)); > + > + return state; > +} > + > +/** > + * ice_download_pkg > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to package header > + * @ice_seg: pointer to the segment of the package to be downloaded > + * > + * Handles the download of a complete package. > + */ > +static enum ice_ddp_state > +ice_download_pkg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr, > + struct ice_seg *ice_seg) > +{ > + enum ice_ddp_state state; > + > + if (hw->pkg_has_signing_seg) > + state =3D ice_download_pkg_with_sig_seg(hw, pkg_hdr); > + else > + state =3D ice_download_pkg_without_sig_seg(hw, ice_seg); > + > + ice_post_pkg_dwnld_vlan_mode_cfg(hw); > + > + return state; > +} > + > +/** > + * ice_init_pkg_info > + * @hw: pointer to the hardware structure > + * @pkg_hdr: pointer to the driver's package hdr > + * > + * Saves off the package details into the HW structure. 
> + */ > +static enum ice_ddp_state > +ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) > +{ > + struct ice_generic_seg_hdr *seg_hdr; > + > + if (!pkg_hdr) > + return ICE_DDP_PKG_ERR; > + > + hw->pkg_has_signing_seg =3D ice_has_signing_seg(hw, pkg_hdr); > + ice_get_signing_req(hw); > + > + ice_debug(hw, ICE_DBG_INIT, "Pkg using segment id: 0x%08X\n", > + hw->pkg_seg_id); > + > + seg_hdr =3D (struct ice_generic_seg_hdr *) > + ice_find_seg_in_pkg(hw, hw->pkg_seg_id, pkg_hdr); > + if (seg_hdr) { > + struct ice_meta_sect *meta; > + struct ice_pkg_enum state; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + /* Get package information from the Metadata Section */ > + meta =3D (struct ice_meta_sect *) > + ice_pkg_enum_section((struct ice_seg *)seg_hdr, > &state, > + ICE_SID_METADATA); > + if (!meta) { > + ice_debug(hw, ICE_DBG_INIT, "Did not find ice > metadata section in package\n"); > + return ICE_DDP_PKG_INVALID_FILE; > + } > + > + hw->pkg_ver =3D meta->ver; > + ice_memcpy(hw->pkg_name, meta->name, sizeof(meta- > >name), > + ICE_NONDMA_TO_NONDMA); > + > + ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n", > + meta->ver.major, meta->ver.minor, meta- > >ver.update, > + meta->ver.draft, meta->name); > + > + hw->ice_seg_fmt_ver =3D seg_hdr->seg_format_ver; > + ice_memcpy(hw->ice_seg_id, seg_hdr->seg_id, > + sizeof(hw->ice_seg_id), > ICE_NONDMA_TO_NONDMA); > + > + ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n", > + seg_hdr->seg_format_ver.major, > + seg_hdr->seg_format_ver.minor, > + seg_hdr->seg_format_ver.update, > + seg_hdr->seg_format_ver.draft, > + seg_hdr->seg_id); > + } else { > + ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in > driver package\n"); > + return ICE_DDP_PKG_INVALID_FILE; > + } > + > + return ICE_DDP_PKG_SUCCESS; > +} > + > +/** > + * ice_get_pkg_info > + * @hw: pointer to the hardware structure > + * > + * Store details of the package currently loaded in HW into the HW > structure. 
> + */ > +enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw) > +{ > + enum ice_ddp_state state =3D ICE_DDP_PKG_SUCCESS; > + struct ice_aqc_get_pkg_info_resp *pkg_info; > + u16 size; > + u32 i; > + > + size =3D ice_struct_size(pkg_info, pkg_info, ICE_PKG_CNT); > + pkg_info =3D (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size); > + if (!pkg_info) > + return ICE_DDP_PKG_ERR; > + > + if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) { > + state =3D ICE_DDP_PKG_ERR; > + goto init_pkg_free_alloc; > + } > + > + for (i =3D 0; i < LE32_TO_CPU(pkg_info->count); i++) { > +#define ICE_PKG_FLAG_COUNT 4 > + char flags[ICE_PKG_FLAG_COUNT + 1] =3D { 0 }; > + u8 place =3D 0; > + > + if (pkg_info->pkg_info[i].is_active) { > + flags[place++] =3D 'A'; > + hw->active_pkg_ver =3D pkg_info->pkg_info[i].ver; > + hw->active_track_id =3D > + LE32_TO_CPU(pkg_info->pkg_info[i].track_id); > + ice_memcpy(hw->active_pkg_name, > + pkg_info->pkg_info[i].name, > + sizeof(pkg_info->pkg_info[i].name), > + ICE_NONDMA_TO_NONDMA); > + hw->active_pkg_in_nvm =3D pkg_info- > >pkg_info[i].is_in_nvm; > + } > + if (pkg_info->pkg_info[i].is_active_at_boot) > + flags[place++] =3D 'B'; > + if (pkg_info->pkg_info[i].is_modified) > + flags[place++] =3D 'M'; > + if (pkg_info->pkg_info[i].is_in_nvm) > + flags[place++] =3D 'N'; > + > + ice_debug(hw, ICE_DBG_PKG, > "Pkg[%d]: %d.%d.%d.%d,%s,%s\n", > + i, pkg_info->pkg_info[i].ver.major, > + pkg_info->pkg_info[i].ver.minor, > + pkg_info->pkg_info[i].ver.update, > + pkg_info->pkg_info[i].ver.draft, > + pkg_info->pkg_info[i].name, flags); > + } > + > +init_pkg_free_alloc: > + ice_free(hw, pkg_info); > + > + return state; > +} > + > +/** > + * ice_label_enum_handler > + * @sect_type: section type > + * @section: pointer to section > + * @index: index of the label entry to be returned > + * @offset: pointer to receive absolute offset, always zero for label > sections > + * > + * This is a callback function that can be passed to ice_pkg_enum_entry. > + * Handles enumeration of individual label entries. > + */ > +static void * > +ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, > u32 index, > + u32 *offset) > +{ > + struct ice_label_section *labels; > + > + if (!section) > + return NULL; > + > + if (index > ICE_MAX_LABELS_IN_BUF) > + return NULL; > + > + if (offset) > + *offset =3D 0; > + > + labels =3D (struct ice_label_section *)section; > + if (index >=3D LE16_TO_CPU(labels->count)) > + return NULL; > + > + return labels->label + index; > +} > + > +/** > + * ice_enum_labels > + * @ice_seg: pointer to the ice segment (NULL on subsequent calls) > + * @type: the section type that will contain the label (0 on subsequent > calls) > + * @state: ice_pkg_enum structure that will hold the state of the > enumeration > + * @value: pointer to a value that will return the label's value if foun= d > + * > + * Enumerates a list of labels in the package. The caller will call > + * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then ca= ll > + * ice_enum_labels(NULL, 0, ...) to continue. When the function returns = a > NULL > + * the end of the list has been reached. 
> + */ > +static char * > +ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum > *state, > + u16 *value) > +{ > + struct ice_label *label; > + > + /* Check for valid label section on first call */ > + if (type && !(type >=3D ICE_SID_LBL_FIRST && type <=3D > ICE_SID_LBL_LAST)) > + return NULL; > + > + label =3D (struct ice_label *)ice_pkg_enum_entry(ice_seg, state, type, > + NULL, > + > ice_label_enum_handler); > + if (!label) > + return NULL; > + > + *value =3D LE16_TO_CPU(label->value); > + return label->name; > +} > + > +/** > + * ice_verify_pkg - verify package > + * @pkg: pointer to the package buffer > + * @len: size of the package buffer > + * > + * Verifies various attributes of the package file, including length, fo= rmat > + * version, and the requirement of at least one segment. > + */ > +enum ice_ddp_state ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len) > +{ > + u32 seg_count; > + u32 i; > + > + if (len < ice_struct_size(pkg, seg_offset, 1)) > + return ICE_DDP_PKG_INVALID_FILE; > + > + if (pkg->pkg_format_ver.major !=3D ICE_PKG_FMT_VER_MAJ || > + pkg->pkg_format_ver.minor !=3D ICE_PKG_FMT_VER_MNR || > + pkg->pkg_format_ver.update !=3D ICE_PKG_FMT_VER_UPD || > + pkg->pkg_format_ver.draft !=3D ICE_PKG_FMT_VER_DFT) > + return ICE_DDP_PKG_INVALID_FILE; > + > + /* pkg must have at least one segment */ > + seg_count =3D LE32_TO_CPU(pkg->seg_count); > + if (seg_count < 1) > + return ICE_DDP_PKG_INVALID_FILE; > + > + /* make sure segment array fits in package length */ > + if (len < ice_struct_size(pkg, seg_offset, seg_count)) > + return ICE_DDP_PKG_INVALID_FILE; > + > + /* all segments must fit within length */ > + for (i =3D 0; i < seg_count; i++) { > + u32 off =3D LE32_TO_CPU(pkg->seg_offset[i]); > + struct ice_generic_seg_hdr *seg; > + > + /* segment header must fit */ > + if (len < off + sizeof(*seg)) > + return ICE_DDP_PKG_INVALID_FILE; > + > + seg =3D (struct ice_generic_seg_hdr *)((u8 *)pkg + off); > + > + /* segment body must fit */ > + if (len < off + LE32_TO_CPU(seg->seg_size)) > + return ICE_DDP_PKG_INVALID_FILE; > + } > + > + return ICE_DDP_PKG_SUCCESS; > +} > + > +/** > + * ice_free_seg - free package segment pointer > + * @hw: pointer to the hardware structure > + * > + * Frees the package segment pointer in the proper manner, depending on > if the > + * segment was allocated or just the passed in pointer was stored. > + */ > +void ice_free_seg(struct ice_hw *hw) > +{ > + if (hw->pkg_copy) { > + ice_free(hw, hw->pkg_copy); > + hw->pkg_copy =3D NULL; > + hw->pkg_size =3D 0; > + } > + hw->seg =3D NULL; > +} > + > +/** > + * ice_chk_pkg_version - check package version for compatibility with > driver > + * @pkg_ver: pointer to a version structure to check > + * > + * Check to make sure that the package about to be downloaded is > compatible with > + * the driver. To be compatible, the major and minor components of the > package > + * version must match our ICE_PKG_SUPP_VER_MAJ and > ICE_PKG_SUPP_VER_MNR > + * definitions. 
> + */ > +static enum ice_ddp_state ice_chk_pkg_version(struct ice_pkg_ver > *pkg_ver) > +{ > + if (pkg_ver->major > ICE_PKG_SUPP_VER_MAJ || > + (pkg_ver->major =3D=3D ICE_PKG_SUPP_VER_MAJ && > + pkg_ver->minor > ICE_PKG_SUPP_VER_MNR)) > + return ICE_DDP_PKG_FILE_VERSION_TOO_HIGH; > + else if (pkg_ver->major < ICE_PKG_SUPP_VER_MAJ || > + (pkg_ver->major =3D=3D ICE_PKG_SUPP_VER_MAJ && > + pkg_ver->minor < ICE_PKG_SUPP_VER_MNR)) > + return ICE_DDP_PKG_FILE_VERSION_TOO_LOW; > + > + return ICE_DDP_PKG_SUCCESS; > +} > + > +/** > + * ice_chk_pkg_compat > + * @hw: pointer to the hardware structure > + * @ospkg: pointer to the package hdr > + * @seg: pointer to the package segment hdr > + * > + * This function checks the package version compatibility with driver an= d > NVM > + */ > +static enum ice_ddp_state > +ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg, > + struct ice_seg **seg) > +{ > + struct ice_aqc_get_pkg_info_resp *pkg; > + enum ice_ddp_state state; > + u16 size; > + u32 i; > + > + /* Check package version compatibility */ > + state =3D ice_chk_pkg_version(&hw->pkg_ver); > + if (state) { > + ice_debug(hw, ICE_DBG_INIT, "Package version check > failed.\n"); > + return state; > + } > + > + /* find ICE segment in given package */ > + *seg =3D (struct ice_seg *)ice_find_seg_in_pkg(hw, hw->pkg_seg_id, > + ospkg); > + if (!*seg) { > + ice_debug(hw, ICE_DBG_INIT, "no ice segment in > package.\n"); > + return ICE_DDP_PKG_INVALID_FILE; > + } > + > + /* Check if FW is compatible with the OS package */ > + size =3D ice_struct_size(pkg, pkg_info, ICE_PKG_CNT); > + pkg =3D (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size); > + if (!pkg) > + return ICE_DDP_PKG_ERR; > + > + if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) { > + state =3D ICE_DDP_PKG_ERR; > + goto fw_ddp_compat_free_alloc; > + } > + > + for (i =3D 0; i < LE32_TO_CPU(pkg->count); i++) { > + /* loop till we find the NVM package */ > + if (!pkg->pkg_info[i].is_in_nvm) > + continue; > + if ((*seg)->hdr.seg_format_ver.major !=3D > + pkg->pkg_info[i].ver.major || > + (*seg)->hdr.seg_format_ver.minor > > + pkg->pkg_info[i].ver.minor) { > + state =3D ICE_DDP_PKG_FW_MISMATCH; > + ice_debug(hw, ICE_DBG_INIT, "OS package is not > compatible with NVM.\n"); > + } > + /* done processing NVM package so break */ > + break; > + } > +fw_ddp_compat_free_alloc: > + ice_free(hw, pkg); > + return state; > +} > + > +/** > + * ice_sw_fv_handler > + * @sect_type: section type > + * @section: pointer to section > + * @index: index of the field vector entry to be returned > + * @offset: ptr to variable that receives the offset in the field vector= table > + * > + * This is a callback function that can be passed to ice_pkg_enum_entry. > + * This function treats the given section as of type ice_sw_fv_section a= nd > + * enumerates offset field. "offset" is an index into the field vector t= able. > + */ > +static void * > +ice_sw_fv_handler(u32 sect_type, void *section, u32 index, u32 *offset) > +{ > + struct ice_sw_fv_section *fv_section =3D > + (struct ice_sw_fv_section *)section; > + > + if (!section || sect_type !=3D ICE_SID_FLD_VEC_SW) > + return NULL; > + if (index >=3D LE16_TO_CPU(fv_section->count)) > + return NULL; > + if (offset) > + /* "index" passed in to this function is relative to a given > + * 4k block. 
To get to the true index into the field vector > + * table need to add the relative index to the base_offset > + * field of this section > + */ > + *offset =3D LE16_TO_CPU(fv_section->base_offset) + index; > + return fv_section->fv + index; > +} > + > +/** > + * ice_get_prof_index_max - get the max profile index for used profile > + * @hw: pointer to the HW struct > + * > + * Calling this function will get the max profile index for used profile > + * and store the index number in struct ice_switch_info *switch_info > + * in hw for following use. > + */ > +static int ice_get_prof_index_max(struct ice_hw *hw) > +{ > + u16 prof_index =3D 0, j, max_prof_index =3D 0; > + struct ice_pkg_enum state; > + struct ice_seg *ice_seg; > + bool flag =3D false; > + struct ice_fv *fv; > + u32 offset; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + if (!hw->seg) > + return ICE_ERR_PARAM; > + > + ice_seg =3D hw->seg; > + > + do { > + fv =3D (struct ice_fv *) > + ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > + &offset, ice_sw_fv_handler); > + if (!fv) > + break; > + ice_seg =3D NULL; > + > + /* in the profile that not be used, the prot_id is set to 0xff > + * and the off is set to 0x1ff for all the field vectors. > + */ > + for (j =3D 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) > + if (fv->ew[j].prot_id !=3D ICE_PROT_INVALID || > + fv->ew[j].off !=3D ICE_FV_OFFSET_INVAL) > + flag =3D true; > + if (flag && prof_index > max_prof_index) > + max_prof_index =3D prof_index; > + > + prof_index++; > + flag =3D false; > + } while (fv); > + > + hw->switch_info->max_used_prof_index =3D max_prof_index; > + > + return ICE_SUCCESS; > +} > + > +/** > + * ice_get_ddp_pkg_state - get DDP pkg state after download > + * @hw: pointer to the HW struct > + * @already_loaded: indicates if pkg was already loaded onto the device > + * > + */ > +static enum ice_ddp_state > +ice_get_ddp_pkg_state(struct ice_hw *hw, bool already_loaded) > +{ > + if (hw->pkg_ver.major =3D=3D hw->active_pkg_ver.major && > + hw->pkg_ver.minor =3D=3D hw->active_pkg_ver.minor && > + hw->pkg_ver.update =3D=3D hw->active_pkg_ver.update && > + hw->pkg_ver.draft =3D=3D hw->active_pkg_ver.draft && > + !memcmp(hw->pkg_name, hw->active_pkg_name, sizeof(hw- > >pkg_name))) { > + if (already_loaded) > + return > ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED; > + else > + return ICE_DDP_PKG_SUCCESS; > + } else if (hw->active_pkg_ver.major !=3D ICE_PKG_SUPP_VER_MAJ || > + hw->active_pkg_ver.minor !=3D ICE_PKG_SUPP_VER_MNR) { > + return ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED; > + } else if (hw->active_pkg_ver.major =3D=3D ICE_PKG_SUPP_VER_MAJ && > + hw->active_pkg_ver.minor =3D=3D ICE_PKG_SUPP_VER_MNR) { > + return ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED; > + } else { > + return ICE_DDP_PKG_ERR; > + } > +} > + > +/** > + * ice_init_pkg_regs - initialize additional package registers > + * @hw: pointer to the hardware structure > + */ > +static void ice_init_pkg_regs(struct ice_hw *hw) > +{ > +#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF > +#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF > +#define ICE_SW_BLK_IDX 0 > + if (hw->dcf_enabled) > + return; > + > + /* setup Switch block input mask, which is 48-bits in two parts */ > + wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), > ICE_SW_BLK_INP_MASK_L); > + wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), > ICE_SW_BLK_INP_MASK_H); > +} > + > +/** > + * ice_hw_ptype_ena - check if the PTYPE is enabled or not > + * @hw: pointer to the HW structure > + * @ptype: the hardware PTYPE > + */ > +bool 
ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype) > +{ > + return ptype < ICE_FLOW_PTYPE_MAX && > + ice_is_bit_set(hw->hw_ptype, ptype); > +} > + > +/** > + * ice_marker_ptype_tcam_handler > + * @sect_type: section type > + * @section: pointer to section > + * @index: index of the Marker PType TCAM entry to be returned > + * @offset: pointer to receive absolute offset, always 0 for ptype TCAM > sections > + * > + * This is a callback function that can be passed to ice_pkg_enum_entry. > + * Handles enumeration of individual Marker PType TCAM entries. > + */ > +static void * > +ice_marker_ptype_tcam_handler(u32 sect_type, void *section, u32 index, > + u32 *offset) > +{ > + struct ice_marker_ptype_tcam_section *marker_ptype; > + > + if (!section) > + return NULL; > + > + if (sect_type !=3D ICE_SID_RXPARSER_MARKER_PTYPE) > + return NULL; > + > + if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF) > + return NULL; > + > + if (offset) > + *offset =3D 0; > + > + marker_ptype =3D (struct ice_marker_ptype_tcam_section *)section; > + if (index >=3D LE16_TO_CPU(marker_ptype->count)) > + return NULL; > + > + return marker_ptype->tcam + index; > +} > + > +/** > + * ice_fill_hw_ptype - fill the enabled PTYPE bit information > + * @hw: pointer to the HW structure > + */ > +static void > +ice_fill_hw_ptype(struct ice_hw *hw) > +{ > + struct ice_marker_ptype_tcam_entry *tcam; > + struct ice_seg *seg =3D hw->seg; > + struct ice_pkg_enum state; > + > + ice_zero_bitmap(hw->hw_ptype, ICE_FLOW_PTYPE_MAX); > + if (!seg) > + return; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + do { > + tcam =3D (struct ice_marker_ptype_tcam_entry *) > + ice_pkg_enum_entry(seg, &state, > + ICE_SID_RXPARSER_MARKER_PTYPE, > NULL, > + ice_marker_ptype_tcam_handler); > + if (tcam && > + LE16_TO_CPU(tcam->addr) < > ICE_MARKER_PTYPE_TCAM_ADDR_MAX && > + LE16_TO_CPU(tcam->ptype) < ICE_FLOW_PTYPE_MAX) > + ice_set_bit(LE16_TO_CPU(tcam->ptype), hw- > >hw_ptype); > + > + seg =3D NULL; > + } while (tcam); > +} > + > +/** > + * ice_init_pkg - initialize/download package > + * @hw: pointer to the hardware structure > + * @buf: pointer to the package buffer > + * @len: size of the package buffer > + * > + * This function initializes a package. The package contains HW tables > + * required to do packet processing. First, the function extracts packag= e > + * information such as version. Then it finds the ice configuration segm= ent > + * within the package; this function then saves a copy of the segment > pointer > + * within the supplied package buffer. Next, the function will cache any > hints > + * from the package, followed by downloading the package itself. Note, > that if > + * a previous PF driver has already downloaded the package successfully, > then > + * the current driver will not have to download the package again. > + * > + * The local package contents will be used to query default behavior and > to > + * update specific sections of the HW's version of the package (e.g. to > update > + * the parse graph to understand new protocols). > + * > + * This function stores a pointer to the package buffer memory, and it i= s > + * expected that the supplied buffer will not be freed immediately. If t= he > + * package buffer needs to be freed, such as when read from a file, use > + * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in= this > + * case. 
> + */ > +enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) > +{ > + bool already_loaded =3D false; > + enum ice_ddp_state state; > + struct ice_pkg_hdr *pkg; > + struct ice_seg *seg; > + > + if (!buf || !len) > + return ICE_DDP_PKG_ERR; > + > + pkg =3D (struct ice_pkg_hdr *)buf; > + state =3D ice_verify_pkg(pkg, len); > + if (state) { > + ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg > (err: %d)\n", > + state); > + return state; > + } > + > + /* initialize package info */ > + state =3D ice_init_pkg_info(hw, pkg); > + if (state) > + return state; > + > + /* For packages with signing segments, must be a matching segment > */ > + if (hw->pkg_has_signing_seg) > + if (!ice_match_signing_seg(pkg, hw->pkg_seg_id, > + hw->pkg_sign_type)) > + return ICE_DDP_PKG_ERR; > + > + /* before downloading the package, check package version for > + * compatibility with driver > + */ > + state =3D ice_chk_pkg_compat(hw, pkg, &seg); > + if (state) > + return state; > + > + /* initialize package hints and then download package */ > + ice_init_pkg_hints(hw, seg); > + state =3D ice_download_pkg(hw, pkg, seg); > + > + if (state =3D=3D ICE_DDP_PKG_ALREADY_LOADED) { > + ice_debug(hw, ICE_DBG_INIT, "package previously loaded - > no work.\n"); > + already_loaded =3D true; > + } > + > + /* Get information on the package currently loaded in HW, then > make sure > + * the driver is compatible with this version. > + */ > + if (!state || state =3D=3D ICE_DDP_PKG_ALREADY_LOADED) { > + state =3D ice_get_pkg_info(hw); > + if (!state) > + state =3D ice_get_ddp_pkg_state(hw, already_loaded); > + } > + > + if (ice_is_init_pkg_successful(state)) { > + hw->seg =3D seg; > + /* on successful package download update other required > + * registers to support the package and fill HW tables > + * with package content. > + */ > + ice_init_pkg_regs(hw); > + ice_fill_blk_tbls(hw); > + ice_fill_hw_ptype(hw); > + ice_get_prof_index_max(hw); > + } else { > + ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n", > + state); > + } > + > + return state; > +} > + > +/** > + * ice_copy_and_init_pkg - initialize/download a copy of the package > + * @hw: pointer to the hardware structure > + * @buf: pointer to the package buffer > + * @len: size of the package buffer > + * > + * This function copies the package buffer, and then calls ice_init_pkg(= ) to > + * initialize the copied package contents. > + * > + * The copying is necessary if the package buffer supplied is constant, = or if > + * the memory may disappear shortly after calling this function. > + * > + * If the package buffer resides in the data segment and can be modified= , > the > + * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg= (). > + * > + * However, if the package buffer needs to be copied first, such as when > being > + * read from a file, the caller should use ice_copy_and_init_pkg(). > + * > + * This function will first copy the package buffer, before calling > + * ice_init_pkg(). The caller is free to immediately destroy the origina= l > + * package buffer, as the new copy will be managed by this function and > + * related routines. 
> + */ > +enum ice_ddp_state > +ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len) > +{ > + enum ice_ddp_state state; > + u8 *buf_copy; > + > + if (!buf || !len) > + return ICE_DDP_PKG_ERR; > + > + buf_copy =3D (u8 *)ice_memdup(hw, buf, len, > ICE_NONDMA_TO_NONDMA); > + > + state =3D ice_init_pkg(hw, buf_copy, len); > + if (!ice_is_init_pkg_successful(state)) { > + /* Free the copy, since we failed to initialize the package */ > + ice_free(hw, buf_copy); > + } else { > + /* Track the copied pkg so we can free it later */ > + hw->pkg_copy =3D buf_copy; > + hw->pkg_size =3D len; > + } > + > + return state; > +} > + > +/** > + * ice_is_init_pkg_successful - check if DDP init was successful > + * @state: state of the DDP pkg after download > + */ > +bool ice_is_init_pkg_successful(enum ice_ddp_state state) > +{ > + switch (state) { > + case ICE_DDP_PKG_SUCCESS: > + case ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED: > + case ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED: > + return true; > + default: > + return false; > + } > +} > + > +/** > + * ice_pkg_buf_alloc > + * @hw: pointer to the HW structure > + * > + * Allocates a package buffer and returns a pointer to the buffer header= . > + * Note: all package contents must be in Little Endian form. > + */ > +struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) > +{ > + struct ice_buf_build *bld; > + struct ice_buf_hdr *buf; > + > + bld =3D (struct ice_buf_build *)ice_malloc(hw, sizeof(*bld)); > + if (!bld) > + return NULL; > + > + buf =3D (struct ice_buf_hdr *)bld; > + buf->data_end =3D CPU_TO_LE16(offsetof(struct ice_buf_hdr, > + section_entry)); > + return bld; > +} > + > +/** > + * ice_get_sw_prof_type - determine switch profile type > + * @hw: pointer to the HW structure > + * @fv: pointer to the switch field vector > + */ > +static enum ice_prof_type > +ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv) > +{ > + bool valid_prof =3D false; > + u16 i; > + > + for (i =3D 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) { > + if (fv->ew[i].off !=3D ICE_NAN_OFFSET) > + valid_prof =3D true; > + > + /* UDP tunnel will have UDP_OF protocol ID and VNI offset > */ > + if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_UDP_OF && > + fv->ew[i].off =3D=3D ICE_VNI_OFFSET) > + return ICE_PROF_TUN_UDP; > + > + /* GRE tunnel will have GRE protocol */ > + if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_GRE_OF) > + return ICE_PROF_TUN_GRE; > + > + /* PPPOE tunnel will have PPPOE protocol */ > + if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_PPPOE) > + return ICE_PROF_TUN_PPPOE; > + } > + > + return valid_prof ? 
ICE_PROF_NON_TUN : ICE_PROF_INVALID; > +} > + > +/** > + * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profil= e > type > + * @hw: pointer to hardware structure > + * @req_profs: type of profiles requested > + * @bm: pointer to memory for returning the bitmap of field vectors > + */ > +void > +ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, > + ice_bitmap_t *bm) > +{ > + struct ice_pkg_enum state; > + struct ice_seg *ice_seg; > + struct ice_fv *fv; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + ice_zero_bitmap(bm, ICE_MAX_NUM_PROFILES); > + ice_seg =3D hw->seg; > + do { > + enum ice_prof_type prof_type; > + u32 offset; > + > + fv =3D (struct ice_fv *) > + ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > + &offset, ice_sw_fv_handler); > + ice_seg =3D NULL; > + > + if (fv) { > + /* Determine field vector type */ > + prof_type =3D ice_get_sw_prof_type(hw, fv); > + > + if (req_profs & prof_type) > + ice_set_bit((u16)offset, bm); > + } > + } while (fv); > +} > + > +/** > + * ice_get_sw_fv_list > + * @hw: pointer to the HW structure > + * @lkups: lookup elements or match criteria for the advanced recipe, on= e > + * structure per protocol header > + * @bm: bitmap of field vectors to consider > + * @fv_list: Head of a list > + * > + * Finds all the field vector entries from switch block that contain > + * a given protocol ID and offset and returns a list of structures of ty= pe > + * "ice_sw_fv_list_entry". Every structure in the list has a field vecto= r > + * definition and profile ID information > + * NOTE: The caller of the function is responsible for freeing the memor= y > + * allocated for every list entry. > + */ > +enum ice_status > +ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, > + ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) > +{ > + struct ice_sw_fv_list_entry *fvl; > + struct ice_sw_fv_list_entry *tmp; > + struct ice_pkg_enum state; > + struct ice_seg *ice_seg; > + struct ice_fv *fv; > + u32 offset; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + if (!lkups->n_val_words || !hw->seg) > + return ICE_ERR_PARAM; > + > + ice_seg =3D hw->seg; > + do { > + u16 i; > + > + fv =3D (struct ice_fv *) > + ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > + &offset, ice_sw_fv_handler); > + if (!fv) > + break; > + ice_seg =3D NULL; > + > + /* If field vector is not in the bitmap list, then skip this > + * profile. 
> + */ > + if (!ice_is_bit_set(bm, (u16)offset)) > + continue; > + > + for (i =3D 0; i < lkups->n_val_words; i++) { > + int j; > + > + for (j =3D 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) > + if (fv->ew[j].prot_id =3D=3D > + lkups->fv_words[i].prot_id && > + fv->ew[j].off =3D=3D lkups->fv_words[i].off) > + break; > + if (j >=3D hw->blk[ICE_BLK_SW].es.fvw) > + break; > + if (i + 1 =3D=3D lkups->n_val_words) { > + fvl =3D (struct ice_sw_fv_list_entry *) > + ice_malloc(hw, sizeof(*fvl)); > + if (!fvl) > + goto err; > + fvl->fv_ptr =3D fv; > + fvl->profile_id =3D offset; > + LIST_ADD(&fvl->list_entry, fv_list); > + break; > + } > + } > + } while (fv); > + if (LIST_EMPTY(fv_list)) > + return ICE_ERR_CFG; > + return ICE_SUCCESS; > + > +err: > + LIST_FOR_EACH_ENTRY_SAFE(fvl, tmp, fv_list, ice_sw_fv_list_entry, > + list_entry) { > + LIST_DEL(&fvl->list_entry); > + ice_free(hw, fvl); > + } > + > + return ICE_ERR_NO_MEMORY; > +} > + > +/** > + * ice_init_prof_result_bm - Initialize the profile result index bitmap > + * @hw: pointer to hardware structure > + */ > +void ice_init_prof_result_bm(struct ice_hw *hw) > +{ > + struct ice_pkg_enum state; > + struct ice_seg *ice_seg; > + struct ice_fv *fv; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + if (!hw->seg) > + return; > + > + ice_seg =3D hw->seg; > + do { > + u32 off; > + u16 i; > + > + fv =3D (struct ice_fv *) > + ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > + &off, ice_sw_fv_handler); > + ice_seg =3D NULL; > + if (!fv) > + break; > + > + ice_zero_bitmap(hw->switch_info->prof_res_bm[off], > + ICE_MAX_FV_WORDS); > + > + /* Determine empty field vector indices, these can be > + * used for recipe results. Skip index 0, since it is > + * always used for Switch ID. > + */ > + for (i =3D 1; i < ICE_MAX_FV_WORDS; i++) > + if (fv->ew[i].prot_id =3D=3D ICE_PROT_INVALID && > + fv->ew[i].off =3D=3D ICE_FV_OFFSET_INVAL) > + ice_set_bit(i, > + hw->switch_info- > >prof_res_bm[off]); > + } while (fv); > +} > + > +/** > + * ice_pkg_buf_free > + * @hw: pointer to the HW structure > + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * > + * Frees a package buffer > + */ > +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) > +{ > + ice_free(hw, bld); > +} > + > +/** > + * ice_pkg_buf_reserve_section > + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * @count: the number of sections to reserve > + * > + * Reserves one or more section table entries in a package buffer. This > routine > + * can be called multiple times as long as they are made before calling > + * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section() > + * is called once, the number of sections that can be allocated will not= be > able > + * to be increased; not using all reserved sections is fine, but this wi= ll > + * result in some wasted space in the buffer. > + * Note: all package contents must be in Little Endian form. 
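
[Reviewer note, sketch only] Since the NOTE above ice_get_sw_fv_list() puts the freeing burden on the caller, this is the cleanup a caller is expected to mirror (it is the same pattern as the function's own error path). free_sw_fv_list() is a hypothetical helper name; only the list entries are freed, because fv_ptr points into the downloaded package segment and must not be freed here:

static void
free_sw_fv_list(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list)
{
        struct ice_sw_fv_list_entry *fvl;
        struct ice_sw_fv_list_entry *tmp;

        LIST_FOR_EACH_ENTRY_SAFE(fvl, tmp, fv_list, ice_sw_fv_list_entry,
                                 list_entry) {
                LIST_DEL(&fvl->list_entry);
                ice_free(hw, fvl);
        }
}
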
> + */ > +enum ice_status > +ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) > +{ > + struct ice_buf_hdr *buf; > + u16 section_count; > + u16 data_end; > + > + if (!bld) > + return ICE_ERR_PARAM; > + > + buf =3D (struct ice_buf_hdr *)&bld->buf; > + > + /* already an active section, can't increase table size */ > + section_count =3D LE16_TO_CPU(buf->section_count); > + if (section_count > 0) > + return ICE_ERR_CFG; > + > + if (bld->reserved_section_table_entries + count > > ICE_MAX_S_COUNT) > + return ICE_ERR_CFG; > + bld->reserved_section_table_entries +=3D count; > + > + data_end =3D LE16_TO_CPU(buf->data_end) + > + FLEX_ARRAY_SIZE(buf, section_entry, count); > + buf->data_end =3D CPU_TO_LE16(data_end); > + > + return ICE_SUCCESS; > +} > + > +/** > + * ice_pkg_buf_alloc_section > + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * @type: the section type value > + * @size: the size of the section to reserve (in bytes) > + * > + * Reserves memory in the buffer for a section's content and updates the > + * buffers' status accordingly. This routine returns a pointer to the fi= rst > + * byte of the section start within the buffer, which is used to fill in= the > + * section contents. > + * Note: all package contents must be in Little Endian form. > + */ > +void * > +ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) > +{ > + struct ice_buf_hdr *buf; > + u16 sect_count; > + u16 data_end; > + > + if (!bld || !type || !size) > + return NULL; > + > + buf =3D (struct ice_buf_hdr *)&bld->buf; > + > + /* check for enough space left in buffer */ > + data_end =3D LE16_TO_CPU(buf->data_end); > + > + /* section start must align on 4 byte boundary */ > + data_end =3D ICE_ALIGN(data_end, 4); > + > + if ((data_end + size) > ICE_MAX_S_DATA_END) > + return NULL; > + > + /* check for more available section table entries */ > + sect_count =3D LE16_TO_CPU(buf->section_count); > + if (sect_count < bld->reserved_section_table_entries) { > + void *section_ptr =3D ((u8 *)buf) + data_end; > + > + buf->section_entry[sect_count].offset =3D > CPU_TO_LE16(data_end); > + buf->section_entry[sect_count].size =3D CPU_TO_LE16(size); > + buf->section_entry[sect_count].type =3D CPU_TO_LE32(type); > + > + data_end +=3D size; > + buf->data_end =3D CPU_TO_LE16(data_end); > + > + buf->section_count =3D CPU_TO_LE16(sect_count + 1); > + return section_ptr; > + } > + > + /* no free section table entries */ > + return NULL; > +} > + > +/** > + * ice_pkg_buf_alloc_single_section > + * @hw: pointer to the HW structure > + * @type: the section type value > + * @size: the size of the section to reserve (in bytes) > + * @section: returns pointer to the section > + * > + * Allocates a package buffer with a single section. > + * Note: all package contents must be in Little Endian form. 
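
[Reviewer note, sketch only] The ordering constraint documented above, namely that all calls to ice_pkg_buf_reserve_section() must come before the first ice_pkg_buf_alloc_section(), is the main pitfall when building update buffers. A minimal two-section sketch; build_two_section_buf() and the section types/sizes are purely illustrative:

static struct ice_buf_build *build_two_section_buf(struct ice_hw *hw)
{
        struct ice_buf_build *bld = ice_pkg_buf_alloc(hw);
        void *s1, *s2;

        if (!bld)
                return NULL;
        /* reserve the full section table before any allocation */
        if (ice_pkg_buf_reserve_section(bld, 2))
                goto err;
        s1 = ice_pkg_buf_alloc_section(bld, ICE_SID_XLT1_SW, 64);
        s2 = ice_pkg_buf_alloc_section(bld, ICE_SID_XLT2_SW, 128);
        if (!s1 || !s2)
                goto err;
        /* fill s1/s2 with little-endian section content here */
        return bld;
err:
        ice_pkg_buf_free(hw, bld);
        return NULL;
}
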
> + */ > +struct ice_buf_build * > +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, > + void **section) > +{ > + struct ice_buf_build *buf; > + > + if (!section) > + return NULL; > + > + buf =3D ice_pkg_buf_alloc(hw); > + if (!buf) > + return NULL; > + > + if (ice_pkg_buf_reserve_section(buf, 1)) > + goto ice_pkg_buf_alloc_single_section_err; > + > + *section =3D ice_pkg_buf_alloc_section(buf, type, size); > + if (!*section) > + goto ice_pkg_buf_alloc_single_section_err; > + > + return buf; > + > +ice_pkg_buf_alloc_single_section_err: > + ice_pkg_buf_free(hw, buf); > + return NULL; > +} > + > +/** > + * ice_pkg_buf_get_active_sections > + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * > + * Returns the number of active sections. Before using the package buffe= r > + * in an update package command, the caller should make sure that there > is at > + * least one active section - otherwise, the buffer is not legal and sho= uld > + * not be used. > + * Note: all package contents must be in Little Endian form. > + */ > +u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) > +{ > + struct ice_buf_hdr *buf; > + > + if (!bld) > + return 0; > + > + buf =3D (struct ice_buf_hdr *)&bld->buf; > + return LE16_TO_CPU(buf->section_count); > +} > + > +/** > + * ice_pkg_buf > + * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * > + * Return a pointer to the buffer's header > + */ > +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) > +{ > + if (bld) > + return &bld->buf; > + > + return NULL; > +} > + > +/** > + * ice_find_buf_table > + * @ice_seg: pointer to the ice segment > + * > + * Returns the address of the buffer table within the ice segment. > + */ > +struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg) > +{ > + struct ice_nvm_table *nvms; > + > + nvms =3D (struct ice_nvm_table *) > + (ice_seg->device_table + > + LE32_TO_CPU(ice_seg->device_table_count)); > + > + return (_FORCE_ struct ice_buf_table *) > + (nvms->vers + LE32_TO_CPU(nvms->table_count)); > +} > + > +/** > + * ice_pkg_val_buf > + * @buf: pointer to the ice buffer > + * > + * This helper function validates a buffer's header. > + */ > +static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf) > +{ > + struct ice_buf_hdr *hdr; > + u16 section_count; > + u16 data_end; > + > + hdr =3D (struct ice_buf_hdr *)buf->buf; > + /* verify data */ > + section_count =3D LE16_TO_CPU(hdr->section_count); > + if (section_count < ICE_MIN_S_COUNT || section_count > > ICE_MAX_S_COUNT) > + return NULL; > + > + data_end =3D LE16_TO_CPU(hdr->data_end); > + if (data_end < ICE_MIN_S_DATA_END || data_end > > ICE_MAX_S_DATA_END) > + return NULL; > + > + return hdr; > +} > + > +/** > + * ice_pkg_enum_buf > + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > + * @state: pointer to the enum state > + * > + * This function will enumerate all the buffers in the ice segment. The = first > + * call is made with the ice_seg parameter non-NULL; on subsequent calls= , > + * ice_seg is set to NULL which continues the enumeration. When the > function > + * returns a NULL pointer, then the end of the buffers has been reached, > or an > + * unexpected value has been detected (for example an invalid section > count or > + * an invalid buffer end value). 
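
[Reviewer note, sketch only] To tie the helpers above together, the submit half: for a single section the convenience wrapper replaces the alloc/reserve/alloc_section sequence, and the buffer should carry at least one active section before it is handed to ice_update_pkg(). push_one_section() is a hypothetical name and the "fill section" step is elided:

static enum ice_status
push_one_section(struct ice_hw *hw, u32 sect_type, u16 size)
{
        struct ice_buf_build *bld;
        enum ice_status status;
        void *sect;

        bld = ice_pkg_buf_alloc_single_section(hw, sect_type, size, &sect);
        if (!bld)
                return ICE_ERR_NO_MEMORY;

        /* write little-endian section content into 'sect' here */

        if (!ice_pkg_buf_get_active_sections(bld))
                status = ICE_ERR_CFG;
        else
                status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);

        ice_pkg_buf_free(hw, bld);
        return status;
}
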
> + */ > +struct ice_buf_hdr * > +ice_pkg_enum_buf(struct ice_seg *ice_seg, struct ice_pkg_enum *state) > +{ > + if (ice_seg) { > + state->buf_table =3D ice_find_buf_table(ice_seg); > + if (!state->buf_table) > + return NULL; > + > + state->buf_idx =3D 0; > + return ice_pkg_val_buf(state->buf_table->buf_array); > + } > + > + if (++state->buf_idx < LE32_TO_CPU(state->buf_table->buf_count)) > + return ice_pkg_val_buf(state->buf_table->buf_array + > + state->buf_idx); > + else > + return NULL; > +} > + > +/** > + * ice_pkg_advance_sect > + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > + * @state: pointer to the enum state > + * > + * This helper function will advance the section within the ice segment, > + * also advancing the buffer if needed. > + */ > +bool > +ice_pkg_advance_sect(struct ice_seg *ice_seg, struct ice_pkg_enum *state= ) > +{ > + if (!ice_seg && !state->buf) > + return false; > + > + if (!ice_seg && state->buf) > + if (++state->sect_idx < LE16_TO_CPU(state->buf- > >section_count)) > + return true; > + > + state->buf =3D ice_pkg_enum_buf(ice_seg, state); > + if (!state->buf) > + return false; > + > + /* start of new buffer, reset section index */ > + state->sect_idx =3D 0; > + return true; > +} > + > +/** > + * ice_pkg_enum_section > + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > + * @state: pointer to the enum state > + * @sect_type: section type to enumerate > + * > + * This function will enumerate all the sections of a particular type in= the > + * ice segment. The first call is made with the ice_seg parameter non-NU= LL; > + * on subsequent calls, ice_seg is set to NULL which continues the > enumeration. > + * When the function returns a NULL pointer, then the end of the > matching > + * sections has been reached. > + */ > +void * > +ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state= , > + u32 sect_type) > +{ > + u16 offset, size; > + > + if (ice_seg) > + state->type =3D sect_type; > + > + if (!ice_pkg_advance_sect(ice_seg, state)) > + return NULL; > + > + /* scan for next matching section */ > + while (state->buf->section_entry[state->sect_idx].type !=3D > + CPU_TO_LE32(state->type)) > + if (!ice_pkg_advance_sect(NULL, state)) > + return NULL; > + > + /* validate section */ > + offset =3D LE16_TO_CPU(state->buf->section_entry[state- > >sect_idx].offset); > + if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF) > + return NULL; > + > + size =3D LE16_TO_CPU(state->buf->section_entry[state->sect_idx].size); > + if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) > + return NULL; > + > + /* make sure the section fits in the buffer */ > + if (offset + size > ICE_PKG_BUF_SIZE) > + return NULL; > + > + state->sect_type =3D > + LE32_TO_CPU(state->buf->section_entry[state- > >sect_idx].type); > + > + /* calc pointer to this section */ > + state->sect =3D ((u8 *)state->buf) + > + LE16_TO_CPU(state->buf->section_entry[state- > >sect_idx].offset); > + > + return state->sect; > +} > + > +/** > + * ice_pkg_enum_entry > + * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > + * @state: pointer to the enum state > + * @sect_type: section type to enumerate > + * @offset: pointer to variable that receives the offset in the table > (optional) > + * @handler: function that handles access to the entries into the sectio= n > type > + * > + * This function will enumerate all the entries in particular section ty= pe in > + * the ice segment. 
The first call is made with the ice_seg parameter no= n- > NULL; > + * on subsequent calls, ice_seg is set to NULL which continues the > enumeration. > + * When the function returns a NULL pointer, then the end of the entries > has > + * been reached. > + * > + * Since each section may have a different header and entry size, the > handler > + * function is needed to determine the number and location entries in > each > + * section. > + * > + * The offset parameter is optional, but should be used for sections tha= t > + * contain an offset for each section table. For such cases, the section > handler > + * function must return the appropriate offset + index to give the > absolution > + * offset for each entry. For example, if the base for a section's heade= r > + * indicates a base offset of 10, and the index for the entry is 2, then > + * section handler function should set the offset to 10 + 2 =3D 12. > + */ > +void * > +ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state, > + u32 sect_type, u32 *offset, > + void *(*handler)(u32 sect_type, void *section, > + u32 index, u32 *offset)) > +{ > + void *entry; > + > + if (ice_seg) { > + if (!handler) > + return NULL; > + > + if (!ice_pkg_enum_section(ice_seg, state, sect_type)) > + return NULL; > + > + state->entry_idx =3D 0; > + state->handler =3D handler; > + } else { > + state->entry_idx++; > + } > + > + if (!state->handler) > + return NULL; > + > + /* get entry */ > + entry =3D state->handler(state->sect_type, state->sect, state- > >entry_idx, > + offset); > + if (!entry) { > + /* end of a section, look for another section of this type */ > + if (!ice_pkg_enum_section(NULL, state, 0)) > + return NULL; > + > + state->entry_idx =3D 0; > + entry =3D state->handler(state->sect_type, state->sect, > + state->entry_idx, offset); > + } > + > + return entry; > +} > + > +/** > + * ice_boost_tcam_handler > + * @sect_type: section type > + * @section: pointer to section > + * @index: index of the boost TCAM entry to be returned > + * @offset: pointer to receive absolute offset, always 0 for boost TCAM > sections > + * > + * This is a callback function that can be passed to ice_pkg_enum_entry. > + * Handles enumeration of individual boost TCAM entries. > + */ > +static void * > +ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, u32 > *offset) > +{ > + struct ice_boost_tcam_section *boost; > + > + if (!section) > + return NULL; > + > + if (sect_type !=3D ICE_SID_RXPARSER_BOOST_TCAM) > + return NULL; > + > + if (index > ICE_MAX_BST_TCAMS_IN_BUF) > + return NULL; > + > + if (offset) > + *offset =3D 0; > + > + boost =3D (struct ice_boost_tcam_section *)section; > + if (index >=3D LE16_TO_CPU(boost->count)) > + return NULL; > + > + return boost->tcam + index; > +} > + > +/** > + * ice_find_boost_entry > + * @ice_seg: pointer to the ice segment (non-NULL) > + * @addr: Boost TCAM address of entry to search for > + * @entry: returns pointer to the entry > + * > + * Finds a particular Boost TCAM entry and returns a pointer to that ent= ry > + * if it is found. The ice_seg parameter must not be NULL since the firs= t call > + * to ice_pkg_enum_entry requires a pointer to an actual ice_segment > structure. 
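
[Reviewer note, sketch only] The offset convention described for ice_pkg_enum_entry() (handler returns base offset + entry index) is not exercised by the boost TCAM handler above, which always reports 0, so here is a hedged illustration. struct demo_section and demo_entry_handler() are invented for the example and do not correspond to any real package section:

struct demo_section {
        __le16 count;
        __le16 base_offset;
        u32 entry[1];
};

static void *
demo_entry_handler(u32 sect_type, void *section, u32 index, u32 *offset)
{
        struct demo_section *sect = (struct demo_section *)section;

        /* a real handler also validates sect_type, as the handlers in
         * this patch do
         */
        if (!section || index >= LE16_TO_CPU(sect->count))
                return NULL;

        if (offset)
                /* absolute offset = section base + entry index */
                *offset = LE16_TO_CPU(sect->base_offset) + index;

        return sect->entry + index;
}
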
> + */ > +static enum ice_status > +ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, > + struct ice_boost_tcam_entry **entry) > +{ > + struct ice_boost_tcam_entry *tcam; > + struct ice_pkg_enum state; > + > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + if (!ice_seg) > + return ICE_ERR_PARAM; > + > + do { > + tcam =3D (struct ice_boost_tcam_entry *) > + ice_pkg_enum_entry(ice_seg, &state, > + ICE_SID_RXPARSER_BOOST_TCAM, > NULL, > + ice_boost_tcam_handler); > + if (tcam && LE16_TO_CPU(tcam->addr) =3D=3D addr) { > + *entry =3D tcam; > + return ICE_SUCCESS; > + } > + > + ice_seg =3D NULL; > + } while (tcam); > + > + *entry =3D NULL; > + return ICE_ERR_CFG; > +} > + > +/** > + * ice_init_pkg_hints > + * @hw: pointer to the HW structure > + * @ice_seg: pointer to the segment of the package scan (non-NULL) > + * > + * This function will scan the package and save off relevant information > + * (hints or metadata) for driver use. The ice_seg parameter must not be > NULL > + * since the first call to ice_enum_labels requires a pointer to an actu= al > + * ice_seg structure. > + */ > +void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg) > +{ > + struct ice_pkg_enum state; > + char *label_name; > + u16 val; > + int i; > + > + ice_memset(&hw->tnl, 0, sizeof(hw->tnl), ICE_NONDMA_MEM); > + ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > + > + if (!ice_seg) > + return; > + > + label_name =3D ice_enum_labels(ice_seg, > ICE_SID_LBL_RXPARSER_TMEM, &state, > + &val); > + > + while (label_name) { > +/* TODO: Replace !strnsmp() with wrappers like match_some_pre() */ > + if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) > + /* check for a tunnel entry */ > + ice_add_tunnel_hint(hw, label_name, val); > + > + /* check for a dvm mode entry */ > + else if (!strncmp(label_name, ICE_DVM_PRE, > strlen(ICE_DVM_PRE))) > + ice_add_dvm_hint(hw, val, true); > + > + /* check for a svm mode entry */ > + else if (!strncmp(label_name, ICE_SVM_PRE, > strlen(ICE_SVM_PRE))) > + ice_add_dvm_hint(hw, val, false); > + > + label_name =3D ice_enum_labels(NULL, 0, &state, &val); > + } > + > + /* Cache the appropriate boost TCAM entry pointers for tunnels */ > + for (i =3D 0; i < hw->tnl.count; i++) { > + ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, > + &hw->tnl.tbl[i].boost_entry); > + if (hw->tnl.tbl[i].boost_entry) > + hw->tnl.tbl[i].valid =3D true; > + } > + > + /* Cache the appropriate boost TCAM entry pointers for DVM and > SVM */ > + for (i =3D 0; i < hw->dvm_upd.count; i++) > + ice_find_boost_entry(ice_seg, hw- > >dvm_upd.tbl[i].boost_addr, > + &hw->dvm_upd.tbl[i].boost_entry); > +} > + > +/** > + * ice_acquire_global_cfg_lock > + * @hw: pointer to the HW structure > + * @access: access type (read or write) > + * > + * This function will request ownership of the global config lock for re= ading > + * or writing of the package. When attempting to obtain write access, th= e > + * caller must check for the following two return values: > + * > + * ICE_SUCCESS - Means the caller has acquired the global config = lock > + * and can perform writing of the package. > + * ICE_ERR_AQ_NO_WORK - Indicates another driver has already written > the > + * package or has found that no update was necessar= y; in > + * this case, the caller can just skip performing a= ny > + * update of the package. 
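
[Reviewer note, sketch only] The ICE_ERR_AQ_NO_WORK contract above is worth illustrating, because for writers it is a success-like return, not a failure. try_pkg_update() is a hypothetical caller; the actual package work is elided:

static enum ice_status try_pkg_update(struct ice_hw *hw)
{
        enum ice_status status;

        status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
        if (status == ICE_ERR_AQ_NO_WORK)
                /* another PF already wrote the package; nothing to do */
                return ICE_SUCCESS;
        if (status)
                return status;

        /* ... download or update the package here ... */

        ice_release_global_cfg_lock(hw);
        return ICE_SUCCESS;
}
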
> + */ > +enum ice_status > +ice_acquire_global_cfg_lock(struct ice_hw *hw, > + enum ice_aq_res_access_type access) > +{ > + enum ice_status status; > + > + status =3D ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access, > + ICE_GLOBAL_CFG_LOCK_TIMEOUT); > + > + if (status =3D=3D ICE_ERR_AQ_NO_WORK) > + ice_debug(hw, ICE_DBG_PKG, "Global config lock: No work > to do\n"); > + > + return status; > +} > + > +/** > + * ice_release_global_cfg_lock > + * @hw: pointer to the HW structure > + * > + * This function will release the global config lock. > + */ > +void ice_release_global_cfg_lock(struct ice_hw *hw) > +{ > + ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID); > +} > + > +/** > + * ice_acquire_change_lock > + * @hw: pointer to the HW structure > + * @access: access type (read or write) > + * > + * This function will request ownership of the change lock. > + */ > +enum ice_status > +ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type > access) > +{ > + return ice_acquire_res(hw, ICE_CHANGE_LOCK_RES_ID, access, > + ICE_CHANGE_LOCK_TIMEOUT); > +} > + > +/** > + * ice_release_change_lock > + * @hw: pointer to the HW structure > + * > + * This function will release the change lock using the proper Admin > Command. > + */ > +void ice_release_change_lock(struct ice_hw *hw) > +{ > + ice_release_res(hw, ICE_CHANGE_LOCK_RES_ID); > +} > + > +/** > + * ice_get_set_tx_topo - get or set tx topology > + * @hw: pointer to the HW struct > + * @buf: pointer to tx topology buffer > + * @buf_size: buffer size > + * @cd: pointer to command details structure or NULL > + * @flags: pointer to descriptor flags > + * @set: 0-get, 1-set topology > + * > + * The function will get or set tx topology > + */ > +static enum ice_status > +ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size, > + struct ice_sq_cd *cd, u8 *flags, bool set) > +{ > + struct ice_aqc_get_set_tx_topo *cmd; > + struct ice_aq_desc desc; > + enum ice_status status; > + > + cmd =3D &desc.params.get_set_tx_topo; > + if (set) { > + ice_fill_dflt_direct_cmd_desc(&desc, > ice_aqc_opc_set_tx_topo); > + cmd->set_flags =3D ICE_AQC_TX_TOPO_FLAGS_ISSUED; > + /* requested to update a new topology, not a default > topolgy */ > + if (buf) > + cmd->set_flags |=3D > ICE_AQC_TX_TOPO_FLAGS_SRC_RAM | > + > ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW; > + } else { > + ice_fill_dflt_direct_cmd_desc(&desc, > ice_aqc_opc_get_tx_topo); > + cmd->get_flags =3D ICE_AQC_TX_TOPO_GET_RAM; > + } > + desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > + status =3D ice_aq_send_cmd(hw, &desc, buf, buf_size, cd); > + if (status) > + return status; > + /* read the return flag values (first byte) for get operation */ > + if (!set && flags) > + *flags =3D desc.params.get_set_tx_topo.set_flags; > + > + return ICE_SUCCESS; > +} > + > +/** > + * ice_cfg_tx_topo - Initialize new tx topology if available > + * @hw: pointer to the HW struct > + * @buf: pointer to Tx topology buffer > + * @len: buffer size > + * > + * The function will apply the new Tx topology from the package buffer > + * if available. > + */ > +enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len) > +{ > + u8 *current_topo, *new_topo =3D NULL; > + struct ice_run_time_cfg_seg *seg; > + struct ice_buf_hdr *section; > + struct ice_pkg_hdr *pkg_hdr; > + enum ice_ddp_state state; > + u16 i, size =3D 0, offset; > + enum ice_status status; > + u32 reg =3D 0; > + u8 flags; > + > + if (!buf || !len) > + return ICE_ERR_PARAM; > + > + /* Does FW support new Tx topology mode ? 
*/ > + if (!hw->func_caps.common_cap.tx_sched_topo_comp_mode_en) { > + ice_debug(hw, ICE_DBG_INIT, "FW doesn't support > compatibility mode\n"); > + return ICE_ERR_NOT_SUPPORTED; > + } > + > + current_topo =3D (u8 *)ice_malloc(hw, ICE_AQ_MAX_BUF_LEN); > + if (!current_topo) > + return ICE_ERR_NO_MEMORY; > + > + /* get the current Tx topology */ > + status =3D ice_get_set_tx_topo(hw, current_topo, > ICE_AQ_MAX_BUF_LEN, NULL, > + &flags, false); > + ice_free(hw, current_topo); > + > + if (status) { > + ice_debug(hw, ICE_DBG_INIT, "Get current topology is > failed\n"); > + return status; > + } > + > + /* Is default topology already applied ? */ > + if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && > + hw->num_tx_sched_layers =3D=3D 9) { > + ice_debug(hw, ICE_DBG_INIT, "Loaded default topology\n"); > + /* Already default topology is loaded */ > + return ICE_ERR_ALREADY_EXISTS; > + } > + > + /* Is new topology already applied ? */ > + if ((flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && > + hw->num_tx_sched_layers =3D=3D 5) { > + ice_debug(hw, ICE_DBG_INIT, "Loaded new topology\n"); > + /* Already new topology is loaded */ > + return ICE_ERR_ALREADY_EXISTS; > + } > + > + /* Is set topology issued already ? */ > + if (flags & ICE_AQC_TX_TOPO_FLAGS_ISSUED) { > + ice_debug(hw, ICE_DBG_INIT, "Update tx topology was done > by another PF\n"); > + /* add a small delay before exiting */ > + for (i =3D 0; i < 20; i++) > + ice_msec_delay(100, true); > + return ICE_ERR_ALREADY_EXISTS; > + } > + > + /* Change the topology from new to default (5 to 9) */ > + if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) && > + hw->num_tx_sched_layers =3D=3D 5) { > + ice_debug(hw, ICE_DBG_INIT, "Change topology from 5 to 9 > layers\n"); > + goto update_topo; > + } > + > + pkg_hdr =3D (struct ice_pkg_hdr *)buf; > + state =3D ice_verify_pkg(pkg_hdr, len); > + if (state) { > + ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg > (err: %d)\n", > + state); > + return ICE_ERR_CFG; > + } > + > + /* find run time configuration segment */ > + seg =3D (struct ice_run_time_cfg_seg *) > + ice_find_seg_in_pkg(hw, > SEGMENT_TYPE_ICE_RUN_TIME_CFG, pkg_hdr); > + if (!seg) { > + ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment is > missing\n"); > + return ICE_ERR_CFG; > + } > + > + if (LE32_TO_CPU(seg->buf_table.buf_count) < ICE_MIN_S_COUNT) { > + ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment > count(%d) is wrong\n", > + seg->buf_table.buf_count); > + return ICE_ERR_CFG; > + } > + > + section =3D ice_pkg_val_buf(seg->buf_table.buf_array); > + > + if (!section || LE32_TO_CPU(section->section_entry[0].type) !=3D > + ICE_SID_TX_5_LAYER_TOPO) { > + ice_debug(hw, ICE_DBG_INIT, "5 layer topology section type > is wrong\n"); > + return ICE_ERR_CFG; > + } > + > + size =3D LE16_TO_CPU(section->section_entry[0].size); > + offset =3D LE16_TO_CPU(section->section_entry[0].offset); > + if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) { > + ice_debug(hw, ICE_DBG_INIT, "5 layer topology section size > is wrong\n"); > + return ICE_ERR_CFG; > + } > + > + /* make sure the section fits in the buffer */ > + if (offset + size > ICE_PKG_BUF_SIZE) { > + ice_debug(hw, ICE_DBG_INIT, "5 layer topology buffer > > 4K\n"); > + return ICE_ERR_CFG; > + } > + > + /* Get the new topology buffer */ > + new_topo =3D ((u8 *)section) + offset; > + > +update_topo: > + /* acquire global lock to make sure that set topology issued > + * by one PF > + */ > + status =3D ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, > ICE_RES_WRITE, > + ICE_GLOBAL_CFG_LOCK_TIMEOUT); > + if 
(status) { > + ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global > lock\n"); > + return status; > + } > + > + /* check reset was triggered already or not */ > + reg =3D rd32(hw, GLGEN_RSTAT); > + if (reg & GLGEN_RSTAT_DEVSTATE_M) { > + /* Reset is in progress, re-init the hw again */ > + ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. layer > topology might be applied already\n"); > + ice_check_reset(hw); > + return ICE_SUCCESS; > + } > + > + /* set new topology */ > + status =3D ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true); > + if (status) { > + ice_debug(hw, ICE_DBG_INIT, "Set tx topology is failed\n"); > + return status; > + } > + > + /* new topology is updated, delay 1 second before issuing the > CORRER */ > + for (i =3D 0; i < 10; i++) > + ice_msec_delay(100, true); > + ice_reset(hw, ICE_RESET_CORER); > + /* CORER will clear the global lock, so no explicit call > + * required for release > + */ > + return ICE_SUCCESS; > +} > diff --git a/drivers/net/ice/base/ice_ddp.h b/drivers/net/ice/base/ice_dd= p.h > new file mode 100644 > index 0000000000..53bbbe2a5a > --- /dev/null > +++ b/drivers/net/ice/base/ice_ddp.h > @@ -0,0 +1,466 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2001-2022 Intel Corporation > + */ > + > +#ifndef _ICE_DDP_H_ > +#define _ICE_DDP_H_ > + > +#include "ice_osdep.h" > +#include "ice_adminq_cmd.h" > +#include "ice_controlq.h" > +#include "ice_status.h" > +#include "ice_flex_type.h" > +#include "ice_protocol_type.h" > + > +/* Package minimal version supported */ > +#define ICE_PKG_SUPP_VER_MAJ 1 > +#define ICE_PKG_SUPP_VER_MNR 3 > + > +/* Package format version */ > +#define ICE_PKG_FMT_VER_MAJ 1 > +#define ICE_PKG_FMT_VER_MNR 0 > +#define ICE_PKG_FMT_VER_UPD 0 > +#define ICE_PKG_FMT_VER_DFT 0 > + > +#define ICE_PKG_CNT 4 > + > +enum ice_ddp_state { > + /* Indicates that this call to ice_init_pkg > + * successfully loaded the requested DDP package > + */ > + ICE_DDP_PKG_SUCCESS =3D 0, > + > + /* Generic error for already loaded errors, it is mapped later to > + * the more specific one (one of the next 3) > + */ > + ICE_DDP_PKG_ALREADY_LOADED =3D -1, > + > + /* Indicates that a DDP package of the same version has already > been > + * loaded onto the device by a previous call or by another PF > + */ > + ICE_DDP_PKG_SAME_VERSION_ALREADY_LOADED =3D -2, > + > + /* The device has a DDP package that is not supported by the driver > */ > + ICE_DDP_PKG_ALREADY_LOADED_NOT_SUPPORTED =3D -3, > + > + /* The device has a compatible package > + * (but different from the request) already loaded > + */ > + ICE_DDP_PKG_COMPATIBLE_ALREADY_LOADED =3D -4, > + > + /* The firmware loaded on the device is not compatible with > + * the DDP package loaded > + */ > + ICE_DDP_PKG_FW_MISMATCH =3D -5, > + > + /* The DDP package file is invalid */ > + ICE_DDP_PKG_INVALID_FILE =3D -6, > + > + /* The version of the DDP package provided is higher than > + * the driver supports > + */ > + ICE_DDP_PKG_FILE_VERSION_TOO_HIGH =3D -7, > + > + /* The version of the DDP package provided is lower than the > + * driver supports > + */ > + ICE_DDP_PKG_FILE_VERSION_TOO_LOW =3D -8, > + > + /* Missing security manifest in DDP pkg */ > + ICE_DDP_PKG_NO_SEC_MANIFEST =3D -9, > + > + /* The RSA signature of the DDP package file provided is invalid */ > + ICE_DDP_PKG_FILE_SIGNATURE_INVALID =3D -10, > + > + /* The DDP package file security revision is too low and not > + * supported by firmware > + */ > + ICE_DDP_PKG_SECURE_VERSION_NBR_TOO_LOW =3D -11, > + > + /* Manifest hash 
mismatch */ > + ICE_DDP_PKG_MANIFEST_INVALID =3D -12, > + > + /* Buffer hash mismatches manifest */ > + ICE_DDP_PKG_BUFFER_INVALID =3D -13, > + > + /* Other errors */ > + ICE_DDP_PKG_ERR =3D -14, > +}; > + > +/* Package and segment headers and tables */ > +struct ice_pkg_hdr { > + struct ice_pkg_ver pkg_format_ver; > + __le32 seg_count; > + __le32 seg_offset[STRUCT_HACK_VAR_LEN]; > +}; > + > +/* Package signing algorithm types */ > +#define SEGMENT_SIGN_TYPE_INVALID 0x00000000 > +#define SEGMENT_SIGN_TYPE_RSA2K 0x00000001 > +#define SEGMENT_SIGN_TYPE_RSA3K 0x00000002 > +#define SEGMENT_SIGN_TYPE_RSA3K_SBB 0x00000003 /* Secure Boot > Block */ > + > +/* generic segment */ > +struct ice_generic_seg_hdr { > +#define SEGMENT_TYPE_INVALID 0x00000000 > +#define SEGMENT_TYPE_METADATA 0x00000001 > +#define SEGMENT_TYPE_ICE_E810 0x00000010 > +#define SEGMENT_TYPE_SIGNING 0x00001001 > +#define SEGMENT_TYPE_ICE_RUN_TIME_CFG 0x00000020 > + __le32 seg_type; > + struct ice_pkg_ver seg_format_ver; > + __le32 seg_size; > + char seg_id[ICE_PKG_NAME_SIZE]; > +}; > + > +/* ice specific segment */ > + > +union ice_device_id { > + struct { > + __le16 device_id; > + __le16 vendor_id; > + } dev_vend_id; > + __le32 id; > +}; > + > +struct ice_device_id_entry { > + union ice_device_id device; > + union ice_device_id sub_device; > +}; > + > +struct ice_seg { > + struct ice_generic_seg_hdr hdr; > + __le32 device_table_count; > + struct ice_device_id_entry device_table[STRUCT_HACK_VAR_LEN]; > +}; > + > +struct ice_nvm_table { > + __le32 table_count; > + __le32 vers[STRUCT_HACK_VAR_LEN]; > +}; > + > +struct ice_buf { > +#define ICE_PKG_BUF_SIZE 4096 > + u8 buf[ICE_PKG_BUF_SIZE]; > +}; > + > +struct ice_buf_table { > + __le32 buf_count; > + struct ice_buf buf_array[STRUCT_HACK_VAR_LEN]; > +}; > + > +struct ice_run_time_cfg_seg { > + struct ice_generic_seg_hdr hdr; > + u8 rsvd[8]; > + struct ice_buf_table buf_table; > +}; > + > +/* global metadata specific segment */ > +struct ice_global_metadata_seg { > + struct ice_generic_seg_hdr hdr; > + struct ice_pkg_ver pkg_ver; > + __le32 rsvd; > + char pkg_name[ICE_PKG_NAME_SIZE]; > +}; > + > +#define ICE_MIN_S_OFF 12 > +#define ICE_MAX_S_OFF 4095 > +#define ICE_MIN_S_SZ 1 > +#define ICE_MAX_S_SZ 4084 > + > +struct ice_sign_seg { > + struct ice_generic_seg_hdr hdr; > + __le32 seg_id; > + __le32 sign_type; > + __le32 signed_seg_idx; > + __le32 signed_buf_start; > + __le32 signed_buf_count; > +#define ICE_SIGN_SEG_RESERVED_COUNT 44 > + u8 reserved[ICE_SIGN_SEG_RESERVED_COUNT]; > + struct ice_buf_table buf_tbl; > +}; > + > +/* section information */ > +struct ice_section_entry { > + __le32 type; > + __le16 offset; > + __le16 size; > +}; > + > +#define ICE_MIN_S_COUNT 1 > +#define ICE_MAX_S_COUNT 511 > +#define ICE_MIN_S_DATA_END 12 > +#define ICE_MAX_S_DATA_END 4096 > + > +#define ICE_METADATA_BUF 0x80000000 > + > +struct ice_buf_hdr { > + __le16 section_count; > + __le16 data_end; > + struct ice_section_entry section_entry[STRUCT_HACK_VAR_LEN]; > +}; > + > +#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) ((ICE_PKG_BUF_SIZE - \ > + ice_struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) /= \ > + (ent_sz)) > + > +/* ice package section IDs */ > +#define ICE_SID_METADATA 1 > +#define ICE_SID_XLT0_SW 10 > +#define ICE_SID_XLT_KEY_BUILDER_SW 11 > +#define ICE_SID_XLT1_SW 12 > +#define ICE_SID_XLT2_SW 13 > +#define ICE_SID_PROFID_TCAM_SW 14 > +#define ICE_SID_PROFID_REDIR_SW 15 > +#define ICE_SID_FLD_VEC_SW 16 > +#define ICE_SID_CDID_KEY_BUILDER_SW 17 > +#define 
ICE_SID_CDID_REDIR_SW 18 > + > +#define ICE_SID_XLT0_ACL 20 > +#define ICE_SID_XLT_KEY_BUILDER_ACL 21 > +#define ICE_SID_XLT1_ACL 22 > +#define ICE_SID_XLT2_ACL 23 > +#define ICE_SID_PROFID_TCAM_ACL 24 > +#define ICE_SID_PROFID_REDIR_ACL 25 > +#define ICE_SID_FLD_VEC_ACL 26 > +#define ICE_SID_CDID_KEY_BUILDER_ACL 27 > +#define ICE_SID_CDID_REDIR_ACL 28 > + > +#define ICE_SID_XLT0_FD 30 > +#define ICE_SID_XLT_KEY_BUILDER_FD 31 > +#define ICE_SID_XLT1_FD 32 > +#define ICE_SID_XLT2_FD 33 > +#define ICE_SID_PROFID_TCAM_FD 34 > +#define ICE_SID_PROFID_REDIR_FD 35 > +#define ICE_SID_FLD_VEC_FD 36 > +#define ICE_SID_CDID_KEY_BUILDER_FD 37 > +#define ICE_SID_CDID_REDIR_FD 38 > + > +#define ICE_SID_XLT0_RSS 40 > +#define ICE_SID_XLT_KEY_BUILDER_RSS 41 > +#define ICE_SID_XLT1_RSS 42 > +#define ICE_SID_XLT2_RSS 43 > +#define ICE_SID_PROFID_TCAM_RSS 44 > +#define ICE_SID_PROFID_REDIR_RSS 45 > +#define ICE_SID_FLD_VEC_RSS 46 > +#define ICE_SID_CDID_KEY_BUILDER_RSS 47 > +#define ICE_SID_CDID_REDIR_RSS 48 > + > +#define ICE_SID_RXPARSER_CAM 50 > +#define ICE_SID_RXPARSER_NOMATCH_CAM 51 > +#define ICE_SID_RXPARSER_IMEM 52 > +#define ICE_SID_RXPARSER_XLT0_BUILDER 53 > +#define ICE_SID_RXPARSER_NODE_PTYPE 54 > +#define ICE_SID_RXPARSER_MARKER_PTYPE 55 > +#define ICE_SID_RXPARSER_BOOST_TCAM 56 > +#define ICE_SID_RXPARSER_PROTO_GRP 57 > +#define ICE_SID_RXPARSER_METADATA_INIT 58 > +#define ICE_SID_RXPARSER_XLT0 59 > + > +#define ICE_SID_TXPARSER_CAM 60 > +#define ICE_SID_TXPARSER_NOMATCH_CAM 61 > +#define ICE_SID_TXPARSER_IMEM 62 > +#define ICE_SID_TXPARSER_XLT0_BUILDER 63 > +#define ICE_SID_TXPARSER_NODE_PTYPE 64 > +#define ICE_SID_TXPARSER_MARKER_PTYPE 65 > +#define ICE_SID_TXPARSER_BOOST_TCAM 66 > +#define ICE_SID_TXPARSER_PROTO_GRP 67 > +#define ICE_SID_TXPARSER_METADATA_INIT 68 > +#define ICE_SID_TXPARSER_XLT0 69 > + > +#define ICE_SID_RXPARSER_INIT_REDIR 70 > +#define ICE_SID_TXPARSER_INIT_REDIR 71 > +#define ICE_SID_RXPARSER_MARKER_GRP 72 > +#define ICE_SID_TXPARSER_MARKER_GRP 73 > +#define ICE_SID_RXPARSER_LAST_PROTO 74 > +#define ICE_SID_TXPARSER_LAST_PROTO 75 > +#define ICE_SID_RXPARSER_PG_SPILL 76 > +#define ICE_SID_TXPARSER_PG_SPILL 77 > +#define ICE_SID_RXPARSER_NOMATCH_SPILL 78 > +#define ICE_SID_TXPARSER_NOMATCH_SPILL 79 > + > +#define ICE_SID_XLT0_PE 80 > +#define ICE_SID_XLT_KEY_BUILDER_PE 81 > +#define ICE_SID_XLT1_PE 82 > +#define ICE_SID_XLT2_PE 83 > +#define ICE_SID_PROFID_TCAM_PE 84 > +#define ICE_SID_PROFID_REDIR_PE 85 > +#define ICE_SID_FLD_VEC_PE 86 > +#define ICE_SID_CDID_KEY_BUILDER_PE 87 > +#define ICE_SID_CDID_REDIR_PE 88 > + > +#define ICE_SID_RXPARSER_FLAG_REDIR 97 > + > +/* Label Metadata section IDs */ > +#define ICE_SID_LBL_FIRST 0x80000010 > +#define ICE_SID_LBL_RXPARSER_IMEM 0x80000010 > +#define ICE_SID_LBL_TXPARSER_IMEM 0x80000011 > +#define ICE_SID_LBL_RESERVED_12 0x80000012 > +#define ICE_SID_LBL_RESERVED_13 0x80000013 > +#define ICE_SID_LBL_RXPARSER_MARKER 0x80000014 > +#define ICE_SID_LBL_TXPARSER_MARKER 0x80000015 > +#define ICE_SID_LBL_PTYPE 0x80000016 > +#define ICE_SID_LBL_PROTOCOL_ID 0x80000017 > +#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018 > +#define ICE_SID_LBL_TXPARSER_TMEM 0x80000019 > +#define ICE_SID_LBL_RXPARSER_PG 0x8000001A > +#define ICE_SID_LBL_TXPARSER_PG 0x8000001B > +#define ICE_SID_LBL_RXPARSER_M_TCAM 0x8000001C > +#define ICE_SID_LBL_TXPARSER_M_TCAM 0x8000001D > +#define ICE_SID_LBL_SW_PROFID_TCAM 0x8000001E > +#define ICE_SID_LBL_ACL_PROFID_TCAM 0x8000001F > +#define ICE_SID_LBL_PE_PROFID_TCAM 0x80000020 > +#define ICE_SID_LBL_RSS_PROFID_TCAM 
0x80000021 > +#define ICE_SID_LBL_FD_PROFID_TCAM 0x80000022 > +#define ICE_SID_LBL_FLAG 0x80000023 > +#define ICE_SID_LBL_REG 0x80000024 > +#define ICE_SID_LBL_SW_PTG 0x80000025 > +#define ICE_SID_LBL_ACL_PTG 0x80000026 > +#define ICE_SID_LBL_PE_PTG 0x80000027 > +#define ICE_SID_LBL_RSS_PTG 0x80000028 > +#define ICE_SID_LBL_FD_PTG 0x80000029 > +#define ICE_SID_LBL_SW_VSIG 0x8000002A > +#define ICE_SID_LBL_ACL_VSIG 0x8000002B > +#define ICE_SID_LBL_PE_VSIG 0x8000002C > +#define ICE_SID_LBL_RSS_VSIG 0x8000002D > +#define ICE_SID_LBL_FD_VSIG 0x8000002E > +#define ICE_SID_LBL_PTYPE_META 0x8000002F > +#define ICE_SID_LBL_SW_PROFID 0x80000030 > +#define ICE_SID_LBL_ACL_PROFID 0x80000031 > +#define ICE_SID_LBL_PE_PROFID 0x80000032 > +#define ICE_SID_LBL_RSS_PROFID 0x80000033 > +#define ICE_SID_LBL_FD_PROFID 0x80000034 > +#define ICE_SID_LBL_RXPARSER_MARKER_GRP 0x80000035 > +#define ICE_SID_LBL_TXPARSER_MARKER_GRP 0x80000036 > +#define ICE_SID_LBL_RXPARSER_PROTO 0x80000037 > +#define ICE_SID_LBL_TXPARSER_PROTO 0x80000038 > +/* The following define MUST be updated to reflect the last label sectio= n > ID */ > +#define ICE_SID_LBL_LAST 0x80000038 > + > +/* Label ICE runtime configuration section IDs */ > +#define ICE_SID_TX_5_LAYER_TOPO 0x10 > + > +enum ice_block { > + ICE_BLK_SW =3D 0, > + ICE_BLK_ACL, > + ICE_BLK_FD, > + ICE_BLK_RSS, > + ICE_BLK_PE, > + ICE_BLK_COUNT > +}; > + > +enum ice_sect { > + ICE_XLT0 =3D 0, > + ICE_XLT_KB, > + ICE_XLT1, > + ICE_XLT2, > + ICE_PROF_TCAM, > + ICE_PROF_REDIR, > + ICE_VEC_TBL, > + ICE_CDID_KB, > + ICE_CDID_REDIR, > + ICE_SECT_COUNT > +}; > + > +/* package buffer building */ > + > +struct ice_buf_build { > + struct ice_buf buf; > + u16 reserved_section_table_entries; > +}; > + > +struct ice_pkg_enum { > + struct ice_buf_table *buf_table; > + u32 buf_idx; > + > + u32 type; > + struct ice_buf_hdr *buf; > + u32 sect_idx; > + void *sect; > + u32 sect_type; > + > + u32 entry_idx; > + void *(*handler)(u32 sect_type, void *section, u32 index, u32 > *offset); > +}; > + > +/* package Marker PType TCAM entry */ > +struct ice_marker_ptype_tcam_entry { > +#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024 > + __le16 addr; > + __le16 ptype; > + u8 keys[20]; > +}; > + > +struct ice_marker_ptype_tcam_section { > + __le16 count; > + __le16 reserved; > + struct ice_marker_ptype_tcam_entry tcam[STRUCT_HACK_VAR_LEN]; > +}; > + > +#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF > ICE_MAX_ENTRIES_IN_BUF( \ > + ice_struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, 1) > - \ > + sizeof(struct ice_marker_ptype_tcam_entry), \ > + sizeof(struct ice_marker_ptype_tcam_entry)) > + > +struct ice_hw; > + > +enum ice_status > +ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type > access); > +void ice_release_change_lock(struct ice_hw *hw); > + > +struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw); > +void * > +ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)= ; > +enum ice_status > +ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count); > +enum ice_status > +ice_get_sw_fv_list(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, > + ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list); > +u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld); > + > +enum ice_status > +ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count); > +enum ice_status > +ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 > count); > +void ice_release_global_cfg_lock(struct ice_hw *hw); > +struct ice_generic_seg_hdr * > 
+ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, > + struct ice_pkg_hdr *pkg_hdr); > +enum ice_ddp_state > +ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len); > +enum ice_ddp_state > +ice_get_pkg_info(struct ice_hw *hw); > +void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg); > +struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg); > +enum ice_status > +ice_acquire_global_cfg_lock(struct ice_hw *hw, > + enum ice_aq_res_access_type access); > + > +struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg); > +struct ice_buf_hdr * > +ice_pkg_enum_buf(struct ice_seg *ice_seg, struct ice_pkg_enum *state); > +bool > +ice_pkg_advance_sect(struct ice_seg *ice_seg, struct ice_pkg_enum *state= ); > +void * > +ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state, > + u32 sect_type, u32 *offset, > + void *(*handler)(u32 sect_type, void *section, > + u32 index, u32 *offset)); > +void * > +ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state= , > + u32 sect_type); > +enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len); > +enum ice_ddp_state > +ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len); > +bool ice_is_init_pkg_successful(enum ice_ddp_state state); > +void ice_free_seg(struct ice_hw *hw); > + > +struct ice_buf_build * > +ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, > + void **section); > +struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); > +void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); > + > +enum ice_status ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len); > + > +#endif /* _ICE_DDP_H_ */ > diff --git a/drivers/net/ice/base/ice_defs.h > b/drivers/net/ice/base/ice_defs.h > new file mode 100644 > index 0000000000..6e886f6aac > --- /dev/null > +++ b/drivers/net/ice/base/ice_defs.h > @@ -0,0 +1,49 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2001-2022 Intel Corporation > + */ > + > +#ifndef _ICE_DEFS_H_ > +#define _ICE_DEFS_H_ > + > +#define ETH_ALEN 6 > + > +#define ETH_HEADER_LEN 14 > + > +#define BIT(a) (1UL << (a)) > +#define BIT_ULL(a) (1ULL << (a)) > + > +#define BITS_PER_BYTE 8 > + > +#define _FORCE_ > + > +#define ICE_BYTES_PER_WORD 2 > +#define ICE_BYTES_PER_DWORD 4 > +#define ICE_MAX_TRAFFIC_CLASS 8 > + > +/** > + * ROUND_UP - round up to next arbitrary multiple (not a power of 2) > + * @a: value to round up > + * @b: arbitrary multiple > + * > + * Round up to the next multiple of the arbitrary b. > + * Note, when b is a power of 2 use ICE_ALIGN() instead. 
> + */ > +#define ROUND_UP(a, b) ((b) * DIVIDE_AND_ROUND_UP((a), (b))) > + > +#define MIN_T(_t, _a, _b) min((_t)(_a), (_t)(_b)) > + > +#define IS_ASCII(_ch) ((_ch) < 0x80) > + > +#define STRUCT_HACK_VAR_LEN > +/** > + * ice_struct_size - size of struct with C99 flexible array member > + * @ptr: pointer to structure > + * @field: flexible array member (last member of the structure) > + * @num: number of elements of that flexible array member > + */ > +#define ice_struct_size(ptr, field, num) \ > + (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) > + > +#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) > + > +#endif /* _ICE_DEFS_H_ */ > diff --git a/drivers/net/ice/base/ice_flex_pipe.c > b/drivers/net/ice/base/ice_flex_pipe.c > index 3918169001..a43d7ef76b 100644 > --- a/drivers/net/ice/base/ice_flex_pipe.c > +++ b/drivers/net/ice/base/ice_flex_pipe.c > @@ -3,6 +3,7 @@ > */ >=20 > #include "ice_common.h" > +#include "ice_ddp.h" > #include "ice_flex_pipe.h" > #include "ice_protocol_type.h" > #include "ice_flow.h" > @@ -106,2049 +107,224 @@ static u32 ice_sect_id(enum ice_block blk, > enum ice_sect sect) > } >=20 > /** > - * ice_pkg_val_buf > - * @buf: pointer to the ice buffer > - * > - * This helper function validates a buffer's header. > - */ > -static struct ice_buf_hdr *ice_pkg_val_buf(struct ice_buf *buf) > -{ > - struct ice_buf_hdr *hdr; > - u16 section_count; > - u16 data_end; > - > - hdr =3D (struct ice_buf_hdr *)buf->buf; > - /* verify data */ > - section_count =3D LE16_TO_CPU(hdr->section_count); > - if (section_count < ICE_MIN_S_COUNT || section_count > > ICE_MAX_S_COUNT) > - return NULL; > - > - data_end =3D LE16_TO_CPU(hdr->data_end); > - if (data_end < ICE_MIN_S_DATA_END || data_end > > ICE_MAX_S_DATA_END) > - return NULL; > - > - return hdr; > -} > - > -/** > - * ice_find_buf_table > - * @ice_seg: pointer to the ice segment > - * > - * Returns the address of the buffer table within the ice segment. > - */ > -static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg) > -{ > - struct ice_nvm_table *nvms; > - > - nvms =3D (struct ice_nvm_table *) > - (ice_seg->device_table + > - LE32_TO_CPU(ice_seg->device_table_count)); > - > - return (_FORCE_ struct ice_buf_table *) > - (nvms->vers + LE32_TO_CPU(nvms->table_count)); > -} > - > -/** > - * ice_pkg_enum_buf > - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > - * @state: pointer to the enum state > - * > - * This function will enumerate all the buffers in the ice segment. The = first > - * call is made with the ice_seg parameter non-NULL; on subsequent calls= , > - * ice_seg is set to NULL which continues the enumeration. When the > function > - * returns a NULL pointer, then the end of the buffers has been reached,= or > an > - * unexpected value has been detected (for example an invalid section > count or > - * an invalid buffer end value). 
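
[Reviewer note on the ice_defs.h hunk above, sketch only] Typical use of ice_struct_size() when sizing an allocation that ends in a flexible array member; alloc_buf_hdr() is a hypothetical helper, and the pointer is only used inside sizeof, so referencing it before assignment is fine:

static struct ice_buf_hdr *alloc_buf_hdr(struct ice_hw *hw, u16 nsections)
{
        struct ice_buf_hdr *hdr;

        /* header plus nsections trailing ice_section_entry elements */
        hdr = (struct ice_buf_hdr *)
                ice_malloc(hw, ice_struct_size(hdr, section_entry,
                                               nsections));
        return hdr;
}
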
> - */ > -static struct ice_buf_hdr * > -ice_pkg_enum_buf(struct ice_seg *ice_seg, struct ice_pkg_enum *state) > -{ > - if (ice_seg) { > - state->buf_table =3D ice_find_buf_table(ice_seg); > - if (!state->buf_table) > - return NULL; > - > - state->buf_idx =3D 0; > - return ice_pkg_val_buf(state->buf_table->buf_array); > - } > - > - if (++state->buf_idx < LE32_TO_CPU(state->buf_table->buf_count)) > - return ice_pkg_val_buf(state->buf_table->buf_array + > - state->buf_idx); > - else > - return NULL; > -} > - > -/** > - * ice_pkg_advance_sect > - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > - * @state: pointer to the enum state > - * > - * This helper function will advance the section within the ice segment, > - * also advancing the buffer if needed. > - */ > -static bool > -ice_pkg_advance_sect(struct ice_seg *ice_seg, struct ice_pkg_enum *state= ) > -{ > - if (!ice_seg && !state->buf) > - return false; > - > - if (!ice_seg && state->buf) > - if (++state->sect_idx < LE16_TO_CPU(state->buf- > >section_count)) > - return true; > - > - state->buf =3D ice_pkg_enum_buf(ice_seg, state); > - if (!state->buf) > - return false; > - > - /* start of new buffer, reset section index */ > - state->sect_idx =3D 0; > - return true; > -} > - > -/** > - * ice_pkg_enum_section > - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > - * @state: pointer to the enum state > - * @sect_type: section type to enumerate > - * > - * This function will enumerate all the sections of a particular type in= the > - * ice segment. The first call is made with the ice_seg parameter non-NU= LL; > - * on subsequent calls, ice_seg is set to NULL which continues the > enumeration. > - * When the function returns a NULL pointer, then the end of the matchin= g > - * sections has been reached. > - */ > -void * > -ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state= , > - u32 sect_type) > -{ > - u16 offset, size; > - > - if (ice_seg) > - state->type =3D sect_type; > - > - if (!ice_pkg_advance_sect(ice_seg, state)) > - return NULL; > - > - /* scan for next matching section */ > - while (state->buf->section_entry[state->sect_idx].type !=3D > - CPU_TO_LE32(state->type)) > - if (!ice_pkg_advance_sect(NULL, state)) > - return NULL; > - > - /* validate section */ > - offset =3D LE16_TO_CPU(state->buf->section_entry[state- > >sect_idx].offset); > - if (offset < ICE_MIN_S_OFF || offset > ICE_MAX_S_OFF) > - return NULL; > - > - size =3D LE16_TO_CPU(state->buf->section_entry[state->sect_idx].size); > - if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) > - return NULL; > - > - /* make sure the section fits in the buffer */ > - if (offset + size > ICE_PKG_BUF_SIZE) > - return NULL; > - > - state->sect_type =3D > - LE32_TO_CPU(state->buf->section_entry[state- > >sect_idx].type); > - > - /* calc pointer to this section */ > - state->sect =3D ((u8 *)state->buf) + > - LE16_TO_CPU(state->buf->section_entry[state- > >sect_idx].offset); > - > - return state->sect; > -} > - > -/** > - * ice_pkg_enum_entry > - * @ice_seg: pointer to the ice segment (or NULL on subsequent calls) > - * @state: pointer to the enum state > - * @sect_type: section type to enumerate > - * @offset: pointer to variable that receives the offset in the table > (optional) > - * @handler: function that handles access to the entries into the sectio= n > type > - * > - * This function will enumerate all the entries in particular section ty= pe in > - * the ice segment. 
The first call is made with the ice_seg parameter no= n- > NULL; > - * on subsequent calls, ice_seg is set to NULL which continues the > enumeration. > - * When the function returns a NULL pointer, then the end of the entries > has > - * been reached. > - * > - * Since each section may have a different header and entry size, the > handler > - * function is needed to determine the number and location entries in > each > - * section. > - * > - * The offset parameter is optional, but should be used for sections tha= t > - * contain an offset for each section table. For such cases, the section > handler > - * function must return the appropriate offset + index to give the > absolution > - * offset for each entry. For example, if the base for a section's heade= r > - * indicates a base offset of 10, and the index for the entry is 2, then > - * section handler function should set the offset to 10 + 2 =3D 12. > - */ > -void * > -ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state, > - u32 sect_type, u32 *offset, > - void *(*handler)(u32 sect_type, void *section, > - u32 index, u32 *offset)) > -{ > - void *entry; > - > - if (ice_seg) { > - if (!handler) > - return NULL; > - > - if (!ice_pkg_enum_section(ice_seg, state, sect_type)) > - return NULL; > - > - state->entry_idx =3D 0; > - state->handler =3D handler; > - } else { > - state->entry_idx++; > - } > - > - if (!state->handler) > - return NULL; > - > - /* get entry */ > - entry =3D state->handler(state->sect_type, state->sect, state- > >entry_idx, > - offset); > - if (!entry) { > - /* end of a section, look for another section of this type */ > - if (!ice_pkg_enum_section(NULL, state, 0)) > - return NULL; > - > - state->entry_idx =3D 0; > - entry =3D state->handler(state->sect_type, state->sect, > - state->entry_idx, offset); > - } > - > - return entry; > -} > - > -/** > - * ice_hw_ptype_ena - check if the PTYPE is enabled or not > - * @hw: pointer to the HW structure > - * @ptype: the hardware PTYPE > - */ > -bool ice_hw_ptype_ena(struct ice_hw *hw, u16 ptype) > -{ > - return ptype < ICE_FLOW_PTYPE_MAX && > - ice_is_bit_set(hw->hw_ptype, ptype); > -} > - > -/** > - * ice_marker_ptype_tcam_handler > - * @sect_type: section type > - * @section: pointer to section > - * @index: index of the Marker PType TCAM entry to be returned > - * @offset: pointer to receive absolute offset, always 0 for ptype TCAM > sections > - * > - * This is a callback function that can be passed to ice_pkg_enum_entry. > - * Handles enumeration of individual Marker PType TCAM entries. 
> - */ > -static void * > -ice_marker_ptype_tcam_handler(u32 sect_type, void *section, u32 index, > - u32 *offset) > -{ > - struct ice_marker_ptype_tcam_section *marker_ptype; > - > - if (!section) > - return NULL; > - > - if (sect_type !=3D ICE_SID_RXPARSER_MARKER_PTYPE) > - return NULL; > - > - if (index > ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF) > - return NULL; > - > - if (offset) > - *offset =3D 0; > - > - marker_ptype =3D (struct ice_marker_ptype_tcam_section *)section; > - if (index >=3D LE16_TO_CPU(marker_ptype->count)) > - return NULL; > - > - return marker_ptype->tcam + index; > -} > - > -/** > - * ice_fill_hw_ptype - fill the enabled PTYPE bit information > - * @hw: pointer to the HW structure > - */ > -static void > -ice_fill_hw_ptype(struct ice_hw *hw) > -{ > - struct ice_marker_ptype_tcam_entry *tcam; > - struct ice_seg *seg =3D hw->seg; > - struct ice_pkg_enum state; > - > - ice_zero_bitmap(hw->hw_ptype, ICE_FLOW_PTYPE_MAX); > - if (!seg) > - return; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - do { > - tcam =3D (struct ice_marker_ptype_tcam_entry *) > - ice_pkg_enum_entry(seg, &state, > - ICE_SID_RXPARSER_MARKER_PTYPE, > NULL, > - ice_marker_ptype_tcam_handler); > - if (tcam && > - LE16_TO_CPU(tcam->addr) < > ICE_MARKER_PTYPE_TCAM_ADDR_MAX && > - LE16_TO_CPU(tcam->ptype) < ICE_FLOW_PTYPE_MAX) > - ice_set_bit(LE16_TO_CPU(tcam->ptype), hw- > >hw_ptype); > - > - seg =3D NULL; > - } while (tcam); > -} > - > -/** > - * ice_boost_tcam_handler > - * @sect_type: section type > - * @section: pointer to section > - * @index: index of the boost TCAM entry to be returned > - * @offset: pointer to receive absolute offset, always 0 for boost TCAM > sections > - * > - * This is a callback function that can be passed to ice_pkg_enum_entry. > - * Handles enumeration of individual boost TCAM entries. > - */ > -static void * > -ice_boost_tcam_handler(u32 sect_type, void *section, u32 index, u32 > *offset) > -{ > - struct ice_boost_tcam_section *boost; > - > - if (!section) > - return NULL; > - > - if (sect_type !=3D ICE_SID_RXPARSER_BOOST_TCAM) > - return NULL; > - > - if (index > ICE_MAX_BST_TCAMS_IN_BUF) > - return NULL; > - > - if (offset) > - *offset =3D 0; > - > - boost =3D (struct ice_boost_tcam_section *)section; > - if (index >=3D LE16_TO_CPU(boost->count)) > - return NULL; > - > - return boost->tcam + index; > -} > - > -/** > - * ice_find_boost_entry > - * @ice_seg: pointer to the ice segment (non-NULL) > - * @addr: Boost TCAM address of entry to search for > - * @entry: returns pointer to the entry > - * > - * Finds a particular Boost TCAM entry and returns a pointer to that ent= ry > - * if it is found. The ice_seg parameter must not be NULL since the firs= t call > - * to ice_pkg_enum_entry requires a pointer to an actual ice_segment > structure. 
> - */ > -static enum ice_status > -ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr, > - struct ice_boost_tcam_entry **entry) > -{ > - struct ice_boost_tcam_entry *tcam; > - struct ice_pkg_enum state; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - if (!ice_seg) > - return ICE_ERR_PARAM; > - > - do { > - tcam =3D (struct ice_boost_tcam_entry *) > - ice_pkg_enum_entry(ice_seg, &state, > - ICE_SID_RXPARSER_BOOST_TCAM, > NULL, > - ice_boost_tcam_handler); > - if (tcam && LE16_TO_CPU(tcam->addr) =3D=3D addr) { > - *entry =3D tcam; > - return ICE_SUCCESS; > - } > - > - ice_seg =3D NULL; > - } while (tcam); > - > - *entry =3D NULL; > - return ICE_ERR_CFG; > -} > - > -/** > - * ice_label_enum_handler > - * @sect_type: section type > - * @section: pointer to section > - * @index: index of the label entry to be returned > - * @offset: pointer to receive absolute offset, always zero for label se= ctions > - * > - * This is a callback function that can be passed to ice_pkg_enum_entry. > - * Handles enumeration of individual label entries. > - */ > -static void * > -ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, > u32 index, > - u32 *offset) > -{ > - struct ice_label_section *labels; > - > - if (!section) > - return NULL; > - > - if (index > ICE_MAX_LABELS_IN_BUF) > - return NULL; > - > - if (offset) > - *offset =3D 0; > - > - labels =3D (struct ice_label_section *)section; > - if (index >=3D LE16_TO_CPU(labels->count)) > - return NULL; > - > - return labels->label + index; > -} > - > -/** > - * ice_enum_labels > - * @ice_seg: pointer to the ice segment (NULL on subsequent calls) > - * @type: the section type that will contain the label (0 on subsequent > calls) > - * @state: ice_pkg_enum structure that will hold the state of the > enumeration > - * @value: pointer to a value that will return the label's value if foun= d > - * > - * Enumerates a list of labels in the package. The caller will call > - * ice_enum_labels(ice_seg, type, ...) to start the enumeration, then ca= ll > - * ice_enum_labels(NULL, 0, ...) to continue. When the function returns = a > NULL > - * the end of the list has been reached. > - */ > -static char * > -ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum > *state, > - u16 *value) > -{ > - struct ice_label *label; > - > - /* Check for valid label section on first call */ > - if (type && !(type >=3D ICE_SID_LBL_FIRST && type <=3D > ICE_SID_LBL_LAST)) > - return NULL; > - > - label =3D (struct ice_label *)ice_pkg_enum_entry(ice_seg, state, type, > - NULL, > - > ice_label_enum_handler); > - if (!label) > - return NULL; > - > - *value =3D LE16_TO_CPU(label->value); > - return label->name; > -} > - > -/** > - * ice_add_tunnel_hint > - * @hw: pointer to the HW structure > - * @label_name: label text > - * @val: value of the tunnel port boost entry > - */ > -static void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 > val) > -{ > - if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { > - u16 i; > - > - for (i =3D 0; tnls[i].type !=3D TNL_LAST; i++) { > - size_t len =3D strlen(tnls[i].label_prefix); > - > - /* Look for matching label start, before continuing > */ > - if (strncmp(label_name, tnls[i].label_prefix, len)) > - continue; > - > - /* Make sure this label matches our PF. Note that > the PF > - * character ('0' - '7') will be located where our > - * prefix string's null terminator is located. 
> - */ > - if ((label_name[len] - '0') =3D=3D hw->pf_id) { > - hw->tnl.tbl[hw->tnl.count].type =3D tnls[i].type; > - hw->tnl.tbl[hw->tnl.count].valid =3D false; > - hw->tnl.tbl[hw->tnl.count].in_use =3D false; > - hw->tnl.tbl[hw->tnl.count].marked =3D false; > - hw->tnl.tbl[hw->tnl.count].boost_addr =3D val; > - hw->tnl.tbl[hw->tnl.count].port =3D 0; > - hw->tnl.count++; > - break; > - } > - } > - } > -} > - > -/** > - * ice_add_dvm_hint > - * @hw: pointer to the HW structure > - * @val: value of the boost entry > - * @enable: true if entry needs to be enabled, or false if needs to be > disabled > - */ > -static void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) > -{ > - if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { > - hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr =3D val; > - hw->dvm_upd.tbl[hw->dvm_upd.count].enable =3D enable; > - hw->dvm_upd.count++; > - } > -} > - > -/** > - * ice_init_pkg_hints > - * @hw: pointer to the HW structure > - * @ice_seg: pointer to the segment of the package scan (non-NULL) > - * > - * This function will scan the package and save off relevant information > - * (hints or metadata) for driver use. The ice_seg parameter must not be > NULL > - * since the first call to ice_enum_labels requires a pointer to an actu= al > - * ice_seg structure. > - */ > -static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_se= g) > -{ > - struct ice_pkg_enum state; > - char *label_name; > - u16 val; > - int i; > - > - ice_memset(&hw->tnl, 0, sizeof(hw->tnl), ICE_NONDMA_MEM); > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - if (!ice_seg) > - return; > - > - label_name =3D ice_enum_labels(ice_seg, > ICE_SID_LBL_RXPARSER_TMEM, &state, > - &val); > - > - while (label_name) { > - if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) > - /* check for a tunnel entry */ > - ice_add_tunnel_hint(hw, label_name, val); > - > - /* check for a dvm mode entry */ > - else if (!strncmp(label_name, ICE_DVM_PRE, > strlen(ICE_DVM_PRE))) > - ice_add_dvm_hint(hw, val, true); > - > - /* check for a svm mode entry */ > - else if (!strncmp(label_name, ICE_SVM_PRE, > strlen(ICE_SVM_PRE))) > - ice_add_dvm_hint(hw, val, false); > - > - label_name =3D ice_enum_labels(NULL, 0, &state, &val); > - } > - > - /* Cache the appropriate boost TCAM entry pointers for tunnels */ > - for (i =3D 0; i < hw->tnl.count; i++) { > - ice_find_boost_entry(ice_seg, hw->tnl.tbl[i].boost_addr, > - &hw->tnl.tbl[i].boost_entry); > - if (hw->tnl.tbl[i].boost_entry) > - hw->tnl.tbl[i].valid =3D true; > - } > - > - /* Cache the appropriate boost TCAM entry pointers for DVM and > SVM */ > - for (i =3D 0; i < hw->dvm_upd.count; i++) > - ice_find_boost_entry(ice_seg, hw- > >dvm_upd.tbl[i].boost_addr, > - &hw->dvm_upd.tbl[i].boost_entry); > -} > - > -/* Key creation */ > - > -#define ICE_DC_KEY 0x1 /* don't care */ > -#define ICE_DC_KEYINV 0x1 > -#define ICE_NM_KEY 0x0 /* never match */ > -#define ICE_NM_KEYINV 0x0 > -#define ICE_0_KEY 0x1 /* match 0 */ > -#define ICE_0_KEYINV 0x0 > -#define ICE_1_KEY 0x0 /* match 1 */ > -#define ICE_1_KEYINV 0x1 > - > -/** > - * ice_gen_key_word - generate 16-bits of a key/mask word > - * @val: the value > - * @valid: valid bits mask (change only the valid bits) > - * @dont_care: don't care mask > - * @nvr_mtch: never match mask > - * @key: pointer to an array of where the resulting key portion > - * @key_inv: pointer to an array of where the resulting key invert porti= on > - * > - * This function generates 16-bits from a 8-bit value, an 8-bit don't 
ca= re > mask > - * and an 8-bit never match mask. The 16-bits of output are divided into= 8 > bits > - * of key and 8 bits of key invert. > - * > - * '0' =3D b01, always match a 0 bit > - * '1' =3D b10, always match a 1 bit > - * '?' =3D b11, don't care bit (always matches) > - * '~' =3D b00, never match bit > - * > - * Input: > - * val: b0 1 0 1 0 1 > - * dont_care: b0 0 1 1 0 0 > - * never_mtch: b0 0 0 0 1 1 > - * ------------------------------ > - * Result: key: b01 10 11 11 00 00 > - */ > -static enum ice_status > -ice_gen_key_word(u8 val, u8 valid, u8 dont_care, u8 nvr_mtch, u8 *key, > - u8 *key_inv) > -{ > - u8 in_key =3D *key, in_key_inv =3D *key_inv; > - u8 i; > - > - /* 'dont_care' and 'nvr_mtch' masks cannot overlap */ > - if ((dont_care ^ nvr_mtch) !=3D (dont_care | nvr_mtch)) > - return ICE_ERR_CFG; > - > - *key =3D 0; > - *key_inv =3D 0; > - > - /* encode the 8 bits into 8-bit key and 8-bit key invert */ > - for (i =3D 0; i < 8; i++) { > - *key >>=3D 1; > - *key_inv >>=3D 1; > - > - if (!(valid & 0x1)) { /* change only valid bits */ > - *key |=3D (in_key & 0x1) << 7; > - *key_inv |=3D (in_key_inv & 0x1) << 7; > - } else if (dont_care & 0x1) { /* don't care bit */ > - *key |=3D ICE_DC_KEY << 7; > - *key_inv |=3D ICE_DC_KEYINV << 7; > - } else if (nvr_mtch & 0x1) { /* never match bit */ > - *key |=3D ICE_NM_KEY << 7; > - *key_inv |=3D ICE_NM_KEYINV << 7; > - } else if (val & 0x01) { /* exact 1 match */ > - *key |=3D ICE_1_KEY << 7; > - *key_inv |=3D ICE_1_KEYINV << 7; > - } else { /* exact 0 match */ > - *key |=3D ICE_0_KEY << 7; > - *key_inv |=3D ICE_0_KEYINV << 7; > - } > - > - dont_care >>=3D 1; > - nvr_mtch >>=3D 1; > - valid >>=3D 1; > - val >>=3D 1; > - in_key >>=3D 1; > - in_key_inv >>=3D 1; > - } > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_bits_max_set - determine if the number of bits set is within a > maximum > - * @mask: pointer to the byte array which is the mask > - * @size: the number of bytes in the mask > - * @max: the max number of set bits > - * > - * This function determines if there are at most 'max' number of bits se= t in > an > - * array. Returns true if the number for bits set is <=3D max or will re= turn > false > - * otherwise. > - */ > -static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max) > -{ > - u16 count =3D 0; > - u16 i; > - > - /* check each byte */ > - for (i =3D 0; i < size; i++) { > - /* if 0, go to next byte */ > - if (!mask[i]) > - continue; > - > - /* We know there is at least one set bit in this byte because > of > - * the above check; if we already have found 'max' number > of > - * bits set, then we can return failure now. 
> - */ > - if (count =3D=3D max) > - return false; > - > - /* count the bits in this byte, checking threshold */ > - count +=3D ice_hweight8(mask[i]); > - if (count > max) > - return false; > - } > - > - return true; > -} > - > -/** > - * ice_set_key - generate a variable sized key with multiples of 16-bits > - * @key: pointer to where the key will be stored > - * @size: the size of the complete key in bytes (must be even) > - * @val: array of 8-bit values that makes up the value portion of the ke= y > - * @upd: array of 8-bit masks that determine what key portion to update > - * @dc: array of 8-bit masks that make up the don't care mask > - * @nm: array of 8-bit masks that make up the never match mask > - * @off: the offset of the first byte in the key to update > - * @len: the number of bytes in the key update > - * > - * This function generates a key from a value, a don't care mask and a > never > - * match mask. > - * upd, dc, and nm are optional parameters, and can be NULL: > - * upd =3D=3D NULL --> upd mask is all 1's (update all bits) > - * dc =3D=3D NULL --> dc mask is all 0's (no don't care bits) > - * nm =3D=3D NULL --> nm mask is all 0's (no never match bits) > - */ > -enum ice_status > -ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off= , > - u16 len) > -{ > - u16 half_size; > - u16 i; > - > - /* size must be a multiple of 2 bytes. */ > - if (size % 2) > - return ICE_ERR_CFG; > - half_size =3D size / 2; > - > - if (off + len > half_size) > - return ICE_ERR_CFG; > - > - /* Make sure at most one bit is set in the never match mask. Having > more > - * than one never match mask bit set will cause HW to consume > excessive > - * power otherwise; this is a power management efficiency check. > - */ > -#define ICE_NVR_MTCH_BITS_MAX 1 > - if (nm && !ice_bits_max_set(nm, len, ICE_NVR_MTCH_BITS_MAX)) > - return ICE_ERR_CFG; > - > - for (i =3D 0; i < len; i++) > - if (ice_gen_key_word(val[i], upd ? upd[i] : 0xff, > - dc ? dc[i] : 0, nm ? nm[i] : 0, > - key + off + i, key + half_size + off + i)) > - return ICE_ERR_CFG; > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_acquire_global_cfg_lock > - * @hw: pointer to the HW structure > - * @access: access type (read or write) > - * > - * This function will request ownership of the global config lock for re= ading > - * or writing of the package. When attempting to obtain write access, th= e > - * caller must check for the following two return values: > - * > - * ICE_SUCCESS - Means the caller has acquired the global config = lock > - * and can perform writing of the package. > - * ICE_ERR_AQ_NO_WORK - Indicates another driver has already written > the > - * package or has found that no update was necessar= y; in > - * this case, the caller can just skip performing a= ny > - * update of the package. > - */ > -static enum ice_status > -ice_acquire_global_cfg_lock(struct ice_hw *hw, > - enum ice_aq_res_access_type access) > -{ > - enum ice_status status; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - status =3D ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, access, > - ICE_GLOBAL_CFG_LOCK_TIMEOUT); > - > - if (status =3D=3D ICE_ERR_AQ_NO_WORK) > - ice_debug(hw, ICE_DBG_PKG, "Global config lock: No work > to do\n"); > - > - return status; > -} > - > -/** > - * ice_release_global_cfg_lock > - * @hw: pointer to the HW structure > - * > - * This function will release the global config lock. 
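
Coming back to ice_set_key() above: the destination buffer is split into a
match half followed by a key-invert half, with 'len' bytes written into each
half starting at 'off'. A purely illustrative fragment (buffer size and mask
values are made up for the example):

    static enum ice_status example_build_key(void)
    {
            u8 key[8] = { 0 };    /* size 8 -> half_size 4: bytes 0-3 are
                                   * the key, bytes 4-7 the key invert
                                   */
            u8 val[1] = { 0x2A }; /* value encoded into key byte 0 */
            u8 dc[1]  = { 0xF0 }; /* upper nibble becomes don't-care */

            /* upd == NULL: update all bits; nm == NULL: no never-match */
            return ice_set_key(key, sizeof(key), val, NULL, dc, NULL, 0, 1);
    }

On success, key[0] holds the generated key bits and key[4] the matching
key-invert bits, following the '0'/'1'/'?'/'~' encoding table quoted earlier.
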
> - */ > -static void ice_release_global_cfg_lock(struct ice_hw *hw) > -{ > - ice_release_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID); > -} > - > -/** > - * ice_acquire_change_lock > - * @hw: pointer to the HW structure > - * @access: access type (read or write) > - * > - * This function will request ownership of the change lock. > - */ > -enum ice_status > -ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type > access) > -{ > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - return ice_acquire_res(hw, ICE_CHANGE_LOCK_RES_ID, access, > - ICE_CHANGE_LOCK_TIMEOUT); > -} > - > -/** > - * ice_release_change_lock > - * @hw: pointer to the HW structure > - * > - * This function will release the change lock using the proper Admin > Command. > - */ > -void ice_release_change_lock(struct ice_hw *hw) > -{ > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - ice_release_res(hw, ICE_CHANGE_LOCK_RES_ID); > -} > - > -/** > - * ice_aq_download_pkg > - * @hw: pointer to the hardware structure > - * @pkg_buf: the package buffer to transfer > - * @buf_size: the size of the package buffer > - * @last_buf: last buffer indicator > - * @error_offset: returns error offset > - * @error_info: returns error information > - * @cd: pointer to command details structure or NULL > - * > - * Download Package (0x0C40) > - */ > -static enum ice_status > -ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, > - u16 buf_size, bool last_buf, u32 *error_offset, > - u32 *error_info, struct ice_sq_cd *cd) > -{ > - struct ice_aqc_download_pkg *cmd; > - struct ice_aq_desc desc; > - enum ice_status status; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - if (error_offset) > - *error_offset =3D 0; > - if (error_info) > - *error_info =3D 0; > - > - cmd =3D &desc.params.download_pkg; > - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_download_pkg); > - desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > - > - if (last_buf) > - cmd->flags |=3D ICE_AQC_DOWNLOAD_PKG_LAST_BUF; > - > - status =3D ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > - if (status =3D=3D ICE_ERR_AQ_ERROR) { > - /* Read error from buffer only when the FW returned an > error */ > - struct ice_aqc_download_pkg_resp *resp; > - > - resp =3D (struct ice_aqc_download_pkg_resp *)pkg_buf; > - if (error_offset) > - *error_offset =3D LE32_TO_CPU(resp->error_offset); > - if (error_info) > - *error_info =3D LE32_TO_CPU(resp->error_info); > - } > - > - return status; > -} > - > -/** > - * ice_aq_upload_section > - * @hw: pointer to the hardware structure > - * @pkg_buf: the package buffer which will receive the section > - * @buf_size: the size of the package buffer > - * @cd: pointer to command details structure or NULL > - * > - * Upload Section (0x0C41) > - */ > -enum ice_status > -ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, > - u16 buf_size, struct ice_sq_cd *cd) > -{ > - struct ice_aq_desc desc; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section); > - desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > - > - return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > -} > - > -/** > - * ice_aq_update_pkg > - * @hw: pointer to the hardware structure > - * @pkg_buf: the package cmd buffer > - * @buf_size: the size of the package cmd buffer > - * @last_buf: last buffer indicator > - * @error_offset: returns error offset > - * @error_info: returns error information > - * @cd: pointer to command details structure or NULL > - * > 
- * Update Package (0x0C42) > - */ > -static enum ice_status > -ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 > buf_size, > - bool last_buf, u32 *error_offset, u32 *error_info, > - struct ice_sq_cd *cd) > -{ > - struct ice_aqc_download_pkg *cmd; > - struct ice_aq_desc desc; > - enum ice_status status; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - if (error_offset) > - *error_offset =3D 0; > - if (error_info) > - *error_info =3D 0; > - > - cmd =3D &desc.params.download_pkg; > - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_update_pkg); > - desc.flags |=3D CPU_TO_LE16(ICE_AQ_FLAG_RD); > - > - if (last_buf) > - cmd->flags |=3D ICE_AQC_DOWNLOAD_PKG_LAST_BUF; > - > - status =3D ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd); > - if (status =3D=3D ICE_ERR_AQ_ERROR) { > - /* Read error from buffer only when the FW returned an > error */ > - struct ice_aqc_download_pkg_resp *resp; > - > - resp =3D (struct ice_aqc_download_pkg_resp *)pkg_buf; > - if (error_offset) > - *error_offset =3D LE32_TO_CPU(resp->error_offset); > - if (error_info) > - *error_info =3D LE32_TO_CPU(resp->error_info); > - } > - > - return status; > -} > - > -/** > - * ice_find_seg_in_pkg > - * @hw: pointer to the hardware structure > - * @seg_type: the segment type to search for (i.e., SEGMENT_TYPE_CPK) > - * @pkg_hdr: pointer to the package header to be searched > - * > - * This function searches a package file for a particular segment type. = On > - * success it returns a pointer to the segment header, otherwise it will > - * return NULL. > - */ > -static struct ice_generic_seg_hdr * > -ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, > - struct ice_pkg_hdr *pkg_hdr) > -{ > - u32 i; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - ice_debug(hw, ICE_DBG_PKG, "Package format > version: %d.%d.%d.%d\n", > - pkg_hdr->pkg_format_ver.major, pkg_hdr- > >pkg_format_ver.minor, > - pkg_hdr->pkg_format_ver.update, > - pkg_hdr->pkg_format_ver.draft); > - > - /* Search all package segments for the requested segment type */ > - for (i =3D 0; i < LE32_TO_CPU(pkg_hdr->seg_count); i++) { > - struct ice_generic_seg_hdr *seg; > - > - seg =3D (struct ice_generic_seg_hdr *) > - ((u8 *)pkg_hdr + LE32_TO_CPU(pkg_hdr- > >seg_offset[i])); > - > - if (LE32_TO_CPU(seg->seg_type) =3D=3D seg_type) > - return seg; > - } > - > - return NULL; > -} > - > -/** > - * ice_update_pkg_no_lock > - * @hw: pointer to the hardware structure > - * @bufs: pointer to an array of buffers > - * @count: the number of buffers in the array > - */ > -static enum ice_status > -ice_update_pkg_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 > count) > -{ > - enum ice_status status =3D ICE_SUCCESS; > - u32 i; > - > - for (i =3D 0; i < count; i++) { > - struct ice_buf_hdr *bh =3D (struct ice_buf_hdr *)(bufs + i); > - bool last =3D ((i + 1) =3D=3D count); > - u32 offset, info; > - > - status =3D ice_aq_update_pkg(hw, bh, LE16_TO_CPU(bh- > >data_end), > - last, &offset, &info, NULL); > - > - if (status) { > - ice_debug(hw, ICE_DBG_PKG, "Update pkg failed: > err %d off %d inf %d\n", > - status, offset, info); > - break; > - } > - } > - > - return status; > -} > - > -/** > - * ice_update_pkg > - * @hw: pointer to the hardware structure > - * @bufs: pointer to an array of buffers > - * @count: the number of buffers in the array > - * > - * Obtains change lock and updates package. 
> - */ > -enum ice_status > -ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) > -{ > - enum ice_status status; > - > - status =3D ice_acquire_change_lock(hw, ICE_RES_WRITE); > - if (status) > - return status; > - > - status =3D ice_update_pkg_no_lock(hw, bufs, count); > - > - ice_release_change_lock(hw); > - > - return status; > -} > - > -/** > - * ice_dwnld_cfg_bufs > - * @hw: pointer to the hardware structure > - * @bufs: pointer to an array of buffers > - * @count: the number of buffers in the array > - * > - * Obtains global config lock and downloads the package configuration > buffers > - * to the firmware. Metadata buffers are skipped, and the first metadata > buffer > - * found indicates that the rest of the buffers are all metadata buffers= . > - */ > -static enum ice_status > -ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count) > -{ > - enum ice_status status; > - struct ice_buf_hdr *bh; > - u32 offset, info, i; > - > - if (!bufs || !count) > - return ICE_ERR_PARAM; > - > - /* If the first buffer's first section has its metadata bit set > - * then there are no buffers to be downloaded, and the operation is > - * considered a success. > - */ > - bh =3D (struct ice_buf_hdr *)bufs; > - if (LE32_TO_CPU(bh->section_entry[0].type) & ICE_METADATA_BUF) > - return ICE_SUCCESS; > - > - /* reset pkg_dwnld_status in case this function is called in the > - * reset/rebuild flow > - */ > - hw->pkg_dwnld_status =3D ICE_AQ_RC_OK; > - > - status =3D ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE); > - if (status) { > - if (status =3D=3D ICE_ERR_AQ_NO_WORK) > - hw->pkg_dwnld_status =3D ICE_AQ_RC_EEXIST; > - else > - hw->pkg_dwnld_status =3D hw- > >adminq.sq_last_status; > - return status; > - } > - > - for (i =3D 0; i < count; i++) { > - bool last =3D ((i + 1) =3D=3D count); > - > - if (!last) { > - /* check next buffer for metadata flag */ > - bh =3D (struct ice_buf_hdr *)(bufs + i + 1); > - > - /* A set metadata flag in the next buffer will signal > - * that the current buffer will be the last buffer > - * downloaded > - */ > - if (LE16_TO_CPU(bh->section_count)) > - if (LE32_TO_CPU(bh->section_entry[0].type) > & > - ICE_METADATA_BUF) > - last =3D true; > - } > - > - bh =3D (struct ice_buf_hdr *)(bufs + i); > - > - status =3D ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, > last, > - &offset, &info, NULL); > - > - /* Save AQ status from download package */ > - hw->pkg_dwnld_status =3D hw->adminq.sq_last_status; > - if (status) { > - ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: > err %d off %d inf %d\n", > - status, offset, info); > - break; > - } > - > - if (last) > - break; > - } > - > - if (!status) { > - status =3D ice_set_vlan_mode(hw); > - if (status) > - ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN > mode: err %d\n", > - status); > - } > - > - ice_release_global_cfg_lock(hw); > - > - return status; > -} > - > -/** > - * ice_aq_get_pkg_info_list > - * @hw: pointer to the hardware structure > - * @pkg_info: the buffer which will receive the information list > - * @buf_size: the size of the pkg_info information buffer > - * @cd: pointer to command details structure or NULL > - * > - * Get Package Info List (0x0C43) > - */ > -static enum ice_status > -ice_aq_get_pkg_info_list(struct ice_hw *hw, > - struct ice_aqc_get_pkg_info_resp *pkg_info, > - u16 buf_size, struct ice_sq_cd *cd) > -{ > - struct ice_aq_desc desc; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list); > - > - return 
ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd); > -} > - > -/** > - * ice_download_pkg > - * @hw: pointer to the hardware structure > - * @ice_seg: pointer to the segment of the package to be downloaded > - * > - * Handles the download of a complete package. > - */ > -static enum ice_status > -ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) > -{ > - struct ice_buf_table *ice_buf_tbl; > - enum ice_status status; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - ice_debug(hw, ICE_DBG_PKG, "Segment format > version: %d.%d.%d.%d\n", > - ice_seg->hdr.seg_format_ver.major, > - ice_seg->hdr.seg_format_ver.minor, > - ice_seg->hdr.seg_format_ver.update, > - ice_seg->hdr.seg_format_ver.draft); > - > - ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n", > - LE32_TO_CPU(ice_seg->hdr.seg_type), > - LE32_TO_CPU(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id); > - > - ice_buf_tbl =3D ice_find_buf_table(ice_seg); > - > - ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n", > - LE32_TO_CPU(ice_buf_tbl->buf_count)); > - > - status =3D ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array, > - LE32_TO_CPU(ice_buf_tbl->buf_count)); > - > - ice_post_pkg_dwnld_vlan_mode_cfg(hw); > - > - return status; > -} > - > -/** > - * ice_init_pkg_info > - * @hw: pointer to the hardware structure > - * @pkg_hdr: pointer to the driver's package hdr > - * > - * Saves off the package details into the HW structure. > - */ > -static enum ice_status > -ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) > -{ > - struct ice_generic_seg_hdr *seg_hdr; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - if (!pkg_hdr) > - return ICE_ERR_PARAM; > - > - hw->pkg_seg_id =3D SEGMENT_TYPE_ICE_E810; > - > - ice_debug(hw, ICE_DBG_INIT, "Pkg using segment id: 0x%08X\n", > - hw->pkg_seg_id); > - > - seg_hdr =3D (struct ice_generic_seg_hdr *) > - ice_find_seg_in_pkg(hw, hw->pkg_seg_id, pkg_hdr); > - if (seg_hdr) { > - struct ice_meta_sect *meta; > - struct ice_pkg_enum state; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - /* Get package information from the Metadata Section */ > - meta =3D (struct ice_meta_sect *) > - ice_pkg_enum_section((struct ice_seg *)seg_hdr, > &state, > - ICE_SID_METADATA); > - if (!meta) { > - ice_debug(hw, ICE_DBG_INIT, "Did not find ice > metadata section in package\n"); > - return ICE_ERR_CFG; > - } > - > - hw->pkg_ver =3D meta->ver; > - ice_memcpy(hw->pkg_name, meta->name, sizeof(meta- > >name), > - ICE_NONDMA_TO_NONDMA); > - > - ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n", > - meta->ver.major, meta->ver.minor, meta- > >ver.update, > - meta->ver.draft, meta->name); > - > - hw->ice_seg_fmt_ver =3D seg_hdr->seg_format_ver; > - ice_memcpy(hw->ice_seg_id, seg_hdr->seg_id, > - sizeof(hw->ice_seg_id), > ICE_NONDMA_TO_NONDMA); > - > - ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n", > - seg_hdr->seg_format_ver.major, > - seg_hdr->seg_format_ver.minor, > - seg_hdr->seg_format_ver.update, > - seg_hdr->seg_format_ver.draft, > - seg_hdr->seg_id); > - } else { > - ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in > driver package\n"); > - return ICE_ERR_CFG; > - } > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_get_pkg_info > - * @hw: pointer to the hardware structure > - * > - * Store details of the package currently loaded in HW into the HW > structure. 
> - */ > -static enum ice_status ice_get_pkg_info(struct ice_hw *hw) > -{ > - struct ice_aqc_get_pkg_info_resp *pkg_info; > - enum ice_status status; > - u16 size; > - u32 i; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - size =3D ice_struct_size(pkg_info, pkg_info, ICE_PKG_CNT); > - pkg_info =3D (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size); > - if (!pkg_info) > - return ICE_ERR_NO_MEMORY; > - > - status =3D ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL); > - if (status) > - goto init_pkg_free_alloc; > - > - for (i =3D 0; i < LE32_TO_CPU(pkg_info->count); i++) { > -#define ICE_PKG_FLAG_COUNT 4 > - char flags[ICE_PKG_FLAG_COUNT + 1] =3D { 0 }; > - u8 place =3D 0; > - > - if (pkg_info->pkg_info[i].is_active) { > - flags[place++] =3D 'A'; > - hw->active_pkg_ver =3D pkg_info->pkg_info[i].ver; > - hw->active_track_id =3D > - LE32_TO_CPU(pkg_info->pkg_info[i].track_id); > - ice_memcpy(hw->active_pkg_name, > - pkg_info->pkg_info[i].name, > - sizeof(pkg_info->pkg_info[i].name), > - ICE_NONDMA_TO_NONDMA); > - hw->active_pkg_in_nvm =3D pkg_info- > >pkg_info[i].is_in_nvm; > - } > - if (pkg_info->pkg_info[i].is_active_at_boot) > - flags[place++] =3D 'B'; > - if (pkg_info->pkg_info[i].is_modified) > - flags[place++] =3D 'M'; > - if (pkg_info->pkg_info[i].is_in_nvm) > - flags[place++] =3D 'N'; > - > - ice_debug(hw, ICE_DBG_PKG, > "Pkg[%d]: %d.%d.%d.%d,%s,%s\n", > - i, pkg_info->pkg_info[i].ver.major, > - pkg_info->pkg_info[i].ver.minor, > - pkg_info->pkg_info[i].ver.update, > - pkg_info->pkg_info[i].ver.draft, > - pkg_info->pkg_info[i].name, flags); > - } > - > -init_pkg_free_alloc: > - ice_free(hw, pkg_info); > - > - return status; > -} > - > -/** > - * ice_verify_pkg - verify package > - * @pkg: pointer to the package buffer > - * @len: size of the package buffer > - * > - * Verifies various attributes of the package file, including length, fo= rmat > - * version, and the requirement of at least one segment. > - */ > -static enum ice_status ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len) > -{ > - u32 seg_count; > - u32 i; > - > - if (len < ice_struct_size(pkg, seg_offset, 1)) > - return ICE_ERR_BUF_TOO_SHORT; > - > - if (pkg->pkg_format_ver.major !=3D ICE_PKG_FMT_VER_MAJ || > - pkg->pkg_format_ver.minor !=3D ICE_PKG_FMT_VER_MNR || > - pkg->pkg_format_ver.update !=3D ICE_PKG_FMT_VER_UPD || > - pkg->pkg_format_ver.draft !=3D ICE_PKG_FMT_VER_DFT) > - return ICE_ERR_CFG; > - > - /* pkg must have at least one segment */ > - seg_count =3D LE32_TO_CPU(pkg->seg_count); > - if (seg_count < 1) > - return ICE_ERR_CFG; > - > - /* make sure segment array fits in package length */ > - if (len < ice_struct_size(pkg, seg_offset, seg_count)) > - return ICE_ERR_BUF_TOO_SHORT; > - > - /* all segments must fit within length */ > - for (i =3D 0; i < seg_count; i++) { > - u32 off =3D LE32_TO_CPU(pkg->seg_offset[i]); > - struct ice_generic_seg_hdr *seg; > - > - /* segment header must fit */ > - if (len < off + sizeof(*seg)) > - return ICE_ERR_BUF_TOO_SHORT; > - > - seg =3D (struct ice_generic_seg_hdr *)((u8 *)pkg + off); > - > - /* segment body must fit */ > - if (len < off + LE32_TO_CPU(seg->seg_size)) > - return ICE_ERR_BUF_TOO_SHORT; > - } > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_free_seg - free package segment pointer > - * @hw: pointer to the hardware structure > - * > - * Frees the package segment pointer in the proper manner, depending on > if the > - * segment was allocated or just the passed in pointer was stored. 
> - */ > -void ice_free_seg(struct ice_hw *hw) > -{ > - if (hw->pkg_copy) { > - ice_free(hw, hw->pkg_copy); > - hw->pkg_copy =3D NULL; > - hw->pkg_size =3D 0; > - } > - hw->seg =3D NULL; > -} > - > -/** > - * ice_init_pkg_regs - initialize additional package registers > - * @hw: pointer to the hardware structure > - */ > -static void ice_init_pkg_regs(struct ice_hw *hw) > -{ > -#define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF > -#define ICE_SW_BLK_INP_MASK_H 0x0000FFFF > -#define ICE_SW_BLK_IDX 0 > - if (hw->dcf_enabled) > - return; > - > - /* setup Switch block input mask, which is 48-bits in two parts */ > - wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), > ICE_SW_BLK_INP_MASK_L); > - wr32(hw, GL_PREEXT_L2_PMASK1(ICE_SW_BLK_IDX), > ICE_SW_BLK_INP_MASK_H); > -} > - > -/** > - * ice_chk_pkg_version - check package version for compatibility with dr= iver > - * @pkg_ver: pointer to a version structure to check > - * > - * Check to make sure that the package about to be downloaded is > compatible with > - * the driver. To be compatible, the major and minor components of the > package > - * version must match our ICE_PKG_SUPP_VER_MAJ and > ICE_PKG_SUPP_VER_MNR > - * definitions. > - */ > -static enum ice_status ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver) > -{ > - if (pkg_ver->major !=3D ICE_PKG_SUPP_VER_MAJ || > - pkg_ver->minor !=3D ICE_PKG_SUPP_VER_MNR) > - return ICE_ERR_NOT_SUPPORTED; > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_chk_pkg_compat > - * @hw: pointer to the hardware structure > - * @ospkg: pointer to the package hdr > - * @seg: pointer to the package segment hdr > - * > - * This function checks the package version compatibility with driver an= d > NVM > - */ > -static enum ice_status > -ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg, > - struct ice_seg **seg) > -{ > - struct ice_aqc_get_pkg_info_resp *pkg; > - enum ice_status status; > - u16 size; > - u32 i; > - > - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); > - > - /* Check package version compatibility */ > - status =3D ice_chk_pkg_version(&hw->pkg_ver); > - if (status) { > - ice_debug(hw, ICE_DBG_INIT, "Package version check > failed.\n"); > - return status; > - } > - > - /* find ICE segment in given package */ > - *seg =3D (struct ice_seg *)ice_find_seg_in_pkg(hw, hw->pkg_seg_id, > - ospkg); > - if (!*seg) { > - ice_debug(hw, ICE_DBG_INIT, "no ice segment in > package.\n"); > - return ICE_ERR_CFG; > - } > - > - /* Check if FW is compatible with the OS package */ > - size =3D ice_struct_size(pkg, pkg_info, ICE_PKG_CNT); > - pkg =3D (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size); > - if (!pkg) > - return ICE_ERR_NO_MEMORY; > - > - status =3D ice_aq_get_pkg_info_list(hw, pkg, size, NULL); > - if (status) > - goto fw_ddp_compat_free_alloc; > - > - for (i =3D 0; i < LE32_TO_CPU(pkg->count); i++) { > - /* loop till we find the NVM package */ > - if (!pkg->pkg_info[i].is_in_nvm) > - continue; > - if ((*seg)->hdr.seg_format_ver.major !=3D > - pkg->pkg_info[i].ver.major || > - (*seg)->hdr.seg_format_ver.minor > > - pkg->pkg_info[i].ver.minor) { > - status =3D ICE_ERR_FW_DDP_MISMATCH; > - ice_debug(hw, ICE_DBG_INIT, "OS package is not > compatible with NVM.\n"); > - } > - /* done processing NVM package so break */ > - break; > - } > -fw_ddp_compat_free_alloc: > - ice_free(hw, pkg); > - return status; > -} > - > -/** > - * ice_sw_fv_handler > - * @sect_type: section type > - * @section: pointer to section > - * @index: index of the field vector entry to be returned > - * @offset: ptr to variable 
that receives the offset in the field vector= table > - * > - * This is a callback function that can be passed to ice_pkg_enum_entry. > - * This function treats the given section as of type ice_sw_fv_section a= nd > - * enumerates offset field. "offset" is an index into the field vector t= able. > - */ > -static void * > -ice_sw_fv_handler(u32 sect_type, void *section, u32 index, u32 *offset) > -{ > - struct ice_sw_fv_section *fv_section =3D > - (struct ice_sw_fv_section *)section; > - > - if (!section || sect_type !=3D ICE_SID_FLD_VEC_SW) > - return NULL; > - if (index >=3D LE16_TO_CPU(fv_section->count)) > - return NULL; > - if (offset) > - /* "index" passed in to this function is relative to a given > - * 4k block. To get to the true index into the field vector > - * table need to add the relative index to the base_offset > - * field of this section > - */ > - *offset =3D LE16_TO_CPU(fv_section->base_offset) + index; > - return fv_section->fv + index; > -} > - > -/** > - * ice_get_prof_index_max - get the max profile index for used profile > - * @hw: pointer to the HW struct > - * > - * Calling this function will get the max profile index for used profile > - * and store the index number in struct ice_switch_info *switch_info > - * in hw for following use. > - */ > -static int ice_get_prof_index_max(struct ice_hw *hw) > -{ > - u16 prof_index =3D 0, j, max_prof_index =3D 0; > - struct ice_pkg_enum state; > - struct ice_seg *ice_seg; > - bool flag =3D false; > - struct ice_fv *fv; > - u32 offset; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - if (!hw->seg) > - return ICE_ERR_PARAM; > - > - ice_seg =3D hw->seg; > - > - do { > - fv =3D (struct ice_fv *) > - ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > - &offset, ice_sw_fv_handler); > - if (!fv) > - break; > - ice_seg =3D NULL; > - > - /* in the profile that not be used, the prot_id is set to 0xff > - * and the off is set to 0x1ff for all the field vectors. > - */ > - for (j =3D 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) > - if (fv->ew[j].prot_id !=3D ICE_PROT_INVALID || > - fv->ew[j].off !=3D ICE_FV_OFFSET_INVAL) > - flag =3D true; > - if (flag && prof_index > max_prof_index) > - max_prof_index =3D prof_index; > - > - prof_index++; > - flag =3D false; > - } while (fv); > - > - hw->switch_info->max_used_prof_index =3D max_prof_index; > - > - return ICE_SUCCESS; > -} > - > -/** > - * ice_init_pkg - initialize/download package > - * @hw: pointer to the hardware structure > - * @buf: pointer to the package buffer > - * @len: size of the package buffer > - * > - * This function initializes a package. The package contains HW tables > - * required to do packet processing. First, the function extracts packag= e > - * information such as version. Then it finds the ice configuration segm= ent > - * within the package; this function then saves a copy of the segment > pointer > - * within the supplied package buffer. Next, the function will cache any > hints > - * from the package, followed by downloading the package itself. Note, > that if > - * a previous PF driver has already downloaded the package successfully, > then > - * the current driver will not have to download the package again. > - * > - * The local package contents will be used to query default behavior and= to > - * update specific sections of the HW's version of the package (e.g. to > update > - * the parse graph to understand new protocols). 
> - * > - * This function stores a pointer to the package buffer memory, and it i= s > - * expected that the supplied buffer will not be freed immediately. If t= he > - * package buffer needs to be freed, such as when read from a file, use > - * ice_copy_and_init_pkg() instead of directly calling ice_init_pkg() in= this > - * case. > - */ > -enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) > -{ > - struct ice_pkg_hdr *pkg; > - enum ice_status status; > - struct ice_seg *seg; > - > - if (!buf || !len) > - return ICE_ERR_PARAM; > - > - pkg =3D (struct ice_pkg_hdr *)buf; > - status =3D ice_verify_pkg(pkg, len); > - if (status) { > - ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg > (err: %d)\n", > - status); > - return status; > - } > - > - /* initialize package info */ > - status =3D ice_init_pkg_info(hw, pkg); > - if (status) > - return status; > - > - /* before downloading the package, check package version for > - * compatibility with driver > - */ > - status =3D ice_chk_pkg_compat(hw, pkg, &seg); > - if (status) > - return status; > - > - /* initialize package hints and then download package */ > - ice_init_pkg_hints(hw, seg); > - status =3D ice_download_pkg(hw, seg); > - if (status =3D=3D ICE_ERR_AQ_NO_WORK) { > - ice_debug(hw, ICE_DBG_INIT, "package previously loaded - > no work.\n"); > - status =3D ICE_SUCCESS; > - } > - > - /* Get information on the package currently loaded in HW, then > make sure > - * the driver is compatible with this version. > - */ > - if (!status) { > - status =3D ice_get_pkg_info(hw); > - if (!status) > - status =3D ice_chk_pkg_version(&hw->active_pkg_ver); > - } > - > - if (!status) { > - hw->seg =3D seg; > - /* on successful package download update other required > - * registers to support the package and fill HW tables > - * with package content. > - */ > - ice_init_pkg_regs(hw); > - ice_fill_blk_tbls(hw); > - ice_fill_hw_ptype(hw); > - ice_get_prof_index_max(hw); > - } else { > - ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n", > - status); > - } > - > - return status; > -} > - > -/** > - * ice_copy_and_init_pkg - initialize/download a copy of the package > - * @hw: pointer to the hardware structure > - * @buf: pointer to the package buffer > - * @len: size of the package buffer > - * > - * This function copies the package buffer, and then calls ice_init_pkg(= ) to > - * initialize the copied package contents. > - * > - * The copying is necessary if the package buffer supplied is constant, = or if > - * the memory may disappear shortly after calling this function. > - * > - * If the package buffer resides in the data segment and can be modified= , > the > - * caller is free to use ice_init_pkg() instead of ice_copy_and_init_pkg= (). > - * > - * However, if the package buffer needs to be copied first, such as when > being > - * read from a file, the caller should use ice_copy_and_init_pkg(). > - * > - * This function will first copy the package buffer, before calling > - * ice_init_pkg(). The caller is free to immediately destroy the origina= l > - * package buffer, as the new copy will be managed by this function and > - * related routines. 
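
One usage note on the pair of init routines above, since the distinction only
matters for how long the caller's buffer stays valid: ice_init_pkg() keeps a
pointer into the supplied buffer, while ice_copy_and_init_pkg() duplicates it
first. A hedged sketch of the file-backed path (buf/len and their allocation
are hypothetical):

    enum ice_status status;

    /* buf/len hold a DDP image just read from a file; that buffer is
     * released right after this call, so the copying variant is used.
     */
    status = ice_copy_and_init_pkg(hw, buf, len);

    /* The original buffer may be freed immediately regardless of status;
     * on success the internal copy is tracked in hw->pkg_copy and later
     * released through ice_free_seg().
     */
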
> - */ > -enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, > u32 len) > -{ > - enum ice_status status; > - u8 *buf_copy; > - > - if (!buf || !len) > - return ICE_ERR_PARAM; > - > - buf_copy =3D (u8 *)ice_memdup(hw, buf, len, > ICE_NONDMA_TO_NONDMA); > - > - status =3D ice_init_pkg(hw, buf_copy, len); > - if (status) { > - /* Free the copy, since we failed to initialize the package */ > - ice_free(hw, buf_copy); > - } else { > - /* Track the copied pkg so we can free it later */ > - hw->pkg_copy =3D buf_copy; > - hw->pkg_size =3D len; > - } > - > - return status; > -} > - > -/** > - * ice_pkg_buf_alloc > - * @hw: pointer to the HW structure > - * > - * Allocates a package buffer and returns a pointer to the buffer header= . > - * Note: all package contents must be in Little Endian form. > - */ > -static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) > -{ > - struct ice_buf_build *bld; > - struct ice_buf_hdr *buf; > - > - bld =3D (struct ice_buf_build *)ice_malloc(hw, sizeof(*bld)); > - if (!bld) > - return NULL; > - > - buf =3D (struct ice_buf_hdr *)bld; > - buf->data_end =3D CPU_TO_LE16(offsetof(struct ice_buf_hdr, > - section_entry)); > - return bld; > -} > - > -/** > - * ice_get_sw_prof_type - determine switch profile type > - * @hw: pointer to the HW structure > - * @fv: pointer to the switch field vector > - */ > -static enum ice_prof_type > -ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv) > -{ > - u16 i; > - bool valid_prof =3D false; > - > - for (i =3D 0; i < hw->blk[ICE_BLK_SW].es.fvw; i++) { > - if (fv->ew[i].off !=3D ICE_NAN_OFFSET) > - valid_prof =3D true; > - > - /* UDP tunnel will have UDP_OF protocol ID and VNI offset > */ > - if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_UDP_OF && > - fv->ew[i].off =3D=3D ICE_VNI_OFFSET) > - return ICE_PROF_TUN_UDP; > - > - /* GRE tunnel will have GRE protocol */ > - if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_GRE_OF) > - return ICE_PROF_TUN_GRE; > - > - /* PPPOE tunnel will have PPPOE protocol */ > - if (fv->ew[i].prot_id =3D=3D (u8)ICE_PROT_PPPOE) > - return ICE_PROF_TUN_PPPOE; > - } > - > - return valid_prof ? 
ICE_PROF_NON_TUN : ICE_PROF_INVALID; > -} > - > -/** > - * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profil= e > type > - * @hw: pointer to hardware structure > - * @req_profs: type of profiles requested > - * @bm: pointer to memory for returning the bitmap of field vectors > - */ > -void > -ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, > - ice_bitmap_t *bm) > -{ > - struct ice_pkg_enum state; > - struct ice_seg *ice_seg; > - struct ice_fv *fv; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - ice_zero_bitmap(bm, ICE_MAX_NUM_PROFILES); > - ice_seg =3D hw->seg; > - do { > - enum ice_prof_type prof_type; > - u32 offset; > - > - fv =3D (struct ice_fv *) > - ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > - &offset, ice_sw_fv_handler); > - ice_seg =3D NULL; > - > - if (fv) { > - /* Determine field vector type */ > - prof_type =3D ice_get_sw_prof_type(hw, fv); > - > - if (req_profs & prof_type) > - ice_set_bit((u16)offset, bm); > - } > - } while (fv); > -} > - > -/** > - * ice_get_sw_fv_list > + * ice_add_tunnel_hint > * @hw: pointer to the HW structure > - * @prot_ids: field vector to search for with a given protocol ID > - * @ids_cnt: lookup/protocol count > - * @bm: bitmap of field vectors to consider > - * @fv_list: Head of a list > - * > - * Finds all the field vector entries from switch block that contain > - * a given protocol ID and returns a list of structures of type > - * "ice_sw_fv_list_entry". Every structure in the list has a field vecto= r > - * definition and profile ID information > - * NOTE: The caller of the function is responsible for freeing the memor= y > - * allocated for every list entry. > + * @label_name: label text > + * @val: value of the tunnel port boost entry > */ > -enum ice_status > -ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, > - ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) > +void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val) > { > - struct ice_sw_fv_list_entry *fvl; > - struct ice_sw_fv_list_entry *tmp; > - struct ice_pkg_enum state; > - struct ice_seg *ice_seg; > - struct ice_fv *fv; > - u32 offset; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - if (!ids_cnt || !hw->seg) > - return ICE_ERR_PARAM; > - > - ice_seg =3D hw->seg; > - do { > + if (hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) { > u16 i; >=20 > - fv =3D (struct ice_fv *) > - ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > - &offset, ice_sw_fv_handler); > - if (!fv) > - break; > - ice_seg =3D NULL; > - > - /* If field vector is not in the bitmap list, then skip this > - * profile. > - */ > - if (!ice_is_bit_set(bm, (u16)offset)) > - continue; > + for (i =3D 0; tnls[i].type !=3D TNL_LAST; i++) { > + size_t len =3D strlen(tnls[i].label_prefix); >=20 > - for (i =3D 0; i < ids_cnt; i++) { > - int j; > + /* Look for matching label start, before continuing > */ > + if (strncmp(label_name, tnls[i].label_prefix, len)) > + continue; >=20 > - /* This code assumes that if a switch field vector line > - * has a matching protocol, then this line will > contain > - * the entries necessary to represent every field in > - * that protocol header. > + /* Make sure this label matches our PF. Note that > the PF > + * character ('0' - '7') will be located where our > + * prefix string's null terminator is located. 
> */ > - for (j =3D 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++) > - if (fv->ew[j].prot_id =3D=3D prot_ids[i]) > - break; > - if (j >=3D hw->blk[ICE_BLK_SW].es.fvw) > - break; > - if (i + 1 =3D=3D ids_cnt) { > - fvl =3D (struct ice_sw_fv_list_entry *) > - ice_malloc(hw, sizeof(*fvl)); > - if (!fvl) > - goto err; > - fvl->fv_ptr =3D fv; > - fvl->profile_id =3D offset; > - LIST_ADD(&fvl->list_entry, fv_list); > + if ((label_name[len] - '0') =3D=3D hw->pf_id) { > + hw->tnl.tbl[hw->tnl.count].type =3D tnls[i].type; > + hw->tnl.tbl[hw->tnl.count].valid =3D false; > + hw->tnl.tbl[hw->tnl.count].in_use =3D false; > + hw->tnl.tbl[hw->tnl.count].marked =3D false; > + hw->tnl.tbl[hw->tnl.count].boost_addr =3D val; > + hw->tnl.tbl[hw->tnl.count].port =3D 0; > + hw->tnl.count++; > break; > } > } > - } while (fv); > - if (LIST_EMPTY(fv_list)) > - return ICE_ERR_CFG; > - return ICE_SUCCESS; > - > -err: > - LIST_FOR_EACH_ENTRY_SAFE(fvl, tmp, fv_list, ice_sw_fv_list_entry, > - list_entry) { > - LIST_DEL(&fvl->list_entry); > - ice_free(hw, fvl); > } > - > - return ICE_ERR_NO_MEMORY; > } >=20 > /** > - * ice_init_prof_result_bm - Initialize the profile result index bitmap > - * @hw: pointer to hardware structure > + * ice_add_dvm_hint > + * @hw: pointer to the HW structure > + * @val: value of the boost entry > + * @enable: true if entry needs to be enabled, or false if needs to be > disabled > */ > -void ice_init_prof_result_bm(struct ice_hw *hw) > +void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable) > { > - struct ice_pkg_enum state; > - struct ice_seg *ice_seg; > - struct ice_fv *fv; > - > - ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM); > - > - if (!hw->seg) > - return; > - > - ice_seg =3D hw->seg; > - do { > - u32 off; > - u16 i; > - > - fv =3D (struct ice_fv *) > - ice_pkg_enum_entry(ice_seg, &state, > ICE_SID_FLD_VEC_SW, > - &off, ice_sw_fv_handler); > - ice_seg =3D NULL; > - if (!fv) > - break; > + if (hw->dvm_upd.count < ICE_DVM_MAX_ENTRIES) { > + hw->dvm_upd.tbl[hw->dvm_upd.count].boost_addr =3D val; > + hw->dvm_upd.tbl[hw->dvm_upd.count].enable =3D enable; > + hw->dvm_upd.count++; > + } > +} >=20 > - ice_zero_bitmap(hw->switch_info->prof_res_bm[off], > - ICE_MAX_FV_WORDS); > +/* Key creation */ >=20 > - /* Determine empty field vector indices, these can be > - * used for recipe results. Skip index 0, since it is > - * always used for Switch ID. 
> - */ > - for (i =3D 1; i < ICE_MAX_FV_WORDS; i++) > - if (fv->ew[i].prot_id =3D=3D ICE_PROT_INVALID && > - fv->ew[i].off =3D=3D ICE_FV_OFFSET_INVAL) > - ice_set_bit(i, > - hw->switch_info- > >prof_res_bm[off]); > - } while (fv); > -} > +#define ICE_DC_KEY 0x1 /* don't care */ > +#define ICE_DC_KEYINV 0x1 > +#define ICE_NM_KEY 0x0 /* never match */ > +#define ICE_NM_KEYINV 0x0 > +#define ICE_0_KEY 0x1 /* match 0 */ > +#define ICE_0_KEYINV 0x0 > +#define ICE_1_KEY 0x0 /* match 1 */ > +#define ICE_1_KEYINV 0x1 >=20 > /** > - * ice_pkg_buf_free > - * @hw: pointer to the HW structure > - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > + * ice_gen_key_word - generate 16-bits of a key/mask word > + * @val: the value > + * @valid: valid bits mask (change only the valid bits) > + * @dont_care: don't care mask > + * @nvr_mtch: never match mask > + * @key: pointer to an array of where the resulting key portion > + * @key_inv: pointer to an array of where the resulting key invert porti= on > * > - * Frees a package buffer > - */ > -void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld) > -{ > - ice_free(hw, bld); > -} > - > -/** > - * ice_pkg_buf_reserve_section > - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > - * @count: the number of sections to reserve > + * This function generates 16-bits from a 8-bit value, an 8-bit don't ca= re > mask > + * and an 8-bit never match mask. The 16-bits of output are divided into= 8 > bits > + * of key and 8 bits of key invert. > + * > + * '0' =3D b01, always match a 0 bit > + * '1' =3D b10, always match a 1 bit > + * '?' =3D b11, don't care bit (always matches) > + * '~' =3D b00, never match bit > * > - * Reserves one or more section table entries in a package buffer. This > routine > - * can be called multiple times as long as they are made before calling > - * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section() > - * is called once, the number of sections that can be allocated will not= be > able > - * to be increased; not using all reserved sections is fine, but this wi= ll > - * result in some wasted space in the buffer. > - * Note: all package contents must be in Little Endian form. 
> + * Input: > + * val: b0 1 0 1 0 1 > + * dont_care: b0 0 1 1 0 0 > + * never_mtch: b0 0 0 0 1 1 > + * ------------------------------ > + * Result: key: b01 10 11 11 00 00 > */ > static enum ice_status > -ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count) > +ice_gen_key_word(u8 val, u8 valid, u8 dont_care, u8 nvr_mtch, u8 *key, > + u8 *key_inv) > { > - struct ice_buf_hdr *buf; > - u16 section_count; > - u16 data_end; > + u8 in_key =3D *key, in_key_inv =3D *key_inv; > + u8 i; >=20 > - if (!bld) > - return ICE_ERR_PARAM; > + /* 'dont_care' and 'nvr_mtch' masks cannot overlap */ > + if ((dont_care ^ nvr_mtch) !=3D (dont_care | nvr_mtch)) > + return ICE_ERR_CFG; >=20 > - buf =3D (struct ice_buf_hdr *)&bld->buf; > + *key =3D 0; > + *key_inv =3D 0; >=20 > - /* already an active section, can't increase table size */ > - section_count =3D LE16_TO_CPU(buf->section_count); > - if (section_count > 0) > - return ICE_ERR_CFG; > + /* encode the 8 bits into 8-bit key and 8-bit key invert */ > + for (i =3D 0; i < 8; i++) { > + *key >>=3D 1; > + *key_inv >>=3D 1; >=20 > - if (bld->reserved_section_table_entries + count > > ICE_MAX_S_COUNT) > - return ICE_ERR_CFG; > - bld->reserved_section_table_entries +=3D count; > + if (!(valid & 0x1)) { /* change only valid bits */ > + *key |=3D (in_key & 0x1) << 7; > + *key_inv |=3D (in_key_inv & 0x1) << 7; > + } else if (dont_care & 0x1) { /* don't care bit */ > + *key |=3D ICE_DC_KEY << 7; > + *key_inv |=3D ICE_DC_KEYINV << 7; > + } else if (nvr_mtch & 0x1) { /* never match bit */ > + *key |=3D ICE_NM_KEY << 7; > + *key_inv |=3D ICE_NM_KEYINV << 7; > + } else if (val & 0x01) { /* exact 1 match */ > + *key |=3D ICE_1_KEY << 7; > + *key_inv |=3D ICE_1_KEYINV << 7; > + } else { /* exact 0 match */ > + *key |=3D ICE_0_KEY << 7; > + *key_inv |=3D ICE_0_KEYINV << 7; > + } >=20 > - data_end =3D LE16_TO_CPU(buf->data_end) + > - FLEX_ARRAY_SIZE(buf, section_entry, count); > - buf->data_end =3D CPU_TO_LE16(data_end); > + dont_care >>=3D 1; > + nvr_mtch >>=3D 1; > + valid >>=3D 1; > + val >>=3D 1; > + in_key >>=3D 1; > + in_key_inv >>=3D 1; > + } >=20 > return ICE_SUCCESS; > } >=20 > /** > - * ice_pkg_buf_alloc_section > - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > - * @type: the section type value > - * @size: the size of the section to reserve (in bytes) > + * ice_bits_max_set - determine if the number of bits set is within a > maximum > + * @mask: pointer to the byte array which is the mask > + * @size: the number of bytes in the mask > + * @max: the max number of set bits > * > - * Reserves memory in the buffer for a section's content and updates the > - * buffers' status accordingly. This routine returns a pointer to the fi= rst > - * byte of the section start within the buffer, which is used to fill in= the > - * section contents. > - * Note: all package contents must be in Little Endian form. > + * This function determines if there are at most 'max' number of bits se= t in > an > + * array. Returns true if the number for bits set is <=3D max or will re= turn > false > + * otherwise. 
> */ > -static void * > -ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size) > +static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max) > { > - struct ice_buf_hdr *buf; > - u16 sect_count; > - u16 data_end; > - > - if (!bld || !type || !size) > - return NULL; > - > - buf =3D (struct ice_buf_hdr *)&bld->buf; > - > - /* check for enough space left in buffer */ > - data_end =3D LE16_TO_CPU(buf->data_end); > - > - /* section start must align on 4 byte boundary */ > - data_end =3D ICE_ALIGN(data_end, 4); > - > - if ((data_end + size) > ICE_MAX_S_DATA_END) > - return NULL; > - > - /* check for more available section table entries */ > - sect_count =3D LE16_TO_CPU(buf->section_count); > - if (sect_count < bld->reserved_section_table_entries) { > - void *section_ptr =3D ((u8 *)buf) + data_end; > + u16 count =3D 0; > + u16 i; >=20 > - buf->section_entry[sect_count].offset =3D > CPU_TO_LE16(data_end); > - buf->section_entry[sect_count].size =3D CPU_TO_LE16(size); > - buf->section_entry[sect_count].type =3D CPU_TO_LE32(type); > + /* check each byte */ > + for (i =3D 0; i < size; i++) { > + /* if 0, go to next byte */ > + if (!mask[i]) > + continue; >=20 > - data_end +=3D size; > - buf->data_end =3D CPU_TO_LE16(data_end); > + /* We know there is at least one set bit in this byte because > of > + * the above check; if we already have found 'max' number > of > + * bits set, then we can return failure now. > + */ > + if (count =3D=3D max) > + return false; >=20 > - buf->section_count =3D CPU_TO_LE16(sect_count + 1); > - return section_ptr; > + /* count the bits in this byte, checking threshold */ > + count +=3D ice_hweight8(mask[i]); > + if (count > max) > + return false; > } >=20 > - /* no free section table entries */ > - return NULL; > + return true; > } >=20 > /** > - * ice_pkg_buf_alloc_single_section > - * @hw: pointer to the HW structure > - * @type: the section type value > - * @size: the size of the section to reserve (in bytes) > - * @section: returns pointer to the section > + * ice_set_key - generate a variable sized key with multiples of 16-bits > + * @key: pointer to where the key will be stored > + * @size: the size of the complete key in bytes (must be even) > + * @val: array of 8-bit values that makes up the value portion of the ke= y > + * @upd: array of 8-bit masks that determine what key portion to update > + * @dc: array of 8-bit masks that make up the don't care mask > + * @nm: array of 8-bit masks that make up the never match mask > + * @off: the offset of the first byte in the key to update > + * @len: the number of bytes in the key update > * > - * Allocates a package buffer with a single section. > - * Note: all package contents must be in Little Endian form. > + * This function generates a key from a value, a don't care mask and a > never > + * match mask. 
> + * upd, dc, and nm are optional parameters, and can be NULL: > + * upd =3D=3D NULL --> upd mask is all 1's (update all bits) > + * dc =3D=3D NULL --> dc mask is all 0's (no don't care bits) > + * nm =3D=3D NULL --> nm mask is all 0's (no never match bits) > */ > -struct ice_buf_build * > -ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, > - void **section) > +enum ice_status > +ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off= , > + u16 len) > { > - struct ice_buf_build *buf; > - > - if (!section) > - return NULL; > - > - buf =3D ice_pkg_buf_alloc(hw); > - if (!buf) > - return NULL; > - > - if (ice_pkg_buf_reserve_section(buf, 1)) > - goto ice_pkg_buf_alloc_single_section_err; > - > - *section =3D ice_pkg_buf_alloc_section(buf, type, size); > - if (!*section) > - goto ice_pkg_buf_alloc_single_section_err; > - > - return buf; > - > -ice_pkg_buf_alloc_single_section_err: > - ice_pkg_buf_free(hw, buf); > - return NULL; > -} > + u16 half_size; > + u16 i; >=20 > -/** > - * ice_pkg_buf_get_active_sections > - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > - * > - * Returns the number of active sections. Before using the package buffe= r > - * in an update package command, the caller should make sure that there > is at > - * least one active section - otherwise, the buffer is not legal and sho= uld > - * not be used. > - * Note: all package contents must be in Little Endian form. > - */ > -static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) > -{ > - struct ice_buf_hdr *buf; > + /* size must be a multiple of 2 bytes. */ > + if (size % 2) > + return ICE_ERR_CFG; > + half_size =3D size / 2; >=20 > - if (!bld) > - return 0; > + if (off + len > half_size) > + return ICE_ERR_CFG; >=20 > - buf =3D (struct ice_buf_hdr *)&bld->buf; > - return LE16_TO_CPU(buf->section_count); > -} > + /* Make sure at most one bit is set in the never match mask. Having > more > + * than one never match mask bit set will cause HW to consume > excessive > + * power otherwise; this is a power management efficiency check. > + */ > +#define ICE_NVR_MTCH_BITS_MAX 1 > + if (nm && !ice_bits_max_set(nm, len, ICE_NVR_MTCH_BITS_MAX)) > + return ICE_ERR_CFG; >=20 > -/** > - * ice_pkg_buf > - * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) > - * > - * Return a pointer to the buffer's header > - */ > -struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) > -{ > - if (!bld) > - return NULL; > + for (i =3D 0; i < len; i++) > + if (ice_gen_key_word(val[i], upd ? upd[i] : 0xff, > + dc ? dc[i] : 0, nm ? 
nm[i] : 0, > + key + off + i, key + half_size + off + i)) > + return ICE_ERR_CFG; >=20 > - return &bld->buf; > + return ICE_SUCCESS; > } >=20 > /** > @@ -3956,6 +2132,18 @@ static void ice_fill_tbl(struct ice_hw *hw, enum > ice_block block_id, u32 sid) > } > } >=20 > +/** > + * ice_init_flow_profs - init flow profile locks and list heads > + * @hw: pointer to the hardware structure > + * @blk_idx: HW block index > + */ > +static > +void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx) > +{ > + ice_init_lock(&hw->fl_profs_locks[blk_idx]); > + INIT_LIST_HEAD(&hw->fl_profs[blk_idx]); > +} > + > /** > * ice_fill_blk_tbls - Read package context for tables > * @hw: pointer to the hardware structure > @@ -4098,17 +2286,6 @@ void ice_free_hw_tbls(struct ice_hw *hw) > ice_memset(hw->blk, 0, sizeof(hw->blk), ICE_NONDMA_MEM); > } >=20 > -/** > - * ice_init_flow_profs - init flow profile locks and list heads > - * @hw: pointer to the hardware structure > - * @blk_idx: HW block index > - */ > -static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx) > -{ > - ice_init_lock(&hw->fl_profs_locks[blk_idx]); > - INIT_LIST_HEAD(&hw->fl_profs[blk_idx]); > -} > - > /** > * ice_clear_hw_tbls - clear HW tables and flow profiles > * @hw: pointer to the hardware structure > diff --git a/drivers/net/ice/base/ice_flex_pipe.h > b/drivers/net/ice/base/ice_flex_pipe.h > index ab897de4f3..aab765e68f 100644 > --- a/drivers/net/ice/base/ice_flex_pipe.h > +++ b/drivers/net/ice/base/ice_flex_pipe.h > @@ -7,23 +7,6 @@ >=20 > #include "ice_type.h" >=20 > -/* Package minimal version supported */ > -#define ICE_PKG_SUPP_VER_MAJ 1 > -#define ICE_PKG_SUPP_VER_MNR 3 > - > -/* Package format version */ > -#define ICE_PKG_FMT_VER_MAJ 1 > -#define ICE_PKG_FMT_VER_MNR 0 > -#define ICE_PKG_FMT_VER_UPD 0 > -#define ICE_PKG_FMT_VER_DFT 0 > - > -#define ICE_PKG_CNT 4 > - > -enum ice_status > -ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count); > -enum ice_status > -ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type > access); > -void ice_release_change_lock(struct ice_hw *hw); > enum ice_status > ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_= idx, > u8 *prot, u16 *off); > @@ -36,12 +19,6 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum > ice_prof_type type, > void > ice_init_prof_result_bm(struct ice_hw *hw); > enum ice_status > -ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, > - ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list); > -enum ice_status > -ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count); > -u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld); > -enum ice_status > ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, > u16 buf_size, struct ice_sq_cd *cd); > bool > @@ -79,31 +56,31 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum > ice_block blk, u16 vsi, u64 hdl); > enum ice_status > ice_flow_assoc_hw_prof(struct ice_hw *hw, enum ice_block blk, > u16 dest_vsi_handle, u16 fdir_vsi_handle, int id); > -enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len); > -enum ice_status > -ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len); > enum ice_status ice_init_hw_tbls(struct ice_hw *hw); > -void ice_free_seg(struct ice_hw *hw); > void ice_fill_blk_tbls(struct ice_hw *hw); > void ice_clear_hw_tbls(struct ice_hw *hw); > void ice_free_hw_tbls(struct ice_hw *hw); > enum ice_status > ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id); > -struct ice_buf_build * > 
-ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size, > - void **section); > -struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld); > -void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld); >=20 > enum ice_status > ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off= , > u16 len); > -void * > -ice_pkg_enum_entry(struct ice_seg *ice_seg, struct ice_pkg_enum *state, > - u32 sect_type, u32 *offset, > - void *(*handler)(u32 sect_type, void *section, > - u32 index, u32 *offset)); > -void * > -ice_pkg_enum_section(struct ice_seg *ice_seg, struct ice_pkg_enum *state= , > - u32 sect_type); > + > +void ice_fill_blk_tbls(struct ice_hw *hw); > + > +/* To support tunneling entries by PF, the package will append the PF > number to > + * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, > TNL_VXLAN_PF2, etc. > + */ > +#define ICE_TNL_PRE "TNL_" > +/* For supporting double VLAN mode, it is necessary to enable or disable > certain > + * boost tcam entries. The metadata labels names that match the > following > + * prefixes will be saved to allow enabling double VLAN mode. > + */ > +#define ICE_DVM_PRE "BOOST_MAC_VLAN_DVM" /* enable > these entries */ > +#define ICE_SVM_PRE "BOOST_MAC_VLAN_SVM" /* disable these > entries */ > + > +void ice_add_tunnel_hint(struct ice_hw *hw, char *label_name, u16 val); > +void ice_add_dvm_hint(struct ice_hw *hw, u16 val, bool enable); > + > #endif /* _ICE_FLEX_PIPE_H_ */ > diff --git a/drivers/net/ice/base/ice_flex_type.h > b/drivers/net/ice/base/ice_flex_type.h > index 09a02fe9ac..d45653b637 100644 > --- a/drivers/net/ice/base/ice_flex_type.h > +++ b/drivers/net/ice/base/ice_flex_type.h > @@ -14,6 +14,7 @@ struct ice_fv_word { > u16 off; /* Offset within the protocol header */ > u8 resvrd; > }; > + > #pragma pack() >=20 > #define ICE_MAX_NUM_PROFILES 256 > @@ -23,251 +24,6 @@ struct ice_fv { > struct ice_fv_word ew[ICE_MAX_FV_WORDS]; > }; >=20 > -/* Package and segment headers and tables */ > -struct ice_pkg_hdr { > - struct ice_pkg_ver pkg_format_ver; > - __le32 seg_count; > - __le32 seg_offset[STRUCT_HACK_VAR_LEN]; > -}; > - > -/* generic segment */ > -struct ice_generic_seg_hdr { > -#define SEGMENT_TYPE_METADATA 0x00000001 > -#define SEGMENT_TYPE_ICE_E810 0x00000010 > - __le32 seg_type; > - struct ice_pkg_ver seg_format_ver; > - __le32 seg_size; > - char seg_id[ICE_PKG_NAME_SIZE]; > -}; > - > -/* ice specific segment */ > - > -union ice_device_id { > - struct { > - __le16 device_id; > - __le16 vendor_id; > - } dev_vend_id; > - __le32 id; > -}; > - > -struct ice_device_id_entry { > - union ice_device_id device; > - union ice_device_id sub_device; > -}; > - > -struct ice_seg { > - struct ice_generic_seg_hdr hdr; > - __le32 device_table_count; > - struct ice_device_id_entry device_table[STRUCT_HACK_VAR_LEN]; > -}; > - > -struct ice_nvm_table { > - __le32 table_count; > - __le32 vers[STRUCT_HACK_VAR_LEN]; > -}; > - > -struct ice_buf { > -#define ICE_PKG_BUF_SIZE 4096 > - u8 buf[ICE_PKG_BUF_SIZE]; > -}; > - > -struct ice_buf_table { > - __le32 buf_count; > - struct ice_buf buf_array[STRUCT_HACK_VAR_LEN]; > -}; > - > -/* global metadata specific segment */ > -struct ice_global_metadata_seg { > - struct ice_generic_seg_hdr hdr; > - struct ice_pkg_ver pkg_ver; > - __le32 rsvd; > - char pkg_name[ICE_PKG_NAME_SIZE]; > -}; > - > -#define ICE_MIN_S_OFF 12 > -#define ICE_MAX_S_OFF 4095 > -#define ICE_MIN_S_SZ 1 > -#define ICE_MAX_S_SZ 4084 > - > -/* section information */ > -struct ice_section_entry { > - __le32 type; > - 
__le16 offset; > - __le16 size; > -}; > - > -#define ICE_MIN_S_COUNT 1 > -#define ICE_MAX_S_COUNT 511 > -#define ICE_MIN_S_DATA_END 12 > -#define ICE_MAX_S_DATA_END 4096 > - > -#define ICE_METADATA_BUF 0x80000000 > - > -struct ice_buf_hdr { > - __le16 section_count; > - __le16 data_end; > - struct ice_section_entry section_entry[STRUCT_HACK_VAR_LEN]; > -}; > - > -#define ICE_MAX_ENTRIES_IN_BUF(hd_sz, ent_sz) ((ICE_PKG_BUF_SIZE - \ > - ice_struct_size((struct ice_buf_hdr *)0, section_entry, 1) - (hd_sz)) /= \ > - (ent_sz)) > - > -/* ice package section IDs */ > -#define ICE_SID_METADATA 1 > -#define ICE_SID_XLT0_SW 10 > -#define ICE_SID_XLT_KEY_BUILDER_SW 11 > -#define ICE_SID_XLT1_SW 12 > -#define ICE_SID_XLT2_SW 13 > -#define ICE_SID_PROFID_TCAM_SW 14 > -#define ICE_SID_PROFID_REDIR_SW 15 > -#define ICE_SID_FLD_VEC_SW 16 > -#define ICE_SID_CDID_KEY_BUILDER_SW 17 > -#define ICE_SID_CDID_REDIR_SW 18 > - > -#define ICE_SID_XLT0_ACL 20 > -#define ICE_SID_XLT_KEY_BUILDER_ACL 21 > -#define ICE_SID_XLT1_ACL 22 > -#define ICE_SID_XLT2_ACL 23 > -#define ICE_SID_PROFID_TCAM_ACL 24 > -#define ICE_SID_PROFID_REDIR_ACL 25 > -#define ICE_SID_FLD_VEC_ACL 26 > -#define ICE_SID_CDID_KEY_BUILDER_ACL 27 > -#define ICE_SID_CDID_REDIR_ACL 28 > - > -#define ICE_SID_XLT0_FD 30 > -#define ICE_SID_XLT_KEY_BUILDER_FD 31 > -#define ICE_SID_XLT1_FD 32 > -#define ICE_SID_XLT2_FD 33 > -#define ICE_SID_PROFID_TCAM_FD 34 > -#define ICE_SID_PROFID_REDIR_FD 35 > -#define ICE_SID_FLD_VEC_FD 36 > -#define ICE_SID_CDID_KEY_BUILDER_FD 37 > -#define ICE_SID_CDID_REDIR_FD 38 > - > -#define ICE_SID_XLT0_RSS 40 > -#define ICE_SID_XLT_KEY_BUILDER_RSS 41 > -#define ICE_SID_XLT1_RSS 42 > -#define ICE_SID_XLT2_RSS 43 > -#define ICE_SID_PROFID_TCAM_RSS 44 > -#define ICE_SID_PROFID_REDIR_RSS 45 > -#define ICE_SID_FLD_VEC_RSS 46 > -#define ICE_SID_CDID_KEY_BUILDER_RSS 47 > -#define ICE_SID_CDID_REDIR_RSS 48 > - > -#define ICE_SID_RXPARSER_CAM 50 > -#define ICE_SID_RXPARSER_NOMATCH_CAM 51 > -#define ICE_SID_RXPARSER_IMEM 52 > -#define ICE_SID_RXPARSER_XLT0_BUILDER 53 > -#define ICE_SID_RXPARSER_NODE_PTYPE 54 > -#define ICE_SID_RXPARSER_MARKER_PTYPE 55 > -#define ICE_SID_RXPARSER_BOOST_TCAM 56 > -#define ICE_SID_RXPARSER_PROTO_GRP 57 > -#define ICE_SID_RXPARSER_METADATA_INIT 58 > -#define ICE_SID_RXPARSER_XLT0 59 > - > -#define ICE_SID_TXPARSER_CAM 60 > -#define ICE_SID_TXPARSER_NOMATCH_CAM 61 > -#define ICE_SID_TXPARSER_IMEM 62 > -#define ICE_SID_TXPARSER_XLT0_BUILDER 63 > -#define ICE_SID_TXPARSER_NODE_PTYPE 64 > -#define ICE_SID_TXPARSER_MARKER_PTYPE 65 > -#define ICE_SID_TXPARSER_BOOST_TCAM 66 > -#define ICE_SID_TXPARSER_PROTO_GRP 67 > -#define ICE_SID_TXPARSER_METADATA_INIT 68 > -#define ICE_SID_TXPARSER_XLT0 69 > - > -#define ICE_SID_RXPARSER_INIT_REDIR 70 > -#define ICE_SID_TXPARSER_INIT_REDIR 71 > -#define ICE_SID_RXPARSER_MARKER_GRP 72 > -#define ICE_SID_TXPARSER_MARKER_GRP 73 > -#define ICE_SID_RXPARSER_LAST_PROTO 74 > -#define ICE_SID_TXPARSER_LAST_PROTO 75 > -#define ICE_SID_RXPARSER_PG_SPILL 76 > -#define ICE_SID_TXPARSER_PG_SPILL 77 > -#define ICE_SID_RXPARSER_NOMATCH_SPILL 78 > -#define ICE_SID_TXPARSER_NOMATCH_SPILL 79 > - > -#define ICE_SID_XLT0_PE 80 > -#define ICE_SID_XLT_KEY_BUILDER_PE 81 > -#define ICE_SID_XLT1_PE 82 > -#define ICE_SID_XLT2_PE 83 > -#define ICE_SID_PROFID_TCAM_PE 84 > -#define ICE_SID_PROFID_REDIR_PE 85 > -#define ICE_SID_FLD_VEC_PE 86 > -#define ICE_SID_CDID_KEY_BUILDER_PE 87 > -#define ICE_SID_CDID_REDIR_PE 88 > - > -#define ICE_SID_RXPARSER_FLAG_REDIR 97 > - > -/* Label Metadata section IDs */ > 
-#define ICE_SID_LBL_FIRST 0x80000010 > -#define ICE_SID_LBL_RXPARSER_IMEM 0x80000010 > -#define ICE_SID_LBL_TXPARSER_IMEM 0x80000011 > -#define ICE_SID_LBL_RESERVED_12 0x80000012 > -#define ICE_SID_LBL_RESERVED_13 0x80000013 > -#define ICE_SID_LBL_RXPARSER_MARKER 0x80000014 > -#define ICE_SID_LBL_TXPARSER_MARKER 0x80000015 > -#define ICE_SID_LBL_PTYPE 0x80000016 > -#define ICE_SID_LBL_PROTOCOL_ID 0x80000017 > -#define ICE_SID_LBL_RXPARSER_TMEM 0x80000018 > -#define ICE_SID_LBL_TXPARSER_TMEM 0x80000019 > -#define ICE_SID_LBL_RXPARSER_PG 0x8000001A > -#define ICE_SID_LBL_TXPARSER_PG 0x8000001B > -#define ICE_SID_LBL_RXPARSER_M_TCAM 0x8000001C > -#define ICE_SID_LBL_TXPARSER_M_TCAM 0x8000001D > -#define ICE_SID_LBL_SW_PROFID_TCAM 0x8000001E > -#define ICE_SID_LBL_ACL_PROFID_TCAM 0x8000001F > -#define ICE_SID_LBL_PE_PROFID_TCAM 0x80000020 > -#define ICE_SID_LBL_RSS_PROFID_TCAM 0x80000021 > -#define ICE_SID_LBL_FD_PROFID_TCAM 0x80000022 > -#define ICE_SID_LBL_FLAG 0x80000023 > -#define ICE_SID_LBL_REG 0x80000024 > -#define ICE_SID_LBL_SW_PTG 0x80000025 > -#define ICE_SID_LBL_ACL_PTG 0x80000026 > -#define ICE_SID_LBL_PE_PTG 0x80000027 > -#define ICE_SID_LBL_RSS_PTG 0x80000028 > -#define ICE_SID_LBL_FD_PTG 0x80000029 > -#define ICE_SID_LBL_SW_VSIG 0x8000002A > -#define ICE_SID_LBL_ACL_VSIG 0x8000002B > -#define ICE_SID_LBL_PE_VSIG 0x8000002C > -#define ICE_SID_LBL_RSS_VSIG 0x8000002D > -#define ICE_SID_LBL_FD_VSIG 0x8000002E > -#define ICE_SID_LBL_PTYPE_META 0x8000002F > -#define ICE_SID_LBL_SW_PROFID 0x80000030 > -#define ICE_SID_LBL_ACL_PROFID 0x80000031 > -#define ICE_SID_LBL_PE_PROFID 0x80000032 > -#define ICE_SID_LBL_RSS_PROFID 0x80000033 > -#define ICE_SID_LBL_FD_PROFID 0x80000034 > -#define ICE_SID_LBL_RXPARSER_MARKER_GRP 0x80000035 > -#define ICE_SID_LBL_TXPARSER_MARKER_GRP 0x80000036 > -#define ICE_SID_LBL_RXPARSER_PROTO 0x80000037 > -#define ICE_SID_LBL_TXPARSER_PROTO 0x80000038 > -/* The following define MUST be updated to reflect the last label sectio= n > ID */ > -#define ICE_SID_LBL_LAST 0x80000038 > - > -enum ice_block { > - ICE_BLK_SW =3D 0, > - ICE_BLK_ACL, > - ICE_BLK_FD, > - ICE_BLK_RSS, > - ICE_BLK_PE, > - ICE_BLK_COUNT > -}; > - > -enum ice_sect { > - ICE_XLT0 =3D 0, > - ICE_XLT_KB, > - ICE_XLT1, > - ICE_XLT2, > - ICE_PROF_TCAM, > - ICE_PROF_REDIR, > - ICE_VEC_TBL, > - ICE_CDID_KB, > - ICE_CDID_REDIR, > - ICE_SECT_COUNT > -}; > - > /* Packet Type (PTYPE) values */ > #define ICE_PTYPE_MAC_PAY 1 > #define ICE_MAC_PTP 2 > @@ -662,25 +418,6 @@ struct ice_boost_tcam_section { > sizeof(struct ice_boost_tcam_entry), \ > sizeof(struct ice_boost_tcam_entry)) >=20 > -/* package Marker PType TCAM entry */ > -struct ice_marker_ptype_tcam_entry { > -#define ICE_MARKER_PTYPE_TCAM_ADDR_MAX 1024 > - __le16 addr; > - __le16 ptype; > - u8 keys[20]; > -}; > - > -struct ice_marker_ptype_tcam_section { > - __le16 count; > - __le16 reserved; > - struct ice_marker_ptype_tcam_entry tcam[STRUCT_HACK_VAR_LEN]; > -}; > - > -#define ICE_MAX_MARKER_PTYPE_TCAMS_IN_BUF > ICE_MAX_ENTRIES_IN_BUF( \ > - ice_struct_size((struct ice_marker_ptype_tcam_section *)0, tcam, 1) > - \ > - sizeof(struct ice_marker_ptype_tcam_entry), \ > - sizeof(struct ice_marker_ptype_tcam_entry)) > - > struct ice_xlt1_section { > __le16 count; > __le16 offset; > @@ -699,27 +436,6 @@ struct ice_prof_redir_section { > u8 redir_value[STRUCT_HACK_VAR_LEN]; > }; >=20 > -/* package buffer building */ > - > -struct ice_buf_build { > - struct ice_buf buf; > - u16 reserved_section_table_entries; > -}; > - > -struct ice_pkg_enum { > - struct 
ice_buf_table *buf_table; > - u32 buf_idx; > - > - u32 type; > - struct ice_buf_hdr *buf; > - u32 sect_idx; > - void *sect; > - u32 sect_type; > - > - u32 entry_idx; > - void *(*handler)(u32 sect_type, void *section, u32 index, u32 > *offset); > -}; > - > /* Tunnel enabling */ >=20 > enum ice_tunnel_type { > diff --git a/drivers/net/ice/base/ice_switch.c > b/drivers/net/ice/base/ice_switch.c > index 513623a0a4..ad61dde397 100644 > --- a/drivers/net/ice/base/ice_switch.c > +++ b/drivers/net/ice/base/ice_switch.c > @@ -7417,37 +7417,18 @@ ice_create_recipe_group(struct ice_hw *hw, > struct ice_sw_recipe *rm, > * @hw: pointer to hardware structure > * @lkups: lookup elements or match criteria for the advanced recipe, on= e > * structure per protocol header > - * @lkups_cnt: number of protocols > * @bm: bitmap of field vectors to consider > * @fv_list: pointer to a list that holds the returned field vectors > */ > static enum ice_status > -ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 > lkups_cnt, > +ice_get_fv(struct ice_hw *hw, struct ice_prot_lkup_ext *lkups, > ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) > { > - enum ice_status status; > - u8 *prot_ids; > - u16 i; > - > - if (!lkups_cnt) > + if (!lkups->n_val_words) > return ICE_SUCCESS; >=20 > - prot_ids =3D (u8 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids)); > - if (!prot_ids) > - return ICE_ERR_NO_MEMORY; > - > - for (i =3D 0; i < lkups_cnt; i++) > - if (!ice_prot_type_to_id(lkups[i].type, &prot_ids[i])) { > - status =3D ICE_ERR_CFG; > - goto free_mem; > - } > - > /* Find field vectors that include all specified protocol types */ > - status =3D ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, bm, fv_list); > - > -free_mem: > - ice_free(hw, prot_ids); > - return status; > + return ice_get_sw_fv_list(hw, lkups, bm, fv_list); > } >=20 > /** > @@ -7840,16 +7821,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct > ice_adv_lkup_elem *lkups, > */ > ice_get_compat_fv_bitmap(hw, rinfo, fv_bitmap); >=20 > - /* If it is a packet to match any, add a lookup element to match > direction > - * flag of source interface. > - */ > - if (rinfo->tun_type =3D=3D ICE_SW_TUN_AND_NON_TUN && > - lkups_cnt < ICE_MAX_CHAIN_WORDS) { > - lkups[lkups_cnt].type =3D ICE_FLG_DIR; > - lkups_cnt++; > - } > - > - status =3D ice_get_fv(hw, lkups, lkups_cnt, fv_bitmap, &rm->fv_list); > + status =3D ice_get_fv(hw, lkup_exts, fv_bitmap, &rm->fv_list); > if (status) > goto err_unroll; >=20 > diff --git a/drivers/net/ice/base/ice_type.h > b/drivers/net/ice/base/ice_type.h > index a17accff19..d94fdcda67 100644 > --- a/drivers/net/ice/base/ice_type.h > +++ b/drivers/net/ice/base/ice_type.h > @@ -5,54 +5,15 @@ > #ifndef _ICE_TYPE_H_ > #define _ICE_TYPE_H_ >=20 > -#define ETH_ALEN 6 > - > -#define ETH_HEADER_LEN 14 > - > -#define BIT(a) (1UL << (a)) > -#define BIT_ULL(a) (1ULL << (a)) > - > -#define BITS_PER_BYTE 8 > - > -#define _FORCE_ > - > -#define ICE_BYTES_PER_WORD 2 > -#define ICE_BYTES_PER_DWORD 4 > -#define ICE_MAX_TRAFFIC_CLASS 8 > - > -/** > - * ROUND_UP - round up to next arbitrary multiple (not a power of 2) > - * @a: value to round up > - * @b: arbitrary multiple > - * > - * Round up to the next multiple of the arbitrary b. > - * Note, when b is a power of 2 use ICE_ALIGN() instead. 
> - */ > -#define ROUND_UP(a, b) ((b) * DIVIDE_AND_ROUND_UP((a), (b))) > - > -#define MIN_T(_t, _a, _b) min((_t)(_a), (_t)(_b)) > - > -#define IS_ASCII(_ch) ((_ch) < 0x80) > - > -#define STRUCT_HACK_VAR_LEN > -/** > - * ice_struct_size - size of struct with C99 flexible array member > - * @ptr: pointer to structure > - * @field: flexible array member (last member of the structure) > - * @num: number of elements of that flexible array member > - */ > -#define ice_struct_size(ptr, field, num) \ > - (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num)) > - > -#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0])) > - > +#include "ice_defs.h" > #include "ice_status.h" > #include "ice_hw_autogen.h" > #include "ice_devids.h" > #include "ice_osdep.h" > #include "ice_bitops.h" /* Must come before ice_controlq.h */ > -#include "ice_controlq.h" > #include "ice_lan_tx_rx.h" > +#include "ice_ddp.h" > +#include "ice_controlq.h" > #include "ice_flex_type.h" > #include "ice_protocol_type.h" > #include "ice_sbq_cmd.h" > @@ -191,11 +152,6 @@ enum ice_aq_res_ids { > #define ICE_CHANGE_LOCK_TIMEOUT 1000 > #define ICE_GLOBAL_CFG_LOCK_TIMEOUT 3000 >=20 > -enum ice_aq_res_access_type { > - ICE_RES_READ =3D 1, > - ICE_RES_WRITE > -}; > - > struct ice_driver_ver { > u8 major_ver; > u8 minor_ver; > @@ -248,6 +204,7 @@ enum ice_mac_type { > ICE_MAC_UNKNOWN =3D 0, > ICE_MAC_E810, > ICE_MAC_GENERIC, > + ICE_MAC_GENERIC_3K, > }; >=20 > /* Media Types */ > @@ -636,6 +593,7 @@ struct ice_hw_common_caps { > #define ICE_EXT_TOPO_DEV_IMG_LOAD_EN BIT(0) > bool ext_topo_dev_img_prog_en[ICE_EXT_TOPO_DEV_IMG_COUNT]; > #define ICE_EXT_TOPO_DEV_IMG_PROG_EN BIT(1) > + bool tx_sched_topo_comp_mode_en; > }; >=20 > /* IEEE 1588 TIME_SYNC specific info */ > @@ -1247,7 +1205,9 @@ struct ice_hw { > /* Active package version (currently active) */ > struct ice_pkg_ver active_pkg_ver; > u32 pkg_seg_id; > + u32 pkg_sign_type; > u32 active_track_id; > + u8 pkg_has_signing_seg:1; > u8 active_pkg_name[ICE_PKG_NAME_SIZE]; > u8 active_pkg_in_nvm; >=20 > diff --git a/drivers/net/ice/base/ice_vlan_mode.c > b/drivers/net/ice/base/ice_vlan_mode.c > index 29c6509fc5..d1003a5a89 100644 > --- a/drivers/net/ice/base/ice_vlan_mode.c > +++ b/drivers/net/ice/base/ice_vlan_mode.c > @@ -4,6 +4,7 @@ >=20 > #include "ice_common.h" >=20 > +#include "ice_ddp.h" > /** > * ice_pkg_get_supported_vlan_mode - chk if DDP supports Double VLAN > mode (DVM) > * @hw: pointer to the HW struct > diff --git a/drivers/net/ice/base/meson.build > b/drivers/net/ice/base/meson.build > index 3cf4ce05fa..41ed2d96c6 100644 > --- a/drivers/net/ice/base/meson.build > +++ b/drivers/net/ice/base/meson.build > @@ -26,6 +26,7 @@ sources =3D [ > 'ice_flg_rd.c', > 'ice_xlt_kb.c', > 'ice_parser_rt.c', > + 'ice_ddp.c', > ] >=20 > error_cflags =3D [ > -- > 2.31.1
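
A note for anyone trying to follow the key-generation code that moves in with this refactor: ice_gen_key_word() expands every input bit into a (key, key invert) bit pair, and the Input/Result table in its comment is the easiest way to see how the don't-care and never-match masks interact with the value. Below is a minimal standalone sketch of that per-bit encoding. The ICE_*_KEY/ICE_*_KEYINV values are copied in locally as I recall them from the base code, so treat them as an assumption and double check against ice_flex_pipe.c; the sketch also drops the 'valid' mask handling (it assumes all 8 bits are being updated).

#include <stdint.h>
#include <stdio.h>

/* Assumed per-bit encodings; verify against ice_flex_pipe.c. */
#define ICE_DC_KEY      0x1     /* don't care */
#define ICE_DC_KEYINV   0x1
#define ICE_NM_KEY      0x0     /* never match */
#define ICE_NM_KEYINV   0x0
#define ICE_0_KEY       0x1     /* match 0 */
#define ICE_0_KEYINV    0x0
#define ICE_1_KEY       0x0     /* match 1 */
#define ICE_1_KEYINV    0x1

/* Encode one byte of value/don't-care/never-match into a key byte and a
 * key-invert byte, mirroring the per-bit decision tree in
 * ice_gen_key_word(); bit i of the inputs ends up as bit i of the outputs.
 */
static void demo_gen_key_word(uint8_t val, uint8_t dc, uint8_t nm,
                              uint8_t *key, uint8_t *key_inv)
{
    int i;

    *key = 0;
    *key_inv = 0;
    for (i = 0; i < 8; i++) {
        *key >>= 1;
        *key_inv >>= 1;

        if (dc & 0x1) {                 /* don't care bit */
            *key |= ICE_DC_KEY << 7;
            *key_inv |= ICE_DC_KEYINV << 7;
        } else if (nm & 0x1) {          /* never match bit */
            *key |= ICE_NM_KEY << 7;
            *key_inv |= ICE_NM_KEYINV << 7;
        } else if (val & 0x1) {         /* exact 1 match */
            *key |= ICE_1_KEY << 7;
            *key_inv |= ICE_1_KEYINV << 7;
        } else {                        /* exact 0 match */
            *key |= ICE_0_KEY << 7;
            *key_inv |= ICE_0_KEYINV << 7;
        }

        val >>= 1;
        dc >>= 1;
        nm >>= 1;
    }
}

int main(void)
{
    uint8_t key, key_inv;

    /* The worked example from the function comment:
     * val b010101, dont_care b001100, never_mtch b000011.
     */
    demo_gen_key_word(0x15, 0x0c, 0x03, &key, &key_inv);
    printf("key 0x%02x key_inv 0x%02x\n", key, key_inv);
    return 0;
}

With the encodings above, the low six (key, key invert) bit pairs printed here should correspond to the worked result in the ice_gen_key_word() comment.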
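
On ice_set_key() itself, one thing that is easy to miss from the prototype alone is the buffer layout: 'key' holds the key bytes in its first half and the key-invert bytes in its second half, which is why 'size' must be even and why 'off' and 'len' are checked against size / 2. A short usage sketch follows; the sizes and values are hypothetical and only meant to show the layout and the NULL defaults for upd/dc/nm, and it assumes the driver headers (ice_flex_pipe.h) are available in the build.

#include "ice_flex_pipe.h"

/* Hypothetical key buffer: 4 key bytes followed by 4 key-invert bytes. */
#define DEMO_KEY_SIZE 8

static enum ice_status demo_build_key(void)
{
    u8 key[DEMO_KEY_SIZE] = { 0 };
    u8 val[2] = { 0x12, 0x34 };     /* value portion to match */
    u8 dc[2]  = { 0x00, 0x0f };     /* low nibble of 2nd byte: don't care */

    /* upd == NULL -> update all bits, nm == NULL -> no never-match bits.
     * off/len are relative to the key half, so this fills bytes 1..2 of
     * the key half and bytes 5..6 of the invert half; off + len must not
     * exceed size / 2 or ice_set_key() returns ICE_ERR_CFG.
     */
    return ice_set_key(key, DEMO_KEY_SIZE, val, NULL, dc, NULL, 1, 2);
}

The never-match mask is the only input with an extra restriction: ice_bits_max_set() caps it at ICE_NVR_MTCH_BITS_MAX (one set bit) to avoid the excessive power draw mentioned in the comment.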
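
Also worth calling out for reviewers of the later patches in this series: the ICE_TNL_PRE, ICE_DVM_PRE and ICE_SVM_PRE prefixes now exported from ice_flex_pipe.h are matched against the label names carried in the package metadata, and the hits are fed to ice_add_tunnel_hint()/ice_add_dvm_hint(). A rough illustration of that routing is below; the real walk lives in the DDP init path, and the helper name here is made up for the example.

#include <string.h>
#include "ice_flex_pipe.h"

/* Illustrative only: route one package label to the hint helpers based on
 * its prefix, e.g. "TNL_VXLAN_PF0" records a tunnel boost TCAM entry, and
 * the BOOST_MAC_VLAN_DVM / BOOST_MAC_VLAN_SVM labels record the entries to
 * enable or disable when switching to double VLAN mode.
 */
static void demo_classify_label(struct ice_hw *hw, char *label_name, u16 val)
{
    if (!strncmp(label_name, ICE_TNL_PRE, strlen(ICE_TNL_PRE))) {
        ice_add_tunnel_hint(hw, label_name, val);
    } else if (!strncmp(label_name, ICE_DVM_PRE, strlen(ICE_DVM_PRE))) {
        ice_add_dvm_hint(hw, val, true);        /* enable in DVM */
    } else if (!strncmp(label_name, ICE_SVM_PRE, strlen(ICE_SVM_PRE))) {
        ice_add_dvm_hint(hw, val, false);       /* disable in DVM */
    }
}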