From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Zhang, Qi Z"
To: "Nicolau, Radu" , "Wu, Jingjing" , "Xing, Beilei" , Ray Kinsella
CC: "dev@dpdk.org" , "Doherty, Declan" , "Sinha, Abhijit" ,
	"Richardson, Bruce" , "Ananyev, Konstantin"
Date: Wed, 27 Oct 2021 00:36:09 +0000
Message-ID: <044f2344b5d14888a2fc022f29bf231d@intel.com>
References: <20210909142428.750634-1-radu.nicolau@intel.com>
	<20211026135657.2034763-1-radu.nicolau@intel.com>
	<20211026135657.2034763-5-radu.nicolau@intel.com>
In-Reply-To: <20211026135657.2034763-5-radu.nicolau@intel.com>
Subject: Re: [dpdk-dev] [PATCH v12 4/7] net/iavf: add iAVF IPsec inline
	crypto support

> -----Original Message-----
> From: Nicolau, Radu
> Sent: Tuesday, October 26, 2021 9:57 PM
> To: Wu, Jingjing ; Xing, Beilei ; Ray Kinsella
> Cc: dev@dpdk.org; Doherty, Declan ; Sinha, Abhijit ;
> Zhang, Qi Z ; Richardson, Bruce ; Ananyev, Konstantin ;
> Nicolau, Radu
> Subject: [PATCH v12 4/7] net/iavf: add iAVF IPsec inline crypto support
>
> Add support for inline crypto for IPsec, for ESP transport and
> tunnel over IPv4 and IPv6, as well as supporting the offload for
> ESP over UDP, and in conjunction with TSO for UDP and TCP flows.
> Implement support for rte_security packet metadata.
>
> Add definitions for the IPsec descriptors, and extend offload support
> in the data and context descriptors.
>
> Add support to the virtual channel mailbox for IPsec Crypto request
> operations. IPsec Crypto requests receive an initial acknowledgment
> from the physical function driver confirming receipt of the request,
> and then an asynchronous response with the success/failure of the
> request, including any response data.
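For context: the offload described above is consumed through the generic
rte_security API. A minimal application-side sketch, assuming the port
advertises DEV_TX_OFFLOAD_SECURITY; the port/mempool/mbuf variable names
here are hypothetical:

	void *sctx = rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session_conf sconf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		/* .ipsec and .crypto_xform filled in per SA */
	};
	/* create the SA session against the ethdev security context */
	struct rte_security_session *sess =
		rte_security_session_create(sctx, &sconf, sess_mp, sess_priv_mp);
	/* attach the per-packet driver metadata before the Tx burst */
	rte_security_set_pkt_metadata(sctx, sess, mbuf, NULL);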
>
> Add enhanced descriptor debugging.
>
> Refactor the scalar Tx burst function to support integration of the
> offload.
>
> Signed-off-by: Declan Doherty
> Signed-off-by: Abhijit Sinha
> Signed-off-by: Radu Nicolau
> Reviewed-by: Jingjing Wu
> ---
>  drivers/net/iavf/iavf.h                       |   10 +
>  drivers/net/iavf/iavf_ethdev.c                |   41 +-
>  drivers/net/iavf/iavf_generic_flow.c          |   15 +
>  drivers/net/iavf/iavf_generic_flow.h          |    2 +
>  drivers/net/iavf/iavf_ipsec_crypto.c          | 1894 +++++++++++++++++
>  drivers/net/iavf/iavf_ipsec_crypto.h          |  160 ++
>  .../net/iavf/iavf_ipsec_crypto_capabilities.h |  383 ++++
>  drivers/net/iavf/iavf_rxtx.c                  |  202 +-
>  drivers/net/iavf/iavf_rxtx.h                  |  107 +-
>  drivers/net/iavf/iavf_vchnl.c                 |   29 +
>  drivers/net/iavf/meson.build                  |    3 +-
>  drivers/net/iavf/rte_pmd_iavf.h               |    1 +
>  drivers/net/iavf/version.map                  |    3 +
>  13 files changed, 2823 insertions(+), 27 deletions(-)
>  create mode 100644 drivers/net/iavf/iavf_ipsec_crypto.c
>  create mode 100644 drivers/net/iavf/iavf_ipsec_crypto.h
>  create mode 100644 drivers/net/iavf/iavf_ipsec_crypto_capabilities.h
>
> diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
> index efc90f9072..6df31a649e 100644
> --- a/drivers/net/iavf/iavf.h
> +++ b/drivers/net/iavf/iavf.h
> @@ -221,6 +221,7 @@ struct iavf_info {
> 	rte_spinlock_t flow_ops_lock;
> 	struct iavf_parser_list rss_parser_list;
> 	struct iavf_parser_list dist_parser_list;
> +	struct iavf_parser_list ipsec_crypto_parser_list;
>
> 	struct iavf_fdir_info fdir; /* flow director info */
> 	/* indicate large VF support enabled or not */
> @@ -245,6 +246,7 @@ enum iavf_proto_xtr_type {
> 	IAVF_PROTO_XTR_IPV6_FLOW,
> 	IAVF_PROTO_XTR_TCP,
> 	IAVF_PROTO_XTR_IP_OFFSET,
> +	IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID,
> 	IAVF_PROTO_XTR_MAX,
> };
>
> @@ -256,11 +258,14 @@ struct iavf_devargs {
> 	uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
> };
>
> +struct iavf_security_ctx;
> +
> /* Structure to store private data for each VF instance.
*/ > struct iavf_adapter { > struct iavf_hw hw; > struct rte_eth_dev_data *dev_data; > struct iavf_info vf; > + struct iavf_security_ctx *security_ctx; >=20 > bool rx_bulk_alloc_allowed; > /* For vector PMD */ > @@ -279,6 +284,8 @@ struct iavf_adapter { > (&((struct iavf_adapter *)adapter)->vf) > #define IAVF_DEV_PRIVATE_TO_HW(adapter) \ > (&((struct iavf_adapter *)adapter)->hw) > +#define IAVF_DEV_PRIVATE_TO_IAVF_SECURITY_CTX(adapter) \ > + (((struct iavf_adapter *)adapter)->security_ctx) >=20 > /* IAVF_VSI_TO */ > #define IAVF_VSI_TO_HW(vsi) \ > @@ -421,5 +428,8 @@ int iavf_set_q_tc_map(struct rte_eth_dev *dev, > uint16_t size); > void iavf_tm_conf_init(struct rte_eth_dev *dev); > void iavf_tm_conf_uninit(struct rte_eth_dev *dev); > +int iavf_ipsec_crypto_request(struct iavf_adapter *adapter, > + uint8_t *msg, size_t msg_len, > + uint8_t *resp_msg, size_t resp_msg_len); > extern const struct rte_tm_ops iavf_tm_ops; > #endif /* _IAVF_ETHDEV_H_ */ > diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethde= v.c > index f892306f18..dba505494f 100644 > --- a/drivers/net/iavf/iavf_ethdev.c > +++ b/drivers/net/iavf/iavf_ethdev.c > @@ -30,6 +30,7 @@ > #include "iavf_rxtx.h" > #include "iavf_generic_flow.h" > #include "rte_pmd_iavf.h" > +#include "iavf_ipsec_crypto.h" >=20 > /* devargs */ > #define IAVF_PROTO_XTR_ARG "proto_xtr" > @@ -71,6 +72,11 @@ static struct iavf_proto_xtr_ol iavf_proto_xtr_params[= ] > =3D { > [IAVF_PROTO_XTR_IP_OFFSET] =3D { > .param =3D { .name =3D "intel_pmd_dynflag_proto_xtr_ip_offset" }, > .ol_flag =3D &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask }, > + [IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID] =3D { > + .param =3D { > + .name =3D "intel_pmd_dynflag_proto_xtr_ipsec_crypto_said" }, > + .ol_flag =3D > + &rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask }, > }; >=20 > static int iavf_dev_configure(struct rte_eth_dev *dev); > @@ -922,6 +928,9 @@ iavf_dev_stop(struct rte_eth_dev *dev) > iavf_add_del_mc_addr_list(adapter, vf->mc_addrs, vf->mc_addrs_num, > false); >=20 > + /* free iAVF security device context all related resources */ > + iavf_security_ctx_destroy(adapter); > + > adapter->stopped =3D 1; > dev->data->dev_started =3D 0; >=20 > @@ -931,7 +940,9 @@ iavf_dev_stop(struct rte_eth_dev *dev) > static int > iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info > *dev_info) > { > - struct iavf_info *vf =3D > IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private); > + struct iavf_adapter *adapter =3D > + IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private); > + struct iavf_info *vf =3D &adapter->vf; >=20 > dev_info->max_rx_queues =3D IAVF_MAX_NUM_QUEUES_LV; > dev_info->max_tx_queues =3D IAVF_MAX_NUM_QUEUES_LV; > @@ -973,6 +984,11 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct > rte_eth_dev_info *dev_info) > if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC) > dev_info->rx_offload_capa |=3D RTE_ETH_RX_OFFLOAD_KEEP_CRC; >=20 > + if (iavf_ipsec_crypto_supported(adapter)) { > + dev_info->rx_offload_capa |=3D DEV_RX_OFFLOAD_SECURITY; > + dev_info->tx_offload_capa |=3D DEV_TX_OFFLOAD_SECURITY; > + } > + > dev_info->default_rxconf =3D (struct rte_eth_rxconf) { > .rx_free_thresh =3D IAVF_DEFAULT_RX_FREE_THRESH, > .rx_drop_en =3D 0, > @@ -1718,6 +1734,7 @@ iavf_lookup_proto_xtr_type(const char *flex_name) > { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW }, > { "tcp", IAVF_PROTO_XTR_TCP }, > { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET }, > + { "ipsec_crypto_said", IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID }, > }; > uint32_t i; >=20 > @@ -1726,8 +1743,8 @@ 
iavf_lookup_proto_xtr_type(const char *flex_name) > return xtr_type_map[i].type; > } >=20 > - PMD_DRV_LOG(ERR, "wrong proto_xtr type, " > - "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset"); > + PMD_DRV_LOG(ERR, "wrong proto_xtr type, it should be: " > + "vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset|ipsec_crypto_said"); >=20 > return -1; > } > @@ -2375,6 +2392,24 @@ iavf_dev_init(struct rte_eth_dev *eth_dev) > goto flow_init_err; > } >=20 > + /** Check if the IPsec Crypto offload is supported and create > + * security_ctx if it is. > + */ > + if (iavf_ipsec_crypto_supported(adapter)) { > + /* Initialize security_ctx only for primary process*/ > + ret =3D iavf_security_ctx_create(adapter); > + if (ret) { > + PMD_INIT_LOG(ERR, "failed to create ipsec crypto security > instance"); > + return ret; > + } > + > + ret =3D iavf_security_init(adapter); > + if (ret) { > + PMD_INIT_LOG(ERR, "failed to initialized ipsec crypto > resources"); > + return ret; > + } > + } > + > iavf_default_rss_disable(adapter); >=20 > return 0; > diff --git a/drivers/net/iavf/iavf_generic_flow.c > b/drivers/net/iavf/iavf_generic_flow.c > index 364904fa02..2befa125ac 100644 > --- a/drivers/net/iavf/iavf_generic_flow.c > +++ b/drivers/net/iavf/iavf_generic_flow.c > @@ -1766,6 +1766,7 @@ iavf_flow_init(struct iavf_adapter *ad) > TAILQ_INIT(&vf->flow_list); > TAILQ_INIT(&vf->rss_parser_list); > TAILQ_INIT(&vf->dist_parser_list); > + TAILQ_INIT(&vf->ipsec_crypto_parser_list); > rte_spinlock_init(&vf->flow_ops_lock); >=20 > RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) { > @@ -1840,6 +1841,9 @@ iavf_register_parser(struct iavf_flow_parser > *parser, > } else if (parser->engine->type =3D=3D IAVF_FLOW_ENGINE_FDIR) { > list =3D &vf->dist_parser_list; > TAILQ_INSERT_HEAD(list, parser_node, node); > + } else if (parser->engine->type =3D=3D IAVF_FLOW_ENGINE_IPSEC_CRYPTO) { > + list =3D &vf->ipsec_crypto_parser_list; > + TAILQ_INSERT_HEAD(list, parser_node, node); > } else { > return -EINVAL; > } > @@ -2149,6 +2153,13 @@ iavf_flow_process_filter(struct rte_eth_dev *dev, >=20 > *engine =3D iavf_parse_engine(ad, flow, &vf->dist_parser_list, pattern, > actions, error); > + if (*engine) > + return 0; > + > + *engine =3D iavf_parse_engine(ad, flow, &vf->ipsec_crypto_parser_list, > + pattern, actions, error); > + if (*engine) > + return 0; >=20 > if (!*engine) { > rte_flow_error_set(error, EINVAL, > @@ -2195,6 +2206,10 @@ iavf_flow_create(struct rte_eth_dev *dev, > return flow; > } >=20 > + /* Special case for inline crypto egress flows */ > + if (attr->egress && actions[0].type =3D=3D > RTE_FLOW_ACTION_TYPE_SECURITY) > + goto free_flow; > + > ret =3D iavf_flow_process_filter(dev, flow, attr, pattern, actions, > &engine, iavf_parse_engine_create, error); > if (ret < 0) { > diff --git a/drivers/net/iavf/iavf_generic_flow.h > b/drivers/net/iavf/iavf_generic_flow.h > index f2b54e1944..3681a96b31 100644 > --- a/drivers/net/iavf/iavf_generic_flow.h > +++ b/drivers/net/iavf/iavf_generic_flow.h > @@ -464,6 +464,7 @@ typedef int (*parse_pattern_action_t)(struct > iavf_adapter *ad, > /* engine types. 
*/ > enum iavf_flow_engine_type { > IAVF_FLOW_ENGINE_NONE =3D 0, > + IAVF_FLOW_ENGINE_IPSEC_CRYPTO, > IAVF_FLOW_ENGINE_FDIR, > IAVF_FLOW_ENGINE_HASH, > IAVF_FLOW_ENGINE_MAX, > @@ -477,6 +478,7 @@ enum iavf_flow_engine_type { > */ > enum iavf_flow_classification_stage { > IAVF_FLOW_STAGE_NONE =3D 0, > + IAVF_FLOW_STAGE_IPSEC_CRYPTO, > IAVF_FLOW_STAGE_RSS, > IAVF_FLOW_STAGE_DISTRIBUTOR, > IAVF_FLOW_STAGE_MAX, > diff --git a/drivers/net/iavf/iavf_ipsec_crypto.c > b/drivers/net/iavf/iavf_ipsec_crypto.c > new file mode 100644 > index 0000000000..633fedf860 > --- /dev/null > +++ b/drivers/net/iavf/iavf_ipsec_crypto.c > @@ -0,0 +1,1894 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2020 Intel Corporation > + */ > + > +#include > +#include > +#include > +#include > + > +#include "iavf.h" > +#include "iavf_rxtx.h" > +#include "iavf_log.h" > +#include "iavf_generic_flow.h" > + > +#include "iavf_ipsec_crypto.h" > +#include "iavf_ipsec_crypto_capabilities.h" > + > +/** > + * iAVF IPsec Crypto Security Context > + */ > +struct iavf_security_ctx { > + struct iavf_adapter *adapter; > + int pkt_md_offset; > + struct rte_cryptodev_capabilities *crypto_capabilities; > +}; > + > +/** > + * iAVF IPsec Crypto Security Session Parameters > + */ > +struct iavf_security_session { > + struct iavf_adapter *adapter; > + > + enum rte_security_ipsec_sa_mode mode; > + enum rte_security_ipsec_tunnel_type type; > + enum rte_security_ipsec_sa_direction direction; > + > + struct { > + uint32_t spi; /* Security Parameter Index */ > + uint32_t hw_idx; /* SA Index in hardware table */ > + } sa; > + > + struct { > + uint8_t enabled :1; > + union { > + uint64_t value; > + struct { > + uint32_t hi; > + uint32_t low; > + }; > + }; > + } esn; > + > + struct { > + uint8_t enabled :1; > + } udp_encap; > + > + size_t iv_sz; > + size_t icv_sz; > + size_t block_sz; > + > + struct iavf_ipsec_crypto_pkt_metadata pkt_metadata_template; > +}; > +/** > + * IV Length field in IPsec Tx Desc uses the following encoding: > + * > + * 0B - 0 > + * 4B - 1 > + * 8B - 2 > + * 16B - 3 > + * > + * but we also need the IV Length for TSO to correctly calculate the tot= al > + * header length so placing it in the upper 6-bits here for easier reter= ival. 
> + */ > +static inline uint8_t > +calc_ipsec_desc_iv_len_field(uint16_t iv_sz) > +{ > + uint8_t iv_length =3D IAVF_IPSEC_IV_LEN_NONE; > + > + switch (iv_sz) { > + case 4: > + iv_length =3D IAVF_IPSEC_IV_LEN_DW; > + break; > + case 8: > + iv_length =3D IAVF_IPSEC_IV_LEN_DDW; > + break; > + case 16: > + iv_length =3D IAVF_IPSEC_IV_LEN_QDW; > + break; > + } > + > + return (iv_sz << 2) | iv_length; > +} > + > +static unsigned int > +iavf_ipsec_crypto_session_size_get(void *device __rte_unused) > +{ > + return sizeof(struct iavf_security_session); > +} > + > +static const struct rte_cryptodev_symmetric_capability * > +get_capability(struct iavf_security_ctx *iavf_sctx, > + uint32_t algo, uint32_t type) > +{ > + const struct rte_cryptodev_capabilities *capability; > + int i =3D 0; > + > + capability =3D &iavf_sctx->crypto_capabilities[i]; > + > + while (capability->op !=3D RTE_CRYPTO_OP_TYPE_UNDEFINED) { > + if (capability->op =3D=3D RTE_CRYPTO_OP_TYPE_SYMMETRIC && > + capability->sym.xform_type =3D=3D type && > + capability->sym.cipher.algo =3D=3D algo) > + return &capability->sym; > + /** try next capability */ > + capability =3D &iavf_crypto_capabilities[i++]; > + } > + > + return NULL; > +} > + > +static const struct rte_cryptodev_symmetric_capability * > +get_auth_capability(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_auth_algorithm algo) > +{ > + return get_capability(iavf_sctx, algo, RTE_CRYPTO_SYM_XFORM_AUTH); > +} > + > +static const struct rte_cryptodev_symmetric_capability * > +get_cipher_capability(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_cipher_algorithm algo) > +{ > + return get_capability(iavf_sctx, algo, > RTE_CRYPTO_SYM_XFORM_CIPHER); > +} > +static const struct rte_cryptodev_symmetric_capability * > +get_aead_capability(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_aead_algorithm algo) > +{ > + return get_capability(iavf_sctx, algo, RTE_CRYPTO_SYM_XFORM_AEAD); > +} > + > +static uint16_t > +get_cipher_blocksize(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_cipher_algorithm algo) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_cipher_capability(iavf_sctx, algo); > + if (capability =3D=3D NULL) > + return 0; > + > + return capability->cipher.block_size; > +} > + > +static uint16_t > +get_aead_blocksize(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_aead_algorithm algo) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_aead_capability(iavf_sctx, algo); > + if (capability =3D=3D NULL) > + return 0; > + > + return capability->cipher.block_size; > +} > + > +static uint16_t > +get_auth_blocksize(struct iavf_security_ctx *iavf_sctx, > + enum rte_crypto_auth_algorithm algo) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_auth_capability(iavf_sctx, algo); > + if (capability =3D=3D NULL) > + return 0; > + > + return capability->auth.block_size; > +} > + > +static uint8_t > +calc_context_desc_cipherblock_sz(size_t len) > +{ > + switch (len) { > + case 8: > + return 0x2; > + case 16: > + return 0x3; > + default: > + return 0x0; > + } > +} > + > +static int > +valid_length(uint32_t len, uint32_t min, uint32_t max, uint32_t incremen= t) > +{ > + if (len < min || len > max) > + return false; > + > + if (increment =3D=3D 0) > + return true; > + > + if ((len - min) % increment) > + return false; > + > + /* make sure it fits in the key array */ > + if (len > VIRTCHNL_IPSEC_MAX_KEY_LEN) > + 
return false; > + > + return true; > +} > + > +static int > +valid_auth_xform(struct iavf_security_ctx *iavf_sctx, > + struct rte_crypto_auth_xform *auth) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_auth_capability(iavf_sctx, auth->algo); > + if (capability =3D=3D NULL) > + return false; > + > + /* verify key size */ > + if (!valid_length(auth->key.length, > + capability->auth.key_size.min, > + capability->auth.key_size.max, > + capability->aead.key_size.increment)) > + return false; > + > + return true; > +} > + > +static int > +valid_cipher_xform(struct iavf_security_ctx *iavf_sctx, > + struct rte_crypto_cipher_xform *cipher) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_cipher_capability(iavf_sctx, cipher->algo); > + if (capability =3D=3D NULL) > + return false; > + > + /* verify key size */ > + if (!valid_length(cipher->key.length, > + capability->cipher.key_size.min, > + capability->cipher.key_size.max, > + capability->cipher.key_size.increment)) > + return false; > + > + return true; > +} > + > +static int > +valid_aead_xform(struct iavf_security_ctx *iavf_sctx, > + struct rte_crypto_aead_xform *aead) > +{ > + const struct rte_cryptodev_symmetric_capability *capability; > + > + capability =3D get_aead_capability(iavf_sctx, aead->algo); > + if (capability =3D=3D NULL) > + return false; > + > + /* verify key size */ > + if (!valid_length(aead->key.length, > + capability->aead.key_size.min, > + capability->aead.key_size.max, > + capability->aead.key_size.increment)) > + return false; > + > + return true; > +} > + > +static int > +iavf_ipsec_crypto_session_validate_conf(struct iavf_security_ctx *iavf_s= ctx, > + struct rte_security_session_conf *conf) > +{ > + /** validate security action/protocol selection */ > + if (conf->action_type !=3D RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO || > + conf->protocol !=3D RTE_SECURITY_PROTOCOL_IPSEC) { > + PMD_DRV_LOG(ERR, "Invalid action / protocol specified"); > + return -EINVAL; > + } > + > + /** validate IPsec protocol selection */ > + if (conf->ipsec.proto !=3D RTE_SECURITY_IPSEC_SA_PROTO_ESP) { > + PMD_DRV_LOG(ERR, "Invalid IPsec protocol specified"); > + return -EINVAL; > + } > + > + /** validate selected options */ > + if (conf->ipsec.options.copy_dscp || > + conf->ipsec.options.copy_flabel || > + conf->ipsec.options.copy_df || > + conf->ipsec.options.dec_ttl || > + conf->ipsec.options.ecn || > + conf->ipsec.options.stats) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + > + /** > + * Validate crypto xforms parameters. > + * > + * AEAD transforms can be used for either inbound/outbound IPsec SAs, > + * for non-AEAD crypto transforms we explicitly only support > CIPHER/AUTH > + * for outbound and AUTH/CIPHER chained transforms for inbound IPsec. 
> + */ > + if (conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_AEAD) { > + if (!valid_aead_xform(iavf_sctx, &conf->crypto_xform->aead)) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + } else if (conf->ipsec.direction =3D=3D RTE_SECURITY_IPSEC_SA_DIR_EGRES= S > && > + conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_CIPHER > && > + conf->crypto_xform->next && > + conf->crypto_xform->next->type =3D=3D > RTE_CRYPTO_SYM_XFORM_AUTH) { > + if (!valid_cipher_xform(iavf_sctx, > + &conf->crypto_xform->cipher)) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + > + if (!valid_auth_xform(iavf_sctx, > + &conf->crypto_xform->next->auth)) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + } else if (conf->ipsec.direction =3D=3D RTE_SECURITY_IPSEC_SA_DIR_INGRE= SS > && > + conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_AUTH && > + conf->crypto_xform->next && > + conf->crypto_xform->next->type =3D=3D > RTE_CRYPTO_SYM_XFORM_CIPHER) { > + if (!valid_auth_xform(iavf_sctx, &conf->crypto_xform->auth)) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + > + if (!valid_cipher_xform(iavf_sctx, > + &conf->crypto_xform->next->cipher)) { > + PMD_DRV_LOG(ERR, "Invalid IPsec option specified"); > + return -EINVAL; > + } > + } > + > + return 0; > +} > + > +static void > +sa_add_set_aead_params(struct virtchnl_ipsec_crypto_cfg_item *cfg, > + struct rte_crypto_aead_xform *aead, uint32_t salt) > +{ > + cfg->crypto_type =3D VIRTCHNL_AEAD; > + > + switch (aead->algo) { > + case RTE_CRYPTO_AEAD_AES_CCM: > + cfg->algo_type =3D VIRTCHNL_AES_CCM; break; > + case RTE_CRYPTO_AEAD_AES_GCM: > + cfg->algo_type =3D VIRTCHNL_AES_GCM; break; > + case RTE_CRYPTO_AEAD_CHACHA20_POLY1305: > + cfg->algo_type =3D VIRTCHNL_CHACHA20_POLY1305; break; > + default: > + PMD_DRV_LOG(ERR, "Invalid AEAD parameters"); > + break; > + } > + > + cfg->key_len =3D aead->key.length; > + cfg->iv_len =3D sizeof(uint64_t); /* iv.length includes salt len */ > + cfg->digest_len =3D aead->digest_length; > + cfg->salt =3D salt; > + > + memcpy(cfg->key_data, aead->key.data, cfg->key_len); > +} > + > +static void > +sa_add_set_cipher_params(struct virtchnl_ipsec_crypto_cfg_item *cfg, > + struct rte_crypto_cipher_xform *cipher, uint32_t salt) > +{ > + cfg->crypto_type =3D VIRTCHNL_CIPHER; > + > + switch (cipher->algo) { > + case RTE_CRYPTO_CIPHER_AES_CBC: > + cfg->algo_type =3D VIRTCHNL_AES_CBC; break; > + case RTE_CRYPTO_CIPHER_3DES_CBC: > + cfg->algo_type =3D VIRTCHNL_3DES_CBC; break; > + case RTE_CRYPTO_CIPHER_NULL: > + cfg->algo_type =3D VIRTCHNL_CIPHER_NO_ALG; break; > + case RTE_CRYPTO_CIPHER_AES_CTR: > + cfg->algo_type =3D VIRTCHNL_AES_CTR; > + cfg->salt =3D salt; > + break; > + default: > + PMD_DRV_LOG(ERR, "Invalid cipher parameters"); > + break; > + } > + > + cfg->key_len =3D cipher->key.length; > + cfg->iv_len =3D cipher->iv.length; > + cfg->salt =3D salt; > + > + memcpy(cfg->key_data, cipher->key.data, cfg->key_len); > +} > + > +static void > +sa_add_set_auth_params(struct virtchnl_ipsec_crypto_cfg_item *cfg, > + struct rte_crypto_auth_xform *auth, uint32_t salt) > +{ > + cfg->crypto_type =3D VIRTCHNL_AUTH; > + > + switch (auth->algo) { > + case RTE_CRYPTO_AUTH_NULL: > + cfg->algo_type =3D VIRTCHNL_HASH_NO_ALG; break; > + case RTE_CRYPTO_AUTH_AES_CBC_MAC: > + cfg->algo_type =3D VIRTCHNL_AES_CBC_MAC; break; > + case RTE_CRYPTO_AUTH_AES_CMAC: > + cfg->algo_type =3D VIRTCHNL_AES_CMAC; break; > + 
case RTE_CRYPTO_AUTH_AES_XCBC_MAC: > + cfg->algo_type =3D VIRTCHNL_AES_XCBC_MAC; break; > + case RTE_CRYPTO_AUTH_MD5_HMAC: > + cfg->algo_type =3D VIRTCHNL_MD5_HMAC; break; > + case RTE_CRYPTO_AUTH_SHA1_HMAC: > + cfg->algo_type =3D VIRTCHNL_SHA1_HMAC; break; > + case RTE_CRYPTO_AUTH_SHA224_HMAC: > + cfg->algo_type =3D VIRTCHNL_SHA224_HMAC; break; > + case RTE_CRYPTO_AUTH_SHA256_HMAC: > + cfg->algo_type =3D VIRTCHNL_SHA256_HMAC; break; > + case RTE_CRYPTO_AUTH_SHA384_HMAC: > + cfg->algo_type =3D VIRTCHNL_SHA384_HMAC; break; > + case RTE_CRYPTO_AUTH_SHA512_HMAC: > + cfg->algo_type =3D VIRTCHNL_SHA512_HMAC; break; > + case RTE_CRYPTO_AUTH_AES_GMAC: > + cfg->algo_type =3D VIRTCHNL_AES_GMAC; > + cfg->salt =3D salt; > + break; > + default: > + PMD_DRV_LOG(ERR, "Invalid auth parameters"); > + break; > + } > + > + cfg->key_len =3D auth->key.length; > + /* special case for RTE_CRYPTO_AUTH_AES_GMAC */ > + if (auth->algo =3D=3D RTE_CRYPTO_AUTH_AES_GMAC) > + cfg->iv_len =3D sizeof(uint64_t); /* iv.length includes salt */ > + else > + cfg->iv_len =3D auth->iv.length; > + cfg->digest_len =3D auth->digest_length; > + > + memcpy(cfg->key_data, auth->key.data, cfg->key_len); > +} > + > +/** > + * Send SA add virtual channel request to Inline IPsec driver. > + * > + * Inline IPsec driver expects SPI and destination IP adderss to be in h= ost > + * order, but DPDK APIs are network order, therefore we need to do a hto= nl > + * conversion of these parameters. > + */ > +static uint32_t > +iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter, > + struct rte_security_session_conf *conf) > +{ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + struct virtchnl_ipsec_sa_cfg *sa_cfg; > + size_t request_len, response_len; > + > + int rc; > + > + request_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sa_cfg); > + > + request =3D rte_malloc("iavf-sad-add-request", request_len, 0); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sa_cfg_resp); > + response =3D rte_malloc("iavf-sad-add-response", response_len, 0); > + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_SA_CREATE; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /* set SA configuration params */ > + sa_cfg =3D (struct virtchnl_ipsec_sa_cfg *)(request + 1); > + > + sa_cfg->spi =3D conf->ipsec.spi; > + sa_cfg->virtchnl_protocol_type =3D VIRTCHNL_PROTO_ESP; > + sa_cfg->virtchnl_direction =3D > + conf->ipsec.direction =3D=3D RTE_SECURITY_IPSEC_SA_DIR_INGRESS ? 
> + VIRTCHNL_DIR_INGRESS : VIRTCHNL_DIR_EGRESS; > + > + if (conf->ipsec.options.esn) { > + sa_cfg->esn_enabled =3D 1; > + sa_cfg->esn_hi =3D conf->ipsec.esn.hi; > + sa_cfg->esn_low =3D conf->ipsec.esn.low; > + } > + > + if (conf->ipsec.options.udp_encap) > + sa_cfg->udp_encap_enabled =3D 1; > + > + /* Set outer IP params */ > + if (conf->ipsec.tunnel.type =3D=3D RTE_SECURITY_IPSEC_TUNNEL_IPV4) { > + sa_cfg->virtchnl_ip_type =3D VIRTCHNL_IPV4; > + > + *((uint32_t *)sa_cfg->dst_addr) =3D > + htonl(conf->ipsec.tunnel.ipv4.dst_ip.s_addr); > + } else { > + uint32_t *v6_dst_addr =3D > + conf->ipsec.tunnel.ipv6.dst_addr.s6_addr32; > + > + sa_cfg->virtchnl_ip_type =3D VIRTCHNL_IPV6; > + > + ((uint32_t *)sa_cfg->dst_addr)[0] =3D htonl(v6_dst_addr[0]); > + ((uint32_t *)sa_cfg->dst_addr)[1] =3D htonl(v6_dst_addr[1]); > + ((uint32_t *)sa_cfg->dst_addr)[2] =3D htonl(v6_dst_addr[2]); > + ((uint32_t *)sa_cfg->dst_addr)[3] =3D htonl(v6_dst_addr[3]); > + } > + > + /* set crypto params */ > + if (conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_AEAD) { > + sa_add_set_aead_params(&sa_cfg->crypto_cfg.items[0], > + &conf->crypto_xform->aead, conf->ipsec.salt); > + > + } else if (conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_CIPHER) > { > + sa_add_set_cipher_params(&sa_cfg->crypto_cfg.items[0], > + &conf->crypto_xform->cipher, conf->ipsec.salt); > + sa_add_set_auth_params(&sa_cfg->crypto_cfg.items[1], > + &conf->crypto_xform->next->auth, conf->ipsec.salt); > + > + } else if (conf->crypto_xform->type =3D=3D RTE_CRYPTO_SYM_XFORM_AUTH) { > + sa_add_set_auth_params(&sa_cfg->crypto_cfg.items[0], > + &conf->crypto_xform->auth, conf->ipsec.salt); > + if (conf->crypto_xform->auth.algo !=3D > RTE_CRYPTO_AUTH_AES_GMAC) > + sa_add_set_cipher_params(&sa_cfg->crypto_cfg.items[1], > + &conf->crypto_xform->next->cipher, conf->ipsec.salt); > + } > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response id */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id) > + rc =3D -EFAULT; > + else > + rc =3D response->ipsec_data.sa_cfg_resp->sa_handle; > +update_cleanup: > + rte_free(response); > + rte_free(request); > + > + return rc; > +} > + > +static void > +set_pkt_metadata_template(struct iavf_ipsec_crypto_pkt_metadata > *template, > + struct iavf_security_session *sess) > +{ > + template->sa_idx =3D sess->sa.hw_idx; > + > + if (sess->udp_encap.enabled) > + template->ol_flags =3D IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT; > + > + if (sess->esn.enabled) > + template->ol_flags =3D IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN; > + > + template->len_iv =3D calc_ipsec_desc_iv_len_field(sess->iv_sz); > + template->ctx_desc_ipsec_params =3D > + calc_context_desc_cipherblock_sz(sess->block_sz) | > + ((uint8_t)(sess->icv_sz >> 2) << 3); > +} > + > +static void > +set_session_parameter(struct iavf_security_ctx *iavf_sctx, > + struct iavf_security_session *sess, > + struct rte_security_session_conf *conf, uint32_t sa_idx) > +{ > + sess->adapter =3D iavf_sctx->adapter; > + > + sess->mode =3D conf->ipsec.mode; > + sess->direction =3D conf->ipsec.direction; > + > + if (sess->mode =3D=3D RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) > + sess->type =3D conf->ipsec.tunnel.type; > + > + sess->sa.spi =3D conf->ipsec.spi; > + sess->sa.hw_idx =3D sa_idx; > + > + if (conf->ipsec.options.esn) { > + sess->esn.enabled =3D 1; > + 
sess->esn.value = conf->ipsec.esn.value;
> +	}
> +
> +	if (conf->ipsec.options.udp_encap)
> +		sess->udp_encap.enabled = 1;
> +
> +	if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> +		sess->block_sz = get_aead_blocksize(iavf_sctx,
> +			conf->crypto_xform->aead.algo);
> +		sess->iv_sz = sizeof(uint64_t); /* iv.length includes salt */
> +		sess->icv_sz = conf->crypto_xform->aead.digest_length;
> +	} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> +		sess->block_sz = get_cipher_blocksize(iavf_sctx,
> +			conf->crypto_xform->cipher.algo);
> +		sess->iv_sz = conf->crypto_xform->cipher.iv.length;
> +		sess->icv_sz = conf->crypto_xform->next->auth.digest_length;
> +	} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> +		if (conf->crypto_xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {
> +			sess->block_sz = get_auth_blocksize(iavf_sctx,
> +				RTE_CRYPTO_SYM_XFORM_AUTH);

There is a warning due to an implicit conversion from
'enum rte_crypto_sym_xform_type' to 'enum rte_crypto_auth_algorithm'.
Will replace the above line with
(enum rte_crypto_auth_algorithm)RTE_CRYPTO_SYM_XFORM_AUTH);
during merge.

> +			sess->iv_sz = conf->crypto_xform->auth.iv.length;
> +			sess->icv_sz = conf->crypto_xform->auth.digest_length;
> +		} else {
> +			sess->block_sz = get_cipher_blocksize(iavf_sctx,
> +				conf->crypto_xform->next->cipher.algo);
> +			sess->iv_sz =
> +				conf->crypto_xform->next->cipher.iv.length;
> +			sess->icv_sz = conf->crypto_xform->auth.digest_length;
> +		}
> +	}
> +
> +	set_pkt_metadata_template(&sess->pkt_metadata_template, sess);
> +}
> +
> +/**
> + * Create IPsec Security Association for inline IPsec Crypto offload.
> + *
> + * 1. validate session configuration parameters
> + * 2. allocate session memory from mempool
> + * 3. add SA to hardware database
> + * 4. set session parameters
> + * 5. create packet metadata template for datapath
> + */
> +static int
> +iavf_ipsec_crypto_session_create(void *device,
> +	struct rte_security_session_conf *conf,
> +	struct rte_security_session *session,
> +	struct rte_mempool *mempool)
> +{
> +	struct rte_eth_dev *ethdev = device;
> +	struct iavf_adapter *adapter =
> +		IAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private);
> +	struct iavf_security_ctx *iavf_sctx = adapter->security_ctx;
> +	struct iavf_security_session *iavf_session = NULL;
> +	int sa_idx;
> +	int ret = 0;
> +
> +	/* validate that all SA parameters are valid for device */
> +	ret = iavf_ipsec_crypto_session_validate_conf(iavf_sctx, conf);
> +	if (ret)
> +		return ret;
> +
> +	/* allocate session context */
> +	if (rte_mempool_get(mempool, (void **)&iavf_session)) {
> +		PMD_DRV_LOG(ERR, "Cannot get object from sess mempool");
> +		return -ENOMEM;
> +	}
> +
> +	/* add SA to hardware database */
> +	sa_idx = iavf_ipsec_crypto_security_association_add(adapter, conf);
> +	if (sa_idx < 0) {
> +		PMD_DRV_LOG(ERR,
> +			"Failed to add SA (spi: %d, mode: %s, direction: %s)",
> +			conf->ipsec.spi,
> +			conf->ipsec.mode ==
> +				RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT ?
> +				"transport" : "tunnel",
> +			conf->ipsec.direction ==
> +				RTE_SECURITY_IPSEC_SA_DIR_INGRESS ?
> + "inbound" : "outbound"); > + > + rte_mempool_put(mempool, iavf_session); > + return -EFAULT; > + } > + > + /* save data plane required session parameters */ > + set_session_parameter(iavf_sctx, iavf_session, conf, sa_idx); > + > + /* save to security session private data */ > + set_sec_session_private_data(session, iavf_session); > + > + return 0; > +} > + > +/** > + * Check if valid ipsec crypto action. > + * SPI must be non-zero and SPI in session must match SPI value > + * passed into function. > + * > + * returns: 0 if invalid session or SPI value equal zero > + * returns: 1 if valid > + */ > +uint32_t > +iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev, > + const struct rte_security_session *session, uint32_t spi) > +{ > + struct iavf_adapter *adapter =3D > + IAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private); > + struct iavf_security_session *sess =3D session->sess_private_data; > + > + /* verify we have a valid session and that it belong to this adapter */ > + if (unlikely(sess =3D=3D NULL || sess->adapter !=3D adapter)) > + return false; > + > + /* SPI value must be non-zero */ > + if (spi =3D=3D 0) > + return false; > + /* Session SPI must patch flow SPI*/ > + else if (sess->sa.spi =3D=3D spi) { > + return true; > + /** > + * TODO: We should add a way of tracking valid hw SA indices to > + * make validation less brittle > + */ > + } > + > + return true; > +} > + > +/** > + * Send virtual channel security policy add request to IES driver. > + * > + * IES driver expects SPI and destination IP adderss to be in host > + * order, but DPDK APIs are network order, therefore we need to do a hto= nl > + * conversion of these parameters. > + */ > +int > +iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapt= er, > + uint32_t esp_spi, > + uint8_t is_v4, > + rte_be32_t v4_dst_addr, > + uint8_t *v6_dst_addr, > + uint8_t drop) > +{ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + size_t request_len, response_len; > + int rc =3D 0; > + > + request_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sp_cfg); > + request =3D rte_malloc("iavf-inbound-security-policy-add-request", > + request_len, 0); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_SP_CREATE; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /* ESP SPI */ > + request->ipsec_data.sp_cfg->spi =3D htonl(esp_spi); > + > + /* Destination IP */ > + if (is_v4) { > + request->ipsec_data.sp_cfg->table_id =3D > + VIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV4; > + request->ipsec_data.sp_cfg->dip[0] =3D htonl(v4_dst_addr); > + } else { > + request->ipsec_data.sp_cfg->table_id =3D > + VIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV6; > + request->ipsec_data.sp_cfg->dip[0] =3D > + htonl(((uint32_t *)v6_dst_addr)[0]); > + request->ipsec_data.sp_cfg->dip[1] =3D > + htonl(((uint32_t *)v6_dst_addr)[1]); > + request->ipsec_data.sp_cfg->dip[2] =3D > + htonl(((uint32_t *)v6_dst_addr)[2]); > + request->ipsec_data.sp_cfg->dip[3] =3D > + htonl(((uint32_t *)v6_dst_addr)[3]); > + } > + > + request->ipsec_data.sp_cfg->drop =3D drop; > + > + /** Traffic Class/Congestion Domain currently not support */ > + request->ipsec_data.sp_cfg->set_tc =3D 0; > + request->ipsec_data.sp_cfg->cgd =3D 0; > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sp_cfg_resp); > + response =3D rte_malloc("iavf-inbound-security-policy-add-response", > + response_len, 0); 
> + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id) > + rc =3D -EFAULT; > + else > + rc =3D response->ipsec_data.sp_cfg_resp->rule_id; > + > +update_cleanup: > + rte_free(request); > + rte_free(response); > + > + return rc; > +} > + > +static uint32_t > +iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter, > + struct iavf_security_session *sess) > +{ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + size_t request_len, response_len; > + int rc =3D 0; > + > + request_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sa_update); > + request =3D rte_malloc("iavf-sa-update-request", request_len, 0); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_resp); > + response =3D rte_malloc("iavf-sa-update-response", response_len, 0); > + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_SA_UPDATE; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /* set request params */ > + request->ipsec_data.sa_update->sa_index =3D sess->sa.hw_idx; > + request->ipsec_data.sa_update->esn_hi =3D sess->esn.hi; > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id) > + rc =3D -EFAULT; > + else > + rc =3D response->ipsec_data.ipsec_resp->resp; > + > +update_cleanup: > + rte_free(request); > + rte_free(response); > + > + return rc; > +} > + > +static int > +iavf_ipsec_crypto_session_update(void *device, > + struct rte_security_session *session, > + struct rte_security_session_conf *conf) > +{ > + struct iavf_adapter *adapter =3D NULL; > + struct iavf_security_session *iavf_sess =3D NULL; > + struct rte_eth_dev *eth_dev =3D (struct rte_eth_dev *)device; > + int rc =3D 0; > + > + adapter =3D > IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); > + iavf_sess =3D (struct iavf_security_session *)session->sess_private_dat= a; > + > + /* verify we have a valid session and that it belong to this adapter */ > + if (unlikely(iavf_sess =3D=3D NULL || iavf_sess->adapter !=3D adapter)) > + return -EINVAL; > + > + /* update esn hi 32-bits */ > + if (iavf_sess->esn.enabled && conf->ipsec.options.esn) { > + /** > + * Update ESN in hardware for inbound SA. Store in > + * iavf_security_session for outbound SA for use > + * in *iavf_ipsec_crypto_pkt_metadata_set* function. 
> + */ > + if (iavf_sess->direction =3D=3D RTE_SECURITY_IPSEC_SA_DIR_INGRESS) > + rc =3D iavf_ipsec_crypto_sa_update_esn(adapter, > + iavf_sess); > + else > + iavf_sess->esn.hi =3D conf->ipsec.esn.hi; > + } > + > + return rc; > +} > + > +static int > +iavf_ipsec_crypto_session_stats_get(void *device __rte_unused, > + struct rte_security_session *session __rte_unused, > + struct rte_security_stats *stats __rte_unused) > +{ > + return -EOPNOTSUPP; > +} > + > +int > +iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter, > + uint8_t is_v4, uint32_t flow_id) > +{ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + size_t request_len, response_len; > + int rc =3D 0; > + > + request_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sp_destroy); > + request =3D rte_malloc("iavf-sp-del-request", request_len, 0); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_resp); > + response =3D rte_malloc("iavf-sp-del-response", response_len, 0); > + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_SP_DESTROY; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /* set security policy params */ > + request->ipsec_data.sp_destroy->table_id =3D is_v4 ? > + VIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV4 : > + VIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV6; > + request->ipsec_data.sp_destroy->rule_id =3D flow_id; > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id) > + rc =3D -EFAULT; > + else > + return response->ipsec_data.ipsec_status->status; > + > +update_cleanup: > + rte_free(request); > + rte_free(response); > + > + return rc; > +} > + > +static uint32_t > +iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter, > + struct iavf_security_session *sess) > +{ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + size_t request_len, response_len; > + > + int rc =3D 0; > + > + request_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_sa_destroy); > + > + request =3D rte_malloc("iavf-sa-del-request", request_len, 0); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_resp); > + > + response =3D rte_malloc("iavf-sa-del-response", response_len, 0); > + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_SA_DESTROY; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /** > + * SA delete supports deletetion of 1-8 specified SA's or if the flag > + * field is zero, all SA's associated with VF will be deleted. 
> + */ > + if (sess) { > + request->ipsec_data.sa_destroy->flag =3D 0x1; > + request->ipsec_data.sa_destroy->sa_index[0] =3D sess->sa.hw_idx; > + } else { > + request->ipsec_data.sa_destroy->flag =3D 0x0; > + } > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id) > + rc =3D -EFAULT; > + > + /** > + * Delete status will be the same bitmask as sa_destroy request flag if > + * deletes successful > + */ > + if (request->ipsec_data.sa_destroy->flag !=3D > + response->ipsec_data.ipsec_status->status) > + rc =3D -EFAULT; > + > +update_cleanup: > + rte_free(response); > + rte_free(request); > + > + return rc; > +} > + > +static int > +iavf_ipsec_crypto_session_destroy(void *device, > + struct rte_security_session *session) > +{ > + struct iavf_adapter *adapter =3D NULL; > + struct iavf_security_session *iavf_sess =3D NULL; > + struct rte_eth_dev *eth_dev =3D (struct rte_eth_dev *)device; > + int ret; > + > + adapter =3D > IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); > + iavf_sess =3D (struct iavf_security_session *)session->sess_private_dat= a; > + > + /* verify we have a valid session and that it belong to this adapter */ > + if (unlikely(iavf_sess =3D=3D NULL || iavf_sess->adapter !=3D adapter)) > + return -EINVAL; > + > + ret =3D iavf_ipsec_crypto_sa_del(adapter, iavf_sess); > + rte_mempool_put(rte_mempool_from_obj(iavf_sess), (void *)iavf_sess); > + return ret; > +} > + > +/** > + * Get ESP trailer from packet as well as calculate the total ESP traile= r > + * length, which include padding, ESP trailer footer and the ICV > + */ > +static inline struct rte_esp_tail * > +iavf_ipsec_crypto_get_esp_trailer(struct rte_mbuf *m, > + struct iavf_security_session *s, uint16_t *esp_trailer_length) > +{ > + struct rte_esp_tail *esp_trailer; > + > + uint16_t length =3D sizeof(struct rte_esp_tail) + s->icv_sz; > + uint16_t offset =3D 0; > + > + /** > + * The ICV will not be present in TSO packets as this is appended by > + * hardware during segment generation > + */ > + if (m->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | > RTE_MBUF_F_TX_UDP_SEG)) > + length -=3D s->icv_sz; > + > + *esp_trailer_length =3D length; > + > + /** > + * Calculate offset in packet to ESP trailer header, this should be > + * total packet length less the size of the ESP trailer plus the ICV > + * length if it is present > + */ > + offset =3D rte_pktmbuf_pkt_len(m) - length; > + > + if (m->nb_segs > 1) { > + /* find segment which esp trailer is located */ > + while (m->data_len < offset) { > + offset -=3D m->data_len; > + m =3D m->next; > + } > + } > + > + esp_trailer =3D rte_pktmbuf_mtod_offset(m, struct rte_esp_tail *, offse= t); > + > + *esp_trailer_length +=3D esp_trailer->pad_len; > + > + return esp_trailer; > +} > + > +static inline uint16_t > +iavf_ipsec_crypto_compute_l4_payload_length(struct rte_mbuf *m, > + struct iavf_security_session *s, uint16_t esp_tlen) > +{ > + uint16_t ol2_len =3D m->l2_len; /* MAC + VLAN */ > + uint16_t ol3_len =3D 0; /* ipv4/6 + ext hdrs */ > + uint16_t ol4_len =3D 0; /* UDP NATT */ > + uint16_t l3_len =3D 0; /* IPv4/6 + ext hdrs */ > + uint16_t l4_len =3D 0; /* TCP/UDP/STCP hdrs */ > + uint16_t esp_hlen =3D sizeof(struct rte_esp_hdr) + s->iv_sz; > + > + if (s->mode =3D=3D 
RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) > + ol3_len =3D m->outer_l3_len; > + /**< > + * application provided l3len assumed to include length of > + * ipv4/6 hdr + ext hdrs > + */ > + > + if (s->udp_encap.enabled) > + ol4_len =3D sizeof(struct rte_udp_hdr); > + > + l3_len =3D m->l3_len; > + l4_len =3D m->l4_len; > + > + return rte_pktmbuf_pkt_len(m) - (ol2_len + ol3_len + ol4_len + > + esp_hlen + l3_len + l4_len + esp_tlen); > +} > + > +static int > +iavf_ipsec_crypto_pkt_metadata_set(void *device, > + struct rte_security_session *session, > + struct rte_mbuf *m, void *params) > +{ > + struct rte_eth_dev *ethdev =3D device; > + struct iavf_adapter *adapter =3D > + IAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private); > + struct iavf_security_ctx *iavf_sctx =3D adapter->security_ctx; > + struct iavf_security_session *iavf_sess =3D session->sess_private_data; > + struct iavf_ipsec_crypto_pkt_metadata *md; > + struct rte_esp_tail *esp_tail; > + uint64_t *sqn =3D params; > + uint16_t esp_trailer_length; > + > + /* Check we have valid session and is associated with this device */ > + if (unlikely(iavf_sess =3D=3D NULL || iavf_sess->adapter !=3D adapter)) > + return -EINVAL; > + > + /* Get dynamic metadata location from mbuf */ > + md =3D RTE_MBUF_DYNFIELD(m, iavf_sctx->pkt_md_offset, > + struct iavf_ipsec_crypto_pkt_metadata *); > + > + /* Set immutatable metadata values from session template */ > + memcpy(md, &iavf_sess->pkt_metadata_template, > + sizeof(struct iavf_ipsec_crypto_pkt_metadata)); > + > + esp_tail =3D iavf_ipsec_crypto_get_esp_trailer(m, iavf_sess, > + &esp_trailer_length); > + > + /* Set per packet mutable metadata values */ > + md->esp_trailer_len =3D esp_trailer_length; > + md->l4_payload_len =3D iavf_ipsec_crypto_compute_l4_payload_length(m, > + iavf_sess, esp_trailer_length); > + md->next_proto =3D esp_tail->next_proto; > + > + /* If Extended SN in use set the upper 32-bits in metadata */ > + if (iavf_sess->esn.enabled && sqn !=3D NULL) > + md->esn =3D (uint32_t)(*sqn >> 32); > + > + return 0; > +} > + > +static int > +iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter, > + struct virtchnl_ipsec_cap *capability) > +{ > + /* Perform pf-vf comms */ > + struct inline_ipsec_msg *request =3D NULL, *response =3D NULL; > + size_t request_len, response_len; > + int rc; > + > + request_len =3D sizeof(struct inline_ipsec_msg); > + > + request =3D rte_malloc("iavf-device-capability-request", request_len, 0= ); > + if (request =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + response_len =3D sizeof(struct inline_ipsec_msg) + > + sizeof(struct virtchnl_ipsec_cap); > + response =3D rte_malloc("iavf-device-capability-response", > + response_len, 0); > + if (response =3D=3D NULL) { > + rc =3D -ENOMEM; > + goto update_cleanup; > + } > + > + /* set msg header params */ > + request->ipsec_opcode =3D INLINE_IPSEC_OP_GET_CAP; > + request->req_id =3D (uint16_t)0xDEADBEEF; > + > + /* send virtual channel request to add SA to hardware database */ > + rc =3D iavf_ipsec_crypto_request(adapter, > + (uint8_t *)request, request_len, > + (uint8_t *)response, response_len); > + if (rc) > + goto update_cleanup; > + > + /* verify response id */ > + if (response->ipsec_opcode !=3D request->ipsec_opcode || > + response->req_id !=3D request->req_id){ > + rc =3D -EFAULT; > + goto update_cleanup; > + } > + memcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability))= ; > + > +update_cleanup: > + rte_free(response); > + rte_free(request); > + > + return rc; > +} 
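
Note: the auth/cipher/aead map tables that follow are indexed directly by
the virtchnl algo_type reported by the PF. A defensive bounds check along
these lines (purely illustrative, not part of the patch) would guard
against an out-of-range value:

	if (acap->algo_type >= RTE_DIM(auth_maptbl)) {
		PMD_DRV_LOG(ERR, "unsupported auth algorithm %u",
			acap->algo_type);
		return; /* skip unknown algorithm */
	}
	capability->auth.algo = auth_maptbl[acap->algo_type];
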
> + > +enum rte_crypto_auth_algorithm auth_maptbl[] =3D { > + /* Hash Algorithm */ > + [VIRTCHNL_HASH_NO_ALG] =3D RTE_CRYPTO_AUTH_NULL, > + [VIRTCHNL_AES_CBC_MAC] =3D RTE_CRYPTO_AUTH_AES_CBC_MAC, > + [VIRTCHNL_AES_CMAC] =3D RTE_CRYPTO_AUTH_AES_CMAC, > + [VIRTCHNL_AES_GMAC] =3D RTE_CRYPTO_AUTH_AES_GMAC, > + [VIRTCHNL_AES_XCBC_MAC] =3D RTE_CRYPTO_AUTH_AES_XCBC_MAC, > + [VIRTCHNL_MD5_HMAC] =3D RTE_CRYPTO_AUTH_MD5_HMAC, > + [VIRTCHNL_SHA1_HMAC] =3D RTE_CRYPTO_AUTH_SHA1_HMAC, > + [VIRTCHNL_SHA224_HMAC] =3D RTE_CRYPTO_AUTH_SHA224_HMAC, > + [VIRTCHNL_SHA256_HMAC] =3D RTE_CRYPTO_AUTH_SHA256_HMAC, > + [VIRTCHNL_SHA384_HMAC] =3D RTE_CRYPTO_AUTH_SHA384_HMAC, > + [VIRTCHNL_SHA512_HMAC] =3D RTE_CRYPTO_AUTH_SHA512_HMAC, > + [VIRTCHNL_SHA3_224_HMAC] =3D RTE_CRYPTO_AUTH_SHA3_224_HMAC, > + [VIRTCHNL_SHA3_256_HMAC] =3D RTE_CRYPTO_AUTH_SHA3_256_HMAC, > + [VIRTCHNL_SHA3_384_HMAC] =3D RTE_CRYPTO_AUTH_SHA3_384_HMAC, > + [VIRTCHNL_SHA3_512_HMAC] =3D RTE_CRYPTO_AUTH_SHA3_512_HMAC, > +}; > + > +static void > +update_auth_capabilities(struct rte_cryptodev_capabilities *scap, > + struct virtchnl_algo_cap *acap) > +{ > + struct rte_cryptodev_symmetric_capability *capability =3D &scap->sym; > + > + scap->op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC; > + > + capability->xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH; > + > + capability->auth.algo =3D auth_maptbl[acap->algo_type]; > + capability->auth.block_size =3D acap->block_size; > + > + capability->auth.key_size.min =3D acap->min_key_size; > + capability->auth.key_size.max =3D acap->max_key_size; > + capability->auth.key_size.increment =3D acap->inc_key_size; > + > + capability->auth.digest_size.min =3D acap->min_digest_size; > + capability->auth.digest_size.max =3D acap->max_digest_size; > + capability->auth.digest_size.increment =3D acap->inc_digest_size; > +} > + > +enum rte_crypto_cipher_algorithm cipher_maptbl[] =3D { > + /* Cipher Algorithm */ > + [VIRTCHNL_CIPHER_NO_ALG] =3D RTE_CRYPTO_CIPHER_NULL, > + [VIRTCHNL_3DES_CBC] =3D RTE_CRYPTO_CIPHER_3DES_CBC, > + [VIRTCHNL_AES_CBC] =3D RTE_CRYPTO_CIPHER_AES_CBC, > + [VIRTCHNL_AES_CTR] =3D RTE_CRYPTO_CIPHER_AES_CTR, > +}; > + > +static void > +update_cipher_capabilities(struct rte_cryptodev_capabilities *scap, > + struct virtchnl_algo_cap *acap) > +{ > + struct rte_cryptodev_symmetric_capability *capability =3D &scap->sym; > + > + scap->op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC; > + > + capability->xform_type =3D RTE_CRYPTO_SYM_XFORM_CIPHER; > + > + capability->cipher.algo =3D cipher_maptbl[acap->algo_type]; > + > + capability->cipher.block_size =3D acap->block_size; > + > + capability->cipher.key_size.min =3D acap->min_key_size; > + capability->cipher.key_size.max =3D acap->max_key_size; > + capability->cipher.key_size.increment =3D acap->inc_key_size; > + > + capability->cipher.iv_size.min =3D acap->min_iv_size; > + capability->cipher.iv_size.max =3D acap->max_iv_size; > + capability->cipher.iv_size.increment =3D acap->inc_iv_size; > +} > + > +enum rte_crypto_aead_algorithm aead_maptbl[] =3D { > + /* AEAD Algorithm */ > + [VIRTCHNL_AES_CCM] =3D RTE_CRYPTO_AEAD_AES_CCM, > + [VIRTCHNL_AES_GCM] =3D RTE_CRYPTO_AEAD_AES_GCM, > + [VIRTCHNL_CHACHA20_POLY1305] =3D > RTE_CRYPTO_AEAD_CHACHA20_POLY1305, > +}; > + > +static void > +update_aead_capabilities(struct rte_cryptodev_capabilities *scap, > + struct virtchnl_algo_cap *acap) > +{ > + struct rte_cryptodev_symmetric_capability *capability =3D &scap->sym; > + > + scap->op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC; > + > + capability->xform_type =3D RTE_CRYPTO_SYM_XFORM_AEAD; > + > + 
capability->aead.algo =3D aead_maptbl[acap->algo_type]; > + > + capability->aead.block_size =3D acap->block_size; > + > + capability->aead.key_size.min =3D acap->min_key_size; > + capability->aead.key_size.max =3D acap->max_key_size; > + capability->aead.key_size.increment =3D acap->inc_key_size; > + > + capability->aead.aad_size.min =3D acap->min_aad_size; > + capability->aead.aad_size.max =3D acap->max_aad_size; > + capability->aead.aad_size.increment =3D acap->inc_aad_size; > + > + capability->aead.iv_size.min =3D acap->min_iv_size; > + capability->aead.iv_size.max =3D acap->max_iv_size; > + capability->aead.iv_size.increment =3D acap->inc_iv_size; > + > + capability->aead.digest_size.min =3D acap->min_digest_size; > + capability->aead.digest_size.max =3D acap->max_digest_size; > + capability->aead.digest_size.increment =3D acap->inc_digest_size; > +} > + > +/** > + * Dynamically set crypto capabilities based on virtchannel IPsec > + * capabilities structure. > + */ > +int > +iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx > + *iavf_sctx, struct virtchnl_ipsec_cap *vch_cap) > +{ > + struct rte_cryptodev_capabilities *capabilities; > + int i, j, number_of_capabilities =3D 0, ci =3D 0; > + > + /* Count the total number of crypto algorithms supported */ > + for (i =3D 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++) > + number_of_capabilities +=3D vch_cap->cap[i].algo_cap_num; > + > + /** > + * Allocate cryptodev capabilities structure for > + * *number_of_capabilities* items plus one item to null terminate the > + * array > + */ > + capabilities =3D rte_zmalloc("crypto_cap", > + sizeof(struct rte_cryptodev_capabilities) * > + (number_of_capabilities + 1), 0); > + capabilities[number_of_capabilities].op =3D > RTE_CRYPTO_OP_TYPE_UNDEFINED; > + > + /** > + * Iterate over each virtchl crypto capability by crypto type and > + * algorithm. 
> + */ > + for (i =3D 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++) { > + for (j =3D 0; j < vch_cap->cap[i].algo_cap_num; j++, ci++) { > + switch (vch_cap->cap[i].crypto_type) { > + case VIRTCHNL_AUTH: > + update_auth_capabilities(&capabilities[ci], > + &vch_cap->cap[i].algo_cap_list[j]); > + break; > + case VIRTCHNL_CIPHER: > + update_cipher_capabilities(&capabilities[ci], > + &vch_cap->cap[i].algo_cap_list[j]); > + break; > + case VIRTCHNL_AEAD: > + update_aead_capabilities(&capabilities[ci], > + &vch_cap->cap[i].algo_cap_list[j]); > + break; > + default: > + capabilities[ci].op =3D > + RTE_CRYPTO_OP_TYPE_UNDEFINED; > + break; > + } > + } > + } > + > + iavf_sctx->crypto_capabilities =3D capabilities; > + return 0; > +} > + > +/** > + * Get security capabilities for device > + */ > +static const struct rte_security_capability * > +iavf_ipsec_crypto_capabilities_get(void *device) > +{ > + struct rte_eth_dev *eth_dev =3D (struct rte_eth_dev *)device; > + struct iavf_adapter *adapter =3D > + IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private); > + struct iavf_security_ctx *iavf_sctx =3D adapter->security_ctx; > + unsigned int i; > + > + static struct rte_security_capability iavf_security_capabilities[] =3D = { > + { /* IPsec Inline Crypto ESP Tunnel Egress */ > + .action =3D RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, > + .protocol =3D RTE_SECURITY_PROTOCOL_IPSEC, > + .ipsec =3D { > + .proto =3D RTE_SECURITY_IPSEC_SA_PROTO_ESP, > + .mode =3D RTE_SECURITY_IPSEC_SA_MODE_TUNNEL, > + .direction =3D RTE_SECURITY_IPSEC_SA_DIR_EGRESS, > + .options =3D { .udp_encap =3D 1, > + .stats =3D 1, .esn =3D 1 }, > + }, > + .ol_flags =3D RTE_SECURITY_TX_OLOAD_NEED_MDATA > + }, > + { /* IPsec Inline Crypto ESP Tunnel Ingress */ > + .action =3D RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, > + .protocol =3D RTE_SECURITY_PROTOCOL_IPSEC, > + .ipsec =3D { > + .proto =3D RTE_SECURITY_IPSEC_SA_PROTO_ESP, > + .mode =3D RTE_SECURITY_IPSEC_SA_MODE_TUNNEL, > + .direction =3D RTE_SECURITY_IPSEC_SA_DIR_INGRESS, > + .options =3D { .udp_encap =3D 1, > + .stats =3D 1, .esn =3D 1 }, > + }, > + .ol_flags =3D 0 > + }, > + { /* IPsec Inline Crypto ESP Transport Egress */ > + .action =3D RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, > + .protocol =3D RTE_SECURITY_PROTOCOL_IPSEC, > + .ipsec =3D { > + .proto =3D RTE_SECURITY_IPSEC_SA_PROTO_ESP, > + .mode =3D RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT, > + .direction =3D RTE_SECURITY_IPSEC_SA_DIR_EGRESS, > + .options =3D { .udp_encap =3D 1, .stats =3D 1, > + .esn =3D 1 }, > + }, > + .ol_flags =3D RTE_SECURITY_TX_OLOAD_NEED_MDATA > + }, > + { /* IPsec Inline Crypto ESP Transport Ingress */ > + .action =3D RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, > + .protocol =3D RTE_SECURITY_PROTOCOL_IPSEC, > + .ipsec =3D { > + .proto =3D RTE_SECURITY_IPSEC_SA_PROTO_ESP, > + .mode =3D RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT, > + .direction =3D RTE_SECURITY_IPSEC_SA_DIR_INGRESS, > + .options =3D { .udp_encap =3D 1, .stats =3D 1, > + .esn =3D 1 } > + }, > + .ol_flags =3D 0 > + }, > + { > + .action =3D RTE_SECURITY_ACTION_TYPE_NONE > + } > + }; > + > + /** > + * Update the security capabilities struct with the runtime discovered > + * crypto capabilities, except for last element of the array which is > + * the null terminatation > + */ > + for (i =3D 0; i < ((sizeof(iavf_security_capabilities) / > + sizeof(iavf_security_capabilities[0])) - 1); i++) { > + iavf_security_capabilities[i].crypto_capabilities =3D > + iavf_sctx->crypto_capabilities; > + } > + > + return iavf_security_capabilities; > +} > + > +static struct 
rte_security_ops iavf_ipsec_crypto_ops = {
> +    .session_get_size = iavf_ipsec_crypto_session_size_get,
> +    .session_create = iavf_ipsec_crypto_session_create,
> +    .session_update = iavf_ipsec_crypto_session_update,
> +    .session_stats_get = iavf_ipsec_crypto_session_stats_get,
> +    .session_destroy = iavf_ipsec_crypto_session_destroy,
> +    .set_pkt_metadata = iavf_ipsec_crypto_pkt_metadata_set,
> +    .get_userdata = NULL,
> +    .capabilities_get = iavf_ipsec_crypto_capabilities_get,
> +};
> +
> +int
> +iavf_security_ctx_create(struct iavf_adapter *adapter)
> +{
> +    struct rte_security_ctx *sctx;
> +
> +    sctx = rte_malloc("security_ctx", sizeof(struct rte_security_ctx), 0);
> +    if (sctx == NULL)
> +        return -ENOMEM;
> +
> +    sctx->device = adapter->vf.eth_dev;
> +    sctx->ops = &iavf_ipsec_crypto_ops;
> +    sctx->sess_cnt = 0;
> +
> +    adapter->vf.eth_dev->security_ctx = sctx;
> +
> +    if (adapter->security_ctx == NULL) {
> +        adapter->security_ctx = rte_zmalloc("iavf_security_ctx",
> +                sizeof(struct iavf_security_ctx), 0);
> +        if (adapter->security_ctx == NULL) {
> +            adapter->vf.eth_dev->security_ctx = NULL;
> +            rte_free(sctx);
> +            return -ENOMEM;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +int
> +iavf_security_init(struct iavf_adapter *adapter)
> +{
> +    struct iavf_security_ctx *iavf_sctx = adapter->security_ctx;
> +    struct rte_mbuf_dynfield pkt_md_dynfield = {
> +        .name = "iavf_ipsec_crypto_pkt_metadata",
> +        .size = sizeof(struct iavf_ipsec_crypto_pkt_metadata),
> +        .align = __alignof__(struct iavf_ipsec_crypto_pkt_metadata)
> +    };
> +    struct virtchnl_ipsec_cap capabilities;
> +    int rc;
> +
> +    iavf_sctx->adapter = adapter;
> +
> +    iavf_sctx->pkt_md_offset = rte_mbuf_dynfield_register(&pkt_md_dynfield);
> +    if (iavf_sctx->pkt_md_offset < 0)
> +        return iavf_sctx->pkt_md_offset;
> +
> +    /* Get device capabilities from Inline IPsec driver over PF-VF comms */
> +    rc = iavf_ipsec_crypto_device_capabilities_get(adapter, &capabilities);
> +    if (rc)
> +        return rc;
> +
> +    return iavf_ipsec_crypto_set_security_capabililites(iavf_sctx,
> +            &capabilities);
> +}
> +
> +int
> +iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter)
> +{
> +    struct iavf_security_ctx *iavf_sctx = adapter->security_ctx;
> +
> +    return iavf_sctx->pkt_md_offset;
> +}
> +
> +int
> +iavf_security_ctx_destroy(struct iavf_adapter *adapter)
> +{
> +    struct rte_security_ctx *sctx = adapter->vf.eth_dev->security_ctx;
> +    struct iavf_security_ctx *iavf_sctx = adapter->security_ctx;
> +
> +    if (iavf_sctx == NULL)
> +        return -ENODEV;
> +
> +    /* TODO: Add resources cleanup */
> +
> +    /* free and reset security data structures */
> +    rte_free(iavf_sctx);
> +    rte_free(sctx);
> +
> +    adapter->security_ctx = NULL;
> +    adapter->vf.eth_dev->security_ctx = NULL;
> +
> +    return 0;
> +}
> +
> +int
> +iavf_ipsec_crypto_supported(struct iavf_adapter *adapter)
> +{
> +    struct virtchnl_vf_resource *resources = adapter->vf.vf_res;
> +
> +    /** Capability check for IPsec Crypto */
> +    if (resources && (resources->vf_cap_flags &
> +            VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO))
> +        return true;
> +
> +    return false;
> +}
> +
> +#define IAVF_IPSEC_INSET_ESP (\
> +    IAVF_INSET_ESP_SPI)
> +
> +#define IAVF_IPSEC_INSET_AH (\
> +    IAVF_INSET_AH_SPI)
> +
> +#define IAVF_IPSEC_INSET_IPV4_NATT_ESP (\
> +    IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
> +    IAVF_INSET_ESP_SPI)
> +
> +#define IAVF_IPSEC_INSET_IPV6_NATT_ESP (\
> +    IAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \
> +    IAVF_INSET_ESP_SPI)
> +
> +enum iavf_ipsec_flow_pt_type {
> +    IAVF_PATTERN_ESP = 1,
> +    IAVF_PATTERN_AH,
> +    IAVF_PATTERN_UDP_ESP,
>
+}; > +enum iavf_ipsec_flow_pt_ip_ver { > + IAVF_PATTERN_IPV4 =3D 1, > + IAVF_PATTERN_IPV6, > +}; > + > +#define IAVF_PATTERN(t, ipt) ((void *)((t) | ((ipt) << 4))) > +#define IAVF_PATTERN_TYPE(pt) ((pt) & 0x0F) > +#define IAVF_PATTERN_IP_V(pt) ((pt) >> 4) > + > +static struct iavf_pattern_match_item iavf_ipsec_flow_pattern[] =3D { > + {iavf_pattern_eth_ipv4_esp, IAVF_IPSEC_INSET_ESP, > + IAVF_PATTERN(IAVF_PATTERN_ESP, IAVF_PATTERN_IPV4)}, > + {iavf_pattern_eth_ipv6_esp, IAVF_IPSEC_INSET_ESP, > + IAVF_PATTERN(IAVF_PATTERN_ESP, IAVF_PATTERN_IPV6)}, > + {iavf_pattern_eth_ipv4_ah, IAVF_IPSEC_INSET_AH, > + IAVF_PATTERN(IAVF_PATTERN_AH, IAVF_PATTERN_IPV4)}, > + {iavf_pattern_eth_ipv6_ah, IAVF_IPSEC_INSET_AH, > + IAVF_PATTERN(IAVF_PATTERN_AH, IAVF_PATTERN_IPV6)}, > + {iavf_pattern_eth_ipv4_udp_esp, IAVF_IPSEC_INSET_IPV4_NATT_ESP, > + IAVF_PATTERN(IAVF_PATTERN_UDP_ESP, > IAVF_PATTERN_IPV4)}, > + {iavf_pattern_eth_ipv6_udp_esp, IAVF_IPSEC_INSET_IPV6_NATT_ESP, > + IAVF_PATTERN(IAVF_PATTERN_UDP_ESP, > IAVF_PATTERN_IPV6)}, > +}; > + > +struct iavf_ipsec_flow_item { > + uint64_t id; > + uint8_t is_ipv4; > + uint32_t spi; > + struct rte_ether_hdr eth_hdr; > + union { > + struct rte_ipv4_hdr ipv4_hdr; > + struct rte_ipv6_hdr ipv6_hdr; > + }; > + struct rte_udp_hdr udp_hdr; > +}; > + > +static void > +parse_eth_item(const struct rte_flow_item_eth *item, > + struct rte_ether_hdr *eth) > +{ > + memcpy(eth->src_addr.addr_bytes, > + item->src.addr_bytes, sizeof(eth->src_addr)); > + memcpy(eth->dst_addr.addr_bytes, > + item->dst.addr_bytes, sizeof(eth->dst_addr)); > +} > + > +static void > +parse_ipv4_item(const struct rte_flow_item_ipv4 *item, > + struct rte_ipv4_hdr *ipv4) > +{ > + ipv4->src_addr =3D item->hdr.src_addr; > + ipv4->dst_addr =3D item->hdr.dst_addr; > +} > + > +static void > +parse_ipv6_item(const struct rte_flow_item_ipv6 *item, > + struct rte_ipv6_hdr *ipv6) > +{ > + memcpy(ipv6->src_addr, item->hdr.src_addr, 16); > + memcpy(ipv6->dst_addr, item->hdr.dst_addr, 16); > +} > + > +static void > +parse_udp_item(const struct rte_flow_item_udp *item, struct rte_udp_hdr > *udp) > +{ > + udp->dst_port =3D item->hdr.dst_port; > + udp->src_port =3D item->hdr.src_port; > +} > + > +static int > +has_security_action(const struct rte_flow_action actions[], > + const void **session) > +{ > + /* only {SECURITY; END} supported */ > + if (actions[0].type =3D=3D RTE_FLOW_ACTION_TYPE_SECURITY && > + actions[1].type =3D=3D RTE_FLOW_ACTION_TYPE_END) { > + *session =3D actions[0].conf; > + return true; > + } > + return false; > +} > + > +static struct iavf_ipsec_flow_item * > +iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + uint32_t type) > +{ > + const void *session; > + struct iavf_ipsec_flow_item > + *ipsec_flow =3D rte_malloc("security-flow-rule", > + sizeof(struct iavf_ipsec_flow_item), 0); > + enum iavf_ipsec_flow_pt_type p_type =3D IAVF_PATTERN_TYPE(type); > + enum iavf_ipsec_flow_pt_ip_ver p_ip_type =3D IAVF_PATTERN_IP_V(type); > + > + if (ipsec_flow =3D=3D NULL) > + return NULL; > + > + ipsec_flow->is_ipv4 =3D (p_ip_type =3D=3D IAVF_PATTERN_IPV4); > + > + if (pattern[0].spec) > + parse_eth_item((const struct rte_flow_item_eth *) > + pattern[0].spec, &ipsec_flow->eth_hdr); > + > + switch (p_type) { > + case IAVF_PATTERN_ESP: > + if (ipsec_flow->is_ipv4) { > + parse_ipv4_item((const struct rte_flow_item_ipv4 *) > + pattern[1].spec, > + &ipsec_flow->ipv4_hdr); > + } else { > + parse_ipv6_item((const struct 
rte_flow_item_ipv6 *) > + pattern[1].spec, > + &ipsec_flow->ipv6_hdr); > + } > + ipsec_flow->spi =3D > + ((const struct rte_flow_item_esp *) > + pattern[2].spec)->hdr.spi; > + break; > + case IAVF_PATTERN_AH: > + if (ipsec_flow->is_ipv4) { > + parse_ipv4_item((const struct rte_flow_item_ipv4 *) > + pattern[1].spec, > + &ipsec_flow->ipv4_hdr); > + } else { > + parse_ipv6_item((const struct rte_flow_item_ipv6 *) > + pattern[1].spec, > + &ipsec_flow->ipv6_hdr); > + } > + ipsec_flow->spi =3D > + ((const struct rte_flow_item_ah *) > + pattern[2].spec)->spi; > + break; > + case IAVF_PATTERN_UDP_ESP: > + if (ipsec_flow->is_ipv4) { > + parse_ipv4_item((const struct rte_flow_item_ipv4 *) > + pattern[1].spec, > + &ipsec_flow->ipv4_hdr); > + } else { > + parse_ipv6_item((const struct rte_flow_item_ipv6 *) > + pattern[1].spec, > + &ipsec_flow->ipv6_hdr); > + } > + parse_udp_item((const struct rte_flow_item_udp *) > + pattern[2].spec, > + &ipsec_flow->udp_hdr); > + ipsec_flow->spi =3D > + ((const struct rte_flow_item_esp *) > + pattern[3].spec)->hdr.spi; > + break; > + default: > + goto flow_cleanup; > + } > + > + if (!has_security_action(actions, &session)) > + goto flow_cleanup; > + > + if (!iavf_ipsec_crypto_action_valid(ethdev, session, > + ipsec_flow->spi)) > + goto flow_cleanup; > + > + return ipsec_flow; > + > +flow_cleanup: > + rte_free(ipsec_flow); > + return NULL; > +} > + > + > +static struct iavf_flow_parser iavf_ipsec_flow_parser; > + > +static int > +iavf_ipsec_flow_init(struct iavf_adapter *ad) > +{ > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(ad); > + struct iavf_flow_parser *parser; > + > + if (!vf->vf_res) > + return -EINVAL; > + > + if (vf->vf_res->vf_cap_flags & > VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO) > + parser =3D &iavf_ipsec_flow_parser; > + else > + return -ENOTSUP; > + > + return iavf_register_parser(parser, ad); > +} > + > +static void > +iavf_ipsec_flow_uninit(struct iavf_adapter *ad) > +{ > + iavf_unregister_parser(&iavf_ipsec_flow_parser, ad); > +} > + > +static int > +iavf_ipsec_flow_create(struct iavf_adapter *ad, > + struct rte_flow *flow, > + void *meta, > + struct rte_flow_error *error) > +{ > + struct iavf_ipsec_flow_item *ipsec_flow =3D meta; > + if (!ipsec_flow) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "NULL rule."); > + return -rte_errno; > + } > + > + if (ipsec_flow->is_ipv4) { > + ipsec_flow->id =3D > + iavf_ipsec_crypto_inbound_security_policy_add(ad, > + ipsec_flow->spi, > + 1, > + ipsec_flow->ipv4_hdr.dst_addr, > + NULL, > + 0); > + } else { > + ipsec_flow->id =3D > + iavf_ipsec_crypto_inbound_security_policy_add(ad, > + ipsec_flow->spi, > + 0, > + 0, > + ipsec_flow->ipv6_hdr.dst_addr, > + 0); > + } > + > + if (ipsec_flow->id < 1) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, > + "Failed to add SA."); > + return -rte_errno; > + } > + > + flow->rule =3D ipsec_flow; > + > + return 0; > +} > + > +static int > +iavf_ipsec_flow_destroy(struct iavf_adapter *ad, > + struct rte_flow *flow, > + struct rte_flow_error *error) > +{ > + struct iavf_ipsec_flow_item *ipsec_flow =3D flow->rule; > + if (!ipsec_flow) { > + rte_flow_error_set(error, EINVAL, > + RTE_FLOW_ERROR_TYPE_HANDLE, NULL, > + "NULL rule."); > + return -rte_errno; > + } > + > + iavf_ipsec_crypto_security_policy_delete(ad, > + ipsec_flow->is_ipv4, ipsec_flow->id); > + rte_free(ipsec_flow); > + return 0; > +} > + > +static struct iavf_flow_engine iavf_ipsec_flow_engine =3D { > + .init =3D iavf_ipsec_flow_init, > + .uninit 
=3D iavf_ipsec_flow_uninit, > + .create =3D iavf_ipsec_flow_create, > + .destroy =3D iavf_ipsec_flow_destroy, > + .type =3D IAVF_FLOW_ENGINE_IPSEC_CRYPTO, > +}; > + > +static int > +iavf_ipsec_flow_parse(struct iavf_adapter *ad, > + struct iavf_pattern_match_item *array, > + uint32_t array_len, > + const struct rte_flow_item pattern[], > + const struct rte_flow_action actions[], > + void **meta, > + struct rte_flow_error *error) > +{ > + struct iavf_pattern_match_item *item =3D NULL; > + int ret =3D -1; > + > + item =3D iavf_search_pattern_match_item(pattern, array, array_len, erro= r); > + if (item && item->meta) { > + uint32_t type =3D (uint64_t)(item->meta); > + struct iavf_ipsec_flow_item *fi =3D > + iavf_ipsec_flow_item_parse(ad->vf.eth_dev, > + pattern, actions, type); > + if (fi && meta) { > + *meta =3D fi; > + ret =3D 0; > + } > + } > + return ret; > +} > + > +static struct iavf_flow_parser iavf_ipsec_flow_parser =3D { > + .engine =3D &iavf_ipsec_flow_engine, > + .array =3D iavf_ipsec_flow_pattern, > + .array_len =3D RTE_DIM(iavf_ipsec_flow_pattern), > + .parse_pattern_action =3D iavf_ipsec_flow_parse, > + .stage =3D IAVF_FLOW_STAGE_IPSEC_CRYPTO, > +}; > + > +RTE_INIT(iavf_ipsec_flow_engine_register) > +{ > + iavf_register_flow_engine(&iavf_ipsec_flow_engine); > +} > diff --git a/drivers/net/iavf/iavf_ipsec_crypto.h > b/drivers/net/iavf/iavf_ipsec_crypto.h > new file mode 100644 > index 0000000000..4e4c8798ec > --- /dev/null > +++ b/drivers/net/iavf/iavf_ipsec_crypto.h > @@ -0,0 +1,160 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2020 Intel Corporation > + */ > + > +#ifndef _IAVF_IPSEC_CRYPTO_H_ > +#define _IAVF_IPSEC_CRYPTO_H_ > + > +#include > + > +#include "iavf.h" > + > + > + > +struct iavf_tx_ipsec_desc { > + union { > + struct { > + __le64 qw0; > + __le64 qw1; > + }; > + struct { > + __le16 l4payload_length; > + __le32 esn; > + __le16 trailer_length; > + u8 type:4; > + u8 rsv:1; > + u8 udp:1; > + u8 ivlen:2; > + u8 next_header; > + __le16 ipv6_ext_hdr_length; > + __le32 said; > + } __rte_packed; > + }; > +} __rte_packed; > + > +#define IAVF_IPSEC_TX_DESC_QW0_L4PAYLEN_SHIFT 0 > +#define IAVF_IPSEC_TX_DESC_QW0_L4PAYLEN_MASK (0x3FFFULL << \ > + IAVF_IPSEC_TX_DESC_QW0_L4PAYLEN_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW0_IPSECESN_SHIFT 16 > +#define IAVF_IPSEC_TX_DESC_QW0_IPSECESN_MASK (0xFFFFFFFFULL << > \ > + IAVF_IPSEC_TX_DESC_QW0_IPSECESN_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW0_TRAILERLEN_SHIFT 48 > +#define IAVF_IPSEC_TX_DESC_QW0_TRAILERLEN_MASK (0x3FULL << \ > + IAVF_IPSEC_TX_DESC_QW0_TRAILERLEN_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW1_UDP_SHIFT 5 > +#define IAVF_IPSEC_TX_DESC_QW1_UDP_MASK (0x1ULL << \ > + IAVF_IPSEC_TX_DESC_QW1_UDP_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW1_IVLEN_SHIFT 6 > +#define IAVF_IPSEC_TX_DESC_QW1_IVLEN_MASK (0x3ULL << \ > + IAVF_IPSEC_TX_DESC_QW1_IVLEN_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW1_IPSECNH_SHIFT 8 > +#define IAVF_IPSEC_TX_DESC_QW1_IPSECNH_MASK (0xFFULL << \ > + IAVF_IPSEC_TX_DESC_QW1_IPSECNH_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW1_EXTLEN_SHIFT 16 > +#define IAVF_IPSEC_TX_DESC_QW1_EXTLEN_MASK (0xFFULL << \ > + IAVF_IPSEC_TX_DESC_QW1_EXTLEN_SHIFT) > + > +#define IAVF_IPSEC_TX_DESC_QW1_IPSECSA_SHIFT 32 > +#define IAVF_IPSEC_TX_DESC_QW1_IPSECSA_MASK (0xFFFFFULL << \ > + IAVF_IPSEC_TX_DESC_QW1_IPSECSA_SHIFT) > + > +/* Initialization Vector Length type */ > +enum iavf_ipsec_iv_len { > + IAVF_IPSEC_IV_LEN_NONE, /* No IV */ > + IAVF_IPSEC_IV_LEN_DW, /* 4B IV */ > + IAVF_IPSEC_IV_LEN_DDW, /* 8B IV */ > + 
IAVF_IPSEC_IV_LEN_QDW, /* 16B IV */
> +};
> +
> +/* IPsec Crypto Packet Metadata offload flags */
> +#define IAVF_IPSEC_CRYPTO_OL_FLAGS_IS_TUN          (0x1 << 0)
> +#define IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN             (0x1 << 1)
> +#define IAVF_IPSEC_CRYPTO_OL_FLAGS_IPV6_EXT_HDRS   (0x1 << 2)
> +#define IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT            (0x1 << 3)
> +
> +/**
> + * Packet metadata structure used to hold the parameters required by the
> + * iAVF transmit data path. Parameters are set per session by calling the
> + * rte_security_set_pkt_metadata() API.
> + */
> +struct iavf_ipsec_crypto_pkt_metadata {
> +    uint32_t sa_idx;                /* SA hardware index (20b/4B) */
> +
> +    uint8_t ol_flags;               /* flags (1B) */
> +    uint8_t len_iv;                 /* IV length (2b/1B) */
> +    uint8_t ctx_desc_ipsec_params;  /* IPsec params for ctx desc (7b/1B) */
> +    uint8_t esp_trailer_len;        /* ESP trailer length (6b/1B) */
> +
> +    uint16_t l4_payload_len;        /* L4 payload length */
> +    uint8_t ipv6_ext_hdrs_len;      /* IPv6 extension headers len (5b/1B) */
> +    uint8_t next_proto;             /* Next Protocol (8b/1B) */
> +
> +    uint32_t esn;                   /* Extended Sequence Number (32b/4B) */
> +} __rte_packed;
> +
> +/**
> + * Check whether inline IPsec Crypto offload is supported
> + */
> +int
> +iavf_ipsec_crypto_supported(struct iavf_adapter *adapter);
> +
> +/**
> + * Create security context
> + */
> +int iavf_security_ctx_create(struct iavf_adapter *adapter);
> +
> +/**
> + * Initialize security context
> + */
> +int iavf_security_init(struct iavf_adapter *adapter);
> +
> +/**
> + * Set security capabilities
> + */
> +int iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx
> +    *iavf_sctx, struct virtchnl_ipsec_cap *virtchl_capabilities);
> +
> +int iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter);
> +
> +/**
> + * Destroy security context
> + */
> +int iavf_security_ctx_destroy(struct iavf_adapter *adapter);
> +
> +/**
> + * Verify that the inline IPsec Crypto action is valid for this device
> + */
> +uint32_t
> +iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev,
> +    const struct rte_security_session *session, uint32_t spi);
> +
> +/**
> + * Add inbound security policy rule to hardware
> + */
> +int
> +iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,
> +    uint32_t esp_spi,
> +    uint8_t is_v4,
> +    rte_be32_t v4_dst_addr,
> +    uint8_t *v6_dst_addr,
> +    uint8_t drop);
> +
> +/**
> + * Delete inbound security policy rule from hardware
> + */
> +int
> +iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,
> +    uint8_t is_v4, uint32_t flow_id);
> +
> +#endif /* _IAVF_IPSEC_CRYPTO_H_ */
> diff --git a/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h
> b/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h
> new file mode 100644
> index 0000000000..70ce8dd638
> --- /dev/null
> +++ b/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h
> @@ -0,0 +1,383 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +
> +#ifndef _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_
> +#define _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_
> +
> +static const struct rte_cryptodev_capabilities iavf_crypto_capabilities[] = {
> +    { /* SHA1 HMAC */
> +        .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
> +        {.sym = {
> +            .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
> +            {.auth = {
> +                .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
> +                .block_size = 64,
> +                .key_size = {
> +                    .min = 1,
> +                    .max = 64,
> +                    .increment = 1
> +                },
> +                .digest_size = {
> +                    .min = 20,
> +                    .max
=3D 20, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* SHA256 HMAC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_SHA256_HMAC, > + .block_size =3D 64, > + .key_size =3D { > + .min =3D 1, > + .max =3D 64, > + .increment =3D 1 > + }, > + .digest_size =3D { > + .min =3D 32, > + .max =3D 32, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* SHA384 HMAC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_SHA384_HMAC, > + .block_size =3D 128, > + .key_size =3D { > + .min =3D 1, > + .max =3D 128, > + .increment =3D 1 > + }, > + .digest_size =3D { > + .min =3D 48, > + .max =3D 48, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* SHA512 HMAC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_SHA512_HMAC, > + .block_size =3D 128, > + .key_size =3D { > + .min =3D 1, > + .max =3D 128, > + .increment =3D 1 > + }, > + .digest_size =3D { > + .min =3D 64, > + .max =3D 64, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* MD5 HMAC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_MD5_HMAC, > + .block_size =3D 64, > + .key_size =3D { > + .min =3D 1, > + .max =3D 64, > + .increment =3D 1 > + }, > + .digest_size =3D { > + .min =3D 16, > + .max =3D 16, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* AES XCBC MAC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_AES_XCBC_MAC, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 16, > + .increment =3D 0 > + }, > + .digest_size =3D { > + .min =3D 16, > + .max =3D 16, > + .increment =3D 0 > + }, > + .aad_size =3D { 0 }, > + .iv_size =3D { 0 } > + }, } > + }, } > + }, > + { /* AES GCM */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AEAD, > + {.aead =3D { > + .algo =3D RTE_CRYPTO_AEAD_AES_GCM, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .digest_size =3D { > + .min =3D 8, > + .max =3D 16, > + .increment =3D 4 > + }, > + .aad_size =3D { > + .min =3D 0, > + .max =3D 240, > + .increment =3D 1 > + }, > + .iv_size =3D { > + .min =3D 8, > + .max =3D 8, > + .increment =3D 0 > + }, > + }, } > + }, } > + }, > + { /* ChaCha20-Poly1305 */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AEAD, > + {.aead =3D { > + .algo =3D RTE_CRYPTO_AEAD_CHACHA20_POLY1305, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 32, > + .max =3D 32, > + .increment =3D 0 > + }, > + .digest_size =3D { > + .min =3D 8, > + .max =3D 16, > + .increment =3D 4 > + }, > + .aad_size =3D { > + .min =3D 0, > + .max =3D 240, > + .increment =3D 1 > + }, > + .iv_size =3D { > + .min =3D 12, > + .max =3D 12, > + .increment =3D 0 > + }, > + }, } > + }, } > + }, > + { /* AES CCM */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AEAD, > + {.aead =3D { > + .algo =3D 
RTE_CRYPTO_AEAD_AES_CCM, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .digest_size =3D { > + .min =3D 8, > + .max =3D 16, > + .increment =3D 4 > + }, > + .aad_size =3D { > + .min =3D 0, > + .max =3D 240, > + .increment =3D 1 > + }, > + .iv_size =3D { > + .min =3D 12, > + .max =3D 12, > + .increment =3D 0 > + }, > + }, } > + }, } > + }, > + { /* AES GMAC (AUTH) */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_AES_GMAC, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .digest_size =3D { > + .min =3D 8, > + .max =3D 16, > + .increment =3D 4 > + }, > + .iv_size =3D { > + .min =3D 12, > + .max =3D 12, > + .increment =3D 0 > + } > + }, } > + }, } > + }, > + { /* AES CMAC (AUTH) */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_AES_CMAC, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .digest_size =3D { > + .min =3D 8, > + .max =3D 16, > + .increment =3D 4 > + }, > + .iv_size =3D { > + .min =3D 12, > + .max =3D 12, > + .increment =3D 0 > + } > + }, } > + }, } > + }, > + { /* AES CBC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_CIPHER, > + {.cipher =3D { > + .algo =3D RTE_CRYPTO_CIPHER_AES_CBC, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .iv_size =3D { > + .min =3D 16, > + .max =3D 16, > + .increment =3D 0 > + } > + }, } > + }, } > + }, > + { /* AES CTR */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_CIPHER, > + {.cipher =3D { > + .algo =3D RTE_CRYPTO_CIPHER_AES_CTR, > + .block_size =3D 16, > + .key_size =3D { > + .min =3D 16, > + .max =3D 32, > + .increment =3D 8 > + }, > + .iv_size =3D { > + .min =3D 8, > + .max =3D 8, > + .increment =3D 0 > + } > + }, } > + }, } > + }, > + { /* NULL (AUTH) */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_AUTH, > + {.auth =3D { > + .algo =3D RTE_CRYPTO_AUTH_NULL, > + .block_size =3D 1, > + .key_size =3D { > + .min =3D 0, > + .max =3D 0, > + .increment =3D 0 > + }, > + .digest_size =3D { > + .min =3D 0, > + .max =3D 0, > + .increment =3D 0 > + }, > + .iv_size =3D { 0 } > + }, }, > + }, }, > + }, > + { /* NULL (CIPHER) */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_CIPHER, > + {.cipher =3D { > + .algo =3D RTE_CRYPTO_CIPHER_NULL, > + .block_size =3D 1, > + .key_size =3D { > + .min =3D 0, > + .max =3D 0, > + .increment =3D 0 > + }, > + .iv_size =3D { > + .min =3D 0, > + .max =3D 0, > + .increment =3D 0 > + } > + }, }, > + }, } > + }, > + { /* 3DES CBC */ > + .op =3D RTE_CRYPTO_OP_TYPE_SYMMETRIC, > + {.sym =3D { > + .xform_type =3D RTE_CRYPTO_SYM_XFORM_CIPHER, > + {.cipher =3D { > + .algo =3D RTE_CRYPTO_CIPHER_3DES_CBC, > + .block_size =3D 8, > + .key_size =3D { > + .min =3D 24, > + .max =3D 24, > + .increment =3D 0 > + }, > + .iv_size =3D { > + .min =3D 8, > + .max =3D 8, > + .increment =3D 0 > + } > + }, } > + }, } > + }, > + { > + .op =3D RTE_CRYPTO_OP_TYPE_UNDEFINED, > + } > +}; > + > + > +#endif /* _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_ */ > diff --git a/drivers/net/iavf/iavf_rxtx.c 
b/drivers/net/iavf/iavf_rxtx.c
> index 128691aaf1..80438f9f8a 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -27,6 +27,7 @@
>
>  #include "iavf.h"
>  #include "iavf_rxtx.h"
> +#include "iavf_ipsec_crypto.h"
>  #include "rte_pmd_iavf.h"
>
>  /* Offset of mbuf dynamic field for protocol extraction's metadata */
> @@ -39,6 +40,7 @@ uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
>  uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
>  uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
>  uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
> +uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
>
>  uint8_t
>  iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
> @@ -51,6 +53,8 @@ iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
>      [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
>      [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
>      [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
> +    [IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID] =
> +            IAVF_RXDID_COMMS_IPSEC_CRYPTO,
>  };
>
>  return flex_type < RTE_DIM(rxdid_map) ?
> @@ -508,6 +512,12 @@ iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
>      rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_aux_v2;
>      break;
> +    case IAVF_RXDID_COMMS_IPSEC_CRYPTO:
> +        rxq->xtr_ol_flag =
> +            rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
> +        rxq->rxd_to_pkt_fields =
> +            iavf_rxd_to_pkt_fields_by_comms_aux_v2;
> +        break;
>      case IAVF_RXDID_COMMS_OVS_1:
>          rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
>          break;
> @@ -692,6 +702,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
>      const struct rte_eth_txconf *tx_conf)
>  {
>      struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> +    struct iavf_adapter *adapter =
> +        IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
>      struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
>      struct iavf_tx_queue *txq;
> @@ -736,9 +748,9 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
>          return -ENOMEM;
>      }
>
> -    if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
> +    if (adapter->vf.vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {
>          struct virtchnl_vlan_supported_caps *insertion_support =
> -            &vf->vlan_v2_caps.offloads.insertion_support;
> +            &adapter->vf.vlan_v2_caps.offloads.insertion_support;
>          uint32_t insertion_cap;
>
>          if (insertion_support->outer)
> @@ -762,6 +774,10 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,
>      txq->offloads = offloads;
>      txq->tx_deferred_start = tx_conf->tx_deferred_start;
>
> +    if (iavf_ipsec_crypto_supported(adapter))
> +        txq->ipsec_crypto_pkt_md_offset =
> +            iavf_security_get_pkt_md_offset(adapter);
> +
>      /* Allocate software ring */
>      txq->sw_ring =
>          rte_zmalloc_socket("iavf tx sw ring",
> @@ -1081,6 +1097,70 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
>  #endif
>  }
>
> +static inline void
> +iavf_flex_rxd_to_ipsec_crypto_said_get(struct rte_mbuf *mb,
> +        volatile union iavf_rx_flex_desc *rxdp)
> +{
> +    volatile struct iavf_32b_rx_flex_desc_comms_ipsec *desc =
> +        (volatile struct iavf_32b_rx_flex_desc_comms_ipsec *)rxdp;
> +
> +    mb->dynfield1[0] = desc->ipsec_said &
> +            IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_SAID_MASK;
> +}
> +
> +static inline void
> +iavf_flex_rxd_to_ipsec_crypto_status(struct rte_mbuf *mb,
> +        volatile union iavf_rx_flex_desc *rxdp,
> +        struct iavf_ipsec_crypto_stats *stats)
> +{
> +    uint16_t status1 = rte_le_to_cpu_16(rxdp->wb.status_error1);
> +
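
A note for readers following this hunk: status_error1 is a 16-bit little-endian field in which bits 3:0 carry the inline IPsec status code and bit 5 the "processed by inline IPsec" indication, per the status/error enums added later in this patch. The following standalone sketch of that decode is illustrative only and not part of the patch; the bit positions mirror the defines in the patch, everything else (names, example value) is made up:

    #include <stdint.h>
    #include <stdio.h>

    /* Bit layout mirrored from the descriptor write-back above:
     * bits 3:0 = IPsec status code, bit 5 = "processed" flag. */
    #define PROCESSED_BIT (1u << 5)
    #define STATUS_MASK   0x000Fu

    static const char *ipsec_status_str(uint16_t status_error1)
    {
        if (!(status_error1 & PROCESSED_BIT))
            return "not an inline-IPsec packet";
        switch (status_error1 & STATUS_MASK) {
        case 0: return "success";
        case 1: return "SAD miss";
        case 2: return "not processed";
        case 3: return "ICV check failed";
        case 4: return "length error";
        default: return "misc error";
        }
    }

    int main(void)
    {
        /* processed flag set, status = ICV check failure */
        uint16_t example = PROCESSED_BIT | 0x3;
        printf("%s\n", ipsec_status_str(example));
        return 0;
    }
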
> +    if (status1 & BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_PROCESSED)) {
> +        uint16_t ipsec_status;
> +
> +        mb->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD;
> +
> +        ipsec_status = status1 &
> +            IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_STATUS_MASK;
> +
> +        if (unlikely(ipsec_status !=
> +                IAVF_IPSEC_CRYPTO_STATUS_SUCCESS)) {
> +            mb->ol_flags |= RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
> +
> +            switch (ipsec_status) {
> +            case IAVF_IPSEC_CRYPTO_STATUS_SAD_MISS:
> +                stats->ierrors.sad_miss++;
> +                break;
> +            case IAVF_IPSEC_CRYPTO_STATUS_NOT_PROCESSED:
> +                stats->ierrors.not_processed++;
> +                break;
> +            case IAVF_IPSEC_CRYPTO_STATUS_ICV_CHECK_FAIL:
> +                stats->ierrors.icv_check++;
> +                break;
> +            case IAVF_IPSEC_CRYPTO_STATUS_LENGTH_ERR:
> +                stats->ierrors.ipsec_length++;
> +                break;
> +            case IAVF_IPSEC_CRYPTO_STATUS_MISC_ERR:
> +                stats->ierrors.misc++;
> +                break;
> +            }
> +
> +            stats->ierrors.count++;
> +            return;
> +        }
> +
> +        stats->icount++;
> +        stats->ibytes += rte_le_to_cpu_16(rxdp->wb.pkt_len) & 0x3FFF;
> +
> +        if (rxdp->wb.rxdid == IAVF_RXDID_COMMS_IPSEC_CRYPTO &&
> +            ipsec_status !=
> +                IAVF_IPSEC_CRYPTO_STATUS_SAD_MISS)
> +            iavf_flex_rxd_to_ipsec_crypto_said_get(mb, rxdp);
> +    }
> +}
> +
>  /* Translate the rx descriptor status and error fields to pkt flags */
>  static inline uint64_t
>  iavf_rxd_to_pkt_flags(uint64_t qword)
> @@ -1399,6 +1479,8 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
>      rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
>          rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
>      iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
> +    iavf_flex_rxd_to_ipsec_crypto_status(rxm, &rxd,
> +            &rxq->stats.ipsec_crypto);
>      rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
>      pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
>      rxm->ol_flags |= pkt_flags;
> @@ -1541,6 +1623,8 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
>      first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
>          rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
>      iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
> +    iavf_flex_rxd_to_ipsec_crypto_status(first_seg, &rxd,
> +            &rxq->stats.ipsec_crypto);
>      rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
>      pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
>
> @@ -1779,6 +1863,8 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
>      mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
>          rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
>      iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
> +    iavf_flex_rxd_to_ipsec_crypto_status(mb, &rxdp[j],
> +            &rxq->stats.ipsec_crypto);
>      rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
>      stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
>      pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
> @@ -2091,6 +2177,18 @@ iavf_fill_ctx_desc_cmd_field(volatile uint64_t *field, struct rte_mbuf *m)
>      *field |= cmd;
>  }
>
> +static inline void
> +iavf_fill_ctx_desc_ipsec_field(volatile uint64_t *field,
> +    struct iavf_ipsec_crypto_pkt_metadata *ipsec_md)
> +{
> +    uint64_t ipsec_field =
> +        (uint64_t)ipsec_md->ctx_desc_ipsec_params <<
> +            IAVF_TXD_CTX_QW1_IPSEC_PARAMS_CIPHERBLK_SHIFT;
> +
> +    *field |= ipsec_field;
> +}
> +
>  static inline void
>  iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0,
>      const struct rte_mbuf *m)
> @@ -2123,15 +2221,19 @@ iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0,
>
>  static inline uint16_t
>  iavf_fill_ctx_desc_segmentation_field(volatile uint64_t *field,
> -    struct rte_mbuf *m)
> +    struct rte_mbuf *m, struct iavf_ipsec_crypto_pkt_metadata
*ipsec_md) > { > uint64_t segmentation_field =3D 0; > uint64_t total_length =3D 0; >=20 > - total_length =3D m->pkt_len - (m->l2_len + m->l3_len + m->l4_len); > + if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) { > + total_length =3D ipsec_md->l4_payload_len; > + } else { > + total_length =3D m->pkt_len - (m->l2_len + m->l3_len + m->l4_len); >=20 > - if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) > - total_length -=3D m->outer_l3_len; > + if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) > + total_length -=3D m->outer_l3_len; > + } >=20 > #ifdef RTE_LIBRTE_IAVF_DEBUG_TX > if (!m->l4_len || !m->tso_segsz) > @@ -2160,7 +2262,8 @@ struct iavf_tx_context_desc_qws { >=20 > static inline void > iavf_fill_context_desc(volatile struct iavf_tx_context_desc *desc, > - struct rte_mbuf *m, uint16_t *tlen) > + struct rte_mbuf *m, struct iavf_ipsec_crypto_pkt_metadata *ipsec_md, > + uint16_t *tlen) > { > volatile struct iavf_tx_context_desc_qws *desc_qws =3D > (volatile struct iavf_tx_context_desc_qws *)desc; > @@ -2172,8 +2275,13 @@ iavf_fill_context_desc(volatile struct > iavf_tx_context_desc *desc, >=20 > /* fill segmentation field */ > if (m->ol_flags & (RTE_MBUF_F_TX_TCP_SEG | > RTE_MBUF_F_TX_UDP_SEG)) { > + /* fill IPsec field */ > + if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) > + iavf_fill_ctx_desc_ipsec_field(&desc_qws->qw1, > + ipsec_md); > + > *tlen =3D iavf_fill_ctx_desc_segmentation_field(&desc_qws->qw1, > - m); > + m, ipsec_md); > } >=20 > /* fill tunnelling field */ > @@ -2187,6 +2295,38 @@ iavf_fill_context_desc(volatile struct > iavf_tx_context_desc *desc, > } >=20 >=20 > +static inline void > +iavf_fill_ipsec_desc(volatile struct iavf_tx_ipsec_desc *desc, > + const struct iavf_ipsec_crypto_pkt_metadata *md, uint16_t *ipsec_len) > +{ > + desc->qw0 =3D rte_cpu_to_le_64(((uint64_t)md->l4_payload_len << > + IAVF_IPSEC_TX_DESC_QW0_L4PAYLEN_SHIFT) | > + ((uint64_t)md->esn << IAVF_IPSEC_TX_DESC_QW0_IPSECESN_SHIFT) > | > + ((uint64_t)md->esp_trailer_len << > + IAVF_IPSEC_TX_DESC_QW0_TRAILERLEN_SHIFT)); > + > + desc->qw1 =3D rte_cpu_to_le_64(((uint64_t)md->sa_idx << > + IAVF_IPSEC_TX_DESC_QW1_IPSECSA_SHIFT) | > + ((uint64_t)md->next_proto << > + IAVF_IPSEC_TX_DESC_QW1_IPSECNH_SHIFT) | > + ((uint64_t)(md->len_iv & 0x3) << > + IAVF_IPSEC_TX_DESC_QW1_IVLEN_SHIFT) | > + ((uint64_t)(md->ol_flags & IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT ? > + 1ULL : 0ULL) << > + IAVF_IPSEC_TX_DESC_QW1_UDP_SHIFT) | > + (uint64_t)IAVF_TX_DESC_DTYPE_IPSEC); > + > + /** > + * TODO: Pre-calculate this in the Session initialization > + * > + * Calculate IPsec length required in data descriptor func when TSO > + * offload is enabled > + */ > + *ipsec_len =3D sizeof(struct rte_esp_hdr) + (md->len_iv >> 2) + > + (md->ol_flags & IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT ? 
> + sizeof(struct rte_udp_hdr) : 0); > +} > + > static inline void > iavf_build_data_desc_cmd_offset_fields(volatile uint64_t *qw1, > struct rte_mbuf *m) > @@ -2298,6 +2438,17 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc > *desc, > } >=20 >=20 > +static struct iavf_ipsec_crypto_pkt_metadata * > +iavf_ipsec_crypto_get_pkt_metadata(const struct iavf_tx_queue *txq, > + struct rte_mbuf *m) > +{ > + if (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) > + return RTE_MBUF_DYNFIELD(m, txq->ipsec_crypto_pkt_md_offset, > + struct iavf_ipsec_crypto_pkt_metadata *); > + > + return NULL; > +} > + > /* TX function */ > uint16_t > iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pk= ts) > @@ -2326,7 +2477,9 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf > **tx_pkts, uint16_t nb_pkts) >=20 > for (idx =3D 0; idx < nb_pkts; idx++) { > volatile struct iavf_tx_desc *ddesc; > - uint16_t nb_desc_ctx; > + struct iavf_ipsec_crypto_pkt_metadata *ipsec_md; > + > + uint16_t nb_desc_ctx, nb_desc_ipsec; > uint16_t nb_desc_data, nb_desc_required; > uint16_t tlen =3D 0, ipseclen =3D 0; > uint64_t ddesc_template =3D 0; > @@ -2336,16 +2489,23 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf > **tx_pkts, uint16_t nb_pkts) >=20 > RTE_MBUF_PREFETCH_TO_FREE(txe->mbuf); >=20 > + /** > + * Get metadata for ipsec crypto from mbuf dynamic fields if > + * security offload is specified. > + */ > + ipsec_md =3D iavf_ipsec_crypto_get_pkt_metadata(txq, mb); > + > nb_desc_data =3D mb->nb_segs; > nb_desc_ctx =3D !!(mb->ol_flags & > (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG | > RTE_MBUF_F_TX_TUNNEL_MASK)); > + nb_desc_ipsec =3D !!(mb->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD); >=20 > /** > * The number of descriptors that must be allocated for > * a packet equals to the number of the segments of that > * packet plus the context and ipsec descriptors if needed. 
> */ > - nb_desc_required =3D nb_desc_data + nb_desc_ctx; > + nb_desc_required =3D nb_desc_data + nb_desc_ctx + nb_desc_ipsec; >=20 > desc_idx_last =3D (uint16_t)(desc_idx + nb_desc_required - 1); >=20 > @@ -2396,7 +2556,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf > **tx_pkts, uint16_t nb_pkts) > txe->mbuf =3D NULL; > } >=20 > - iavf_fill_context_desc(ctx_desc, mb, &tlen); > + iavf_fill_context_desc(ctx_desc, mb, ipsec_md, &tlen); > IAVF_DUMP_TX_DESC(txq, ctx_desc, desc_idx); >=20 > txe->last_id =3D desc_idx_last; > @@ -2404,7 +2564,27 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf > **tx_pkts, uint16_t nb_pkts) > txe =3D txn; > } >=20 > + if (nb_desc_ipsec) { > + volatile struct iavf_tx_ipsec_desc *ipsec_desc =3D > + (volatile struct iavf_tx_ipsec_desc *) > + &txr[desc_idx]; > + > + txn =3D &txe_ring[txe->next_id]; > + RTE_MBUF_PREFETCH_TO_FREE(txn->mbuf); >=20 > + if (txe->mbuf) { > + rte_pktmbuf_free_seg(txe->mbuf); > + txe->mbuf =3D NULL; > + } > + > + iavf_fill_ipsec_desc(ipsec_desc, ipsec_md, &ipseclen); > + > + IAVF_DUMP_TX_DESC(txq, ipsec_desc, desc_idx); > + > + txe->last_id =3D desc_idx_last; > + desc_idx =3D txe->next_id; > + txe =3D txn; > + } >=20 > mb_seg =3D mb; >=20 > diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h > index 1da1278452..b88c81f8f6 100644 > --- a/drivers/net/iavf/iavf_rxtx.h > +++ b/drivers/net/iavf/iavf_rxtx.h > @@ -25,7 +25,8 @@ >=20 > #define IAVF_TX_NO_VECTOR_FLAGS ( \ > RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \ > - RTE_ETH_TX_OFFLOAD_TCP_TSO) > + RTE_ETH_TX_OFFLOAD_TCP_TSO | \ > + RTE_ETH_TX_OFFLOAD_SECURITY) >=20 > #define IAVF_TX_VECTOR_OFFLOAD ( \ > RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \ > @@ -36,10 +37,10 @@ > RTE_ETH_TX_OFFLOAD_TCP_CKSUM) >=20 > #define IAVF_RX_VECTOR_OFFLOAD ( \ > - RTE_ETH_RX_OFFLOAD_CHECKSUM | \ > - RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ > - RTE_ETH_RX_OFFLOAD_VLAN | \ > - RTE_ETH_RX_OFFLOAD_RSS_HASH) > + DEV_RX_OFFLOAD_CHECKSUM | \ > + DEV_RX_OFFLOAD_SCTP_CKSUM | \ > + DEV_RX_OFFLOAD_VLAN | \ > + DEV_RX_OFFLOAD_RSS_HASH) >=20 > #define IAVF_VECTOR_PATH 0 > #define IAVF_VECTOR_OFFLOAD_PATH 1 > @@ -47,23 +48,26 @@ > #define DEFAULT_TX_RS_THRESH 32 > #define DEFAULT_TX_FREE_THRESH 32 >=20 > -#define IAVF_MIN_TSO_MSS 88 > +#define IAVF_MIN_TSO_MSS 256 > #define IAVF_MAX_TSO_MSS 9668 > #define IAVF_TSO_MAX_SEG UINT8_MAX > #define IAVF_TX_MAX_MTU_SEG 8 >=20 > -#define IAVF_TX_CKSUM_OFFLOAD_MASK (RTE_MBUF_F_TX_IP_CKSUM | > \ > +#define IAVF_TX_CKSUM_OFFLOAD_MASK ( \ > + RTE_MBUF_F_TX_IP_CKSUM | \ > RTE_MBUF_F_TX_L4_MASK | \ > RTE_MBUF_F_TX_TCP_SEG) >=20 > -#define IAVF_TX_OFFLOAD_MASK (RTE_MBUF_F_TX_OUTER_IPV6 | \ > +#define IAVF_TX_OFFLOAD_MASK ( \ > + RTE_MBUF_F_TX_OUTER_IPV6 | \ > RTE_MBUF_F_TX_OUTER_IPV4 | \ > RTE_MBUF_F_TX_IPV6 | \ > RTE_MBUF_F_TX_IPV4 | \ > RTE_MBUF_F_TX_VLAN | \ > RTE_MBUF_F_TX_IP_CKSUM | \ > RTE_MBUF_F_TX_L4_MASK | \ > - RTE_MBUF_F_TX_TCP_SEG) > + RTE_MBUF_F_TX_TCP_SEG | \ > + RTE_ETH_TX_OFFLOAD_SECURITY) >=20 > #define IAVF_TX_OFFLOAD_NOTSUP_MASK \ > (RTE_MBUF_F_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK) > @@ -161,6 +165,24 @@ struct iavf_txq_ops { > void (*release_mbufs)(struct iavf_tx_queue *txq); > }; >=20 > +struct iavf_ipsec_crypto_stats { > + uint64_t icount; > + uint64_t ibytes; > + struct { > + uint64_t count; > + uint64_t sad_miss; > + uint64_t not_processed; > + uint64_t icv_check; > + uint64_t ipsec_length; > + uint64_t misc; > + } ierrors; > +}; > + > +struct iavf_rx_queue_stats { > + uint64_t reserved; > + struct iavf_ipsec_crypto_stats ipsec_crypto; > +}; > + > /* Structure 
associated with each Rx queue. */ > struct iavf_rx_queue { > struct rte_mempool *mp; /* mbuf pool to populate Rx ring */ > @@ -209,6 +231,7 @@ struct iavf_rx_queue { > /* flexible descriptor metadata extraction offload flag */ > iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields; > /* handle flexible descriptor by RXDID */ > + struct iavf_rx_queue_stats stats; > uint64_t offloads; > }; >=20 > @@ -243,6 +266,7 @@ struct iavf_tx_queue { > uint64_t offloads; > uint16_t next_dd; /* next to set RS, for VPMD */ > uint16_t next_rs; /* next to check DD, for VPMD */ > + uint16_t ipsec_crypto_pkt_md_offset; >=20 > bool q_set; /* if rx queue has been configured */ > bool tx_deferred_start; /* don't start this queue in dev start *= / > @@ -345,6 +369,40 @@ struct iavf_32b_rx_flex_desc_comms_ovs { > } flex_ts; > }; >=20 > +/* Rx Flex Descriptor > + * RxDID Profile ID 24 Inline IPsec > + * Flex-field 0: RSS hash lower 16-bits > + * Flex-field 1: RSS hash upper 16-bits > + * Flex-field 2: Flow ID lower 16-bits > + * Flex-field 3: Flow ID upper 16-bits > + * Flex-field 4: Inline IPsec SAID lower 16-bits > + * Flex-field 5: Inline IPsec SAID upper 16-bits > + */ > +struct iavf_32b_rx_flex_desc_comms_ipsec { > + /* Qword 0 */ > + u8 rxdid; > + u8 mir_id_umb_cast; > + __le16 ptype_flexi_flags0; > + __le16 pkt_len; > + __le16 hdr_len_sph_flex_flags1; > + > + /* Qword 1 */ > + __le16 status_error0; > + __le16 l2tag1; > + __le32 rss_hash; > + > + /* Qword 2 */ > + __le16 status_error1; > + u8 flexi_flags2; > + u8 ts_low; > + __le16 l2tag2_1st; > + __le16 l2tag2_2nd; > + > + /* Qword 3 */ > + __le32 flow_id; > + __le32 ipsec_said; > +}; > + > /* Receive Flex Descriptor profile IDs: There are a total > * of 64 profiles where profile IDs 0/1 are for legacy; and > * profiles 2-63 are flex profiles that can be programmed > @@ -364,6 +422,7 @@ enum iavf_rxdid { > IAVF_RXDID_COMMS_AUX_TCP =3D 21, > IAVF_RXDID_COMMS_OVS_1 =3D 22, > IAVF_RXDID_COMMS_OVS_2 =3D 23, > + IAVF_RXDID_COMMS_IPSEC_CRYPTO =3D 24, > IAVF_RXDID_COMMS_AUX_IP_OFFSET =3D 25, > IAVF_RXDID_LAST =3D 63, > }; > @@ -391,9 +450,13 @@ enum iavf_rx_flex_desc_status_error_0_bits { >=20 > enum iavf_rx_flex_desc_status_error_1_bits { > /* Note: These are predefined bit offsets */ > - IAVF_RX_FLEX_DESC_STATUS1_CPM_S =3D 0, /* 4 bits */ > - IAVF_RX_FLEX_DESC_STATUS1_NAT_S =3D 4, > - IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S =3D 5, > + /* Bits 3:0 are reserved for inline ipsec status */ > + IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_0 =3D 0, > + IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_1, > + IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_2, > + IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_3, > + IAVF_RX_FLEX_DESC_STATUS1_NAT_S, > + IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_PROCESSED, > /* [10:6] reserved */ > IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S =3D 11, > IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S =3D 12, > @@ -403,6 +466,23 @@ enum iavf_rx_flex_desc_status_error_1_bits { > IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/ > }; >=20 > +#define IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_STATUS_MASK ( \ > + BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_0) | \ > + BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_1) | \ > + BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_2) | \ > + BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_3)) > + > +enum iavf_rx_flex_desc_ipsec_crypto_status { > + IAVF_IPSEC_CRYPTO_STATUS_SUCCESS =3D 0, > + IAVF_IPSEC_CRYPTO_STATUS_SAD_MISS, > + IAVF_IPSEC_CRYPTO_STATUS_NOT_PROCESSED, > + IAVF_IPSEC_CRYPTO_STATUS_ICV_CHECK_FAIL, > + IAVF_IPSEC_CRYPTO_STATUS_LENGTH_ERR, > + /* Reserved */ > + IAVF_IPSEC_CRYPTO_STATUS_MISC_ERR =3D 0xF > +}; > + > + >=20 > #define IAVF_TXD_DATA_QW1_DTYPE_SHIFT (0) > #define IAVF_TXD_DATA_QW1_DTYPE_MASK (0xFUL << > IAVF_TXD_QW1_DTYPE_SHIFT) > @@ -670,6 +750,9 @@ void iavf_dump_tx_descriptor(const struct > iavf_tx_queue *txq, > case IAVF_TX_DESC_DTYPE_CONTEXT: > name =3D "Tx_context_desc"; > break; > + case IAVF_TX_DESC_DTYPE_IPSEC: > + name =3D "Tx_IPsec_desc"; > + break; > default: > name =3D "unknown_desc"; > break; > diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.= c > index 53d1506677..353521d726 100644 > --- a/drivers/net/iavf/iavf_vchnl.c > +++ b/drivers/net/iavf/iavf_vchnl.c > @@ -1774,3 +1774,32 @@ iavf_get_max_rss_queue_region(struct > iavf_adapter *adapter) >=20 > return 0; > } > + > + > + > +int > +iavf_ipsec_crypto_request(struct iavf_adapter *adapter, > + uint8_t *msg, size_t msg_len, > + uint8_t *resp_msg, size_t resp_msg_len) > +{ > + struct iavf_info *vf =3D IAVF_DEV_PRIVATE_TO_VF(adapter); > + struct iavf_cmd_info args; > + int err; > + > + args.ops =3D VIRTCHNL_OP_INLINE_IPSEC_CRYPTO; > + args.in_args =3D msg; > + args.in_args_size =3D msg_len; > + args.out_buffer =3D vf->aq_resp; > + args.out_size =3D IAVF_AQ_BUF_SZ; > + > + err =3D iavf_execute_vf_cmd(adapter, &args, 1); > + if (err) { > + PMD_DRV_LOG(ERR, "fail to execute command %s", > + "OP_INLINE_IPSEC_CRYPTO"); > + return err; > + } > + > + memcpy(resp_msg, args.out_buffer, resp_msg_len); > + > + return 0; > +} > diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build > index 36a82e3faa..5eb230f687 100644 > --- a/drivers/net/iavf/meson.build > +++ b/drivers/net/iavf/meson.build > @@ -5,7 +5,7 @@ > cflags +=3D ['-Wno-strict-aliasing'] >=20 > includes +=3D include_directories('../../common/iavf') > -deps +=3D ['common_iavf'] > +deps +=3D ['common_iavf', 'security', 'cryptodev'] >=20 > sources =3D files( > 'iavf_ethdev.c', > @@ -15,6 +15,7 @@ sources =3D files( > 'iavf_fdir.c', > 'iavf_hash.c', > 'iavf_tm.c', > + 'iavf_ipsec_crypto.c', > ) >=20 > if arch_subdir =3D=3D 'x86' > diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_i= avf.h > index 3a045040f1..7426eb9be3 100644 > --- a/drivers/net/iavf/rte_pmd_iavf.h > +++ b/drivers/net/iavf/rte_pmd_iavf.h > @@ -92,6 +92,7 @@ extern uint64_t > rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask; > extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask; > extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask; > extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask; > +extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask; >=20 > /** > * The mbuf dynamic field pointer for flexible descriptor's extraction > metadata. 
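
As context for the dynflag exported above: on the application side the offload result is consumed through the standard mbuf security flags, and the SA id written into dynfield1 is meaningful once the rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask bit is set in ol_flags. A rough, untested RX-side sketch follows; the port/queue ids and burst size are placeholders, not values from the patch:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Hypothetical RX loop that reacts to the inline IPsec result. */
    static void
    drain_inline_ipsec(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *bursts[32];
        uint16_t i, nb;

        nb = rte_eth_rx_burst(port_id, queue_id, bursts, 32);
        for (i = 0; i < nb; i++) {
            struct rte_mbuf *m = bursts[i];

            if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
                if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) {
                    /* inline processing failed, e.g. ICV check */
                    rte_pktmbuf_free(m);
                    continue;
                }
                /* Decrypted inline; the SAID sits in the
                 * proto-extraction dynamic field when the
                 * ..._ipsec_crypto_said_mask dynflag is set. */
            }
            /* hand off to the application fast path ... */
            rte_pktmbuf_free(m);
        }
    }
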
> diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
> index f3efe756cf..97f0f87311 100644
> --- a/drivers/net/iavf/version.map
> +++ b/drivers/net/iavf/version.map
> @@ -13,4 +13,7 @@ EXPERIMENTAL {
>      rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
>      rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
>      rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
> +
> +    # added in 21.11
> +    rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;
> };
> --
> 2.25.1
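
For completeness, the application-side flow this series enables looks roughly like the sketch below: create an inline-crypto rte_security session on the port's security context, then tag outgoing mbufs with rte_security_set_pkt_metadata(), since the egress capabilities above advertise RTE_SECURITY_TX_OLOAD_NEED_MDATA. This is an untested sketch against the 21.11 rte_security API; the SPI, key, IV/digest lengths and mempools are placeholders, not values taken from the patch:

    #include <rte_ethdev.h>
    #include <rte_mempool.h>
    #include <rte_security.h>
    #include <rte_crypto_sym.h>

    static uint8_t key[16] = { 0 }; /* placeholder AES-GCM key */

    static struct rte_security_session *
    create_inline_esp_session(uint16_t port_id, struct rte_mempool *sess_mp,
            struct rte_mempool *priv_mp)
    {
        struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);

        struct rte_crypto_sym_xform aead = {
            .type = RTE_CRYPTO_SYM_XFORM_AEAD,
            .aead = {
                .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
                .algo = RTE_CRYPTO_AEAD_AES_GCM,
                .key = { .data = key, .length = sizeof(key) },
                .iv = { .offset = 0, .length = 8 },
                .digest_length = 16,
                .aad_length = 0,
            },
        };

        struct rte_security_session_conf conf = {
            .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
            .ipsec = {
                .spi = 0xdeadbeef, /* placeholder */
                .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
                .options = { .udp_encap = 0, .esn = 1 },
            },
            .crypto_xform = &aead,
        };

        if (ctx == NULL)
            return NULL;
        return rte_security_session_create(ctx, &conf, sess_mp, priv_mp);
    }

In this release rte_security_session_create() takes separate session and private-data mempools; sizing the private pool from rte_security_session_get_size() on the same context is the usual approach.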