From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 717D4A00C4; Wed, 28 Sep 2022 14:46:17 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7B01942B6C; Wed, 28 Sep 2022 14:45:59 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 820B64113C for ; Wed, 28 Sep 2022 14:45:55 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.5/8.17.1.5) with ESMTP id 28SA4cpZ003076; Wed, 28 Sep 2022 05:45:53 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=vkDMsPEoOIG2oyZqUDRdK4plrTZ62FKD3cxPMuDtOsw=; b=ieJtj46bsY2DWuTglpSnPmTqTOzmgRFKnWfa6+YMvPW2sQ4Yirh5H64fsxeKanWVLi0w Z+H8RZO++6XaLgyUjvHIrAPCkV16kJ8ZVXiWQLr+kROXPBHW/ErVCyEu8movoVsinJCz wM25LGS0CO/RHicHlNPLZCvnbxJEYdLr4krglF+t0PdHwmswp6N30JNrQCR6EN6jv4OB GcKx0jdmWWhR/LpOd1IEOBtSBAGnOCBYATSRrvKpDUlLjUaSKk4jxvvfvEPsglji5M2w eyBGJjqWwx7w207uS94cxN2Mt48cHM7xc9ZuZPWFjsUM54uzdqOPuj87y3jxQWj73Sdi RA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3jvjkk8skd-5 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 28 Sep 2022 05:45:53 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 28 Sep 2022 05:45:39 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend Transport; Wed, 28 Sep 2022 05:45:39 -0700 Received: from localhost.localdomain (unknown [10.28.36.102]) by maili.marvell.com 
(Postfix) with ESMTP id 3EEF63F7053; Wed, 28 Sep 2022 05:45:35 -0700 (PDT) From: Akhil Goyal To: CC: , , , , , , , , , , , , , , , Akhil Goyal Subject: [PATCH 3/5] net/cnxk: support MACsec Date: Wed, 28 Sep 2022 18:15:14 +0530 Message-ID: <20220928124516.93050-4-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20220928124516.93050-1-gakhil@marvell.com> References: <20220928122253.23108-4-gakhil@marvell.com> <20220928124516.93050-1-gakhil@marvell.com> MIME-Version: 1.0 Content-Transfer-Encoding: 8bit Content-Type: text/plain X-Proofpoint-GUID: ZZ-JXnq0FiUXl3FK9woH6eZBwkNeiZmg X-Proofpoint-ORIG-GUID: ZZ-JXnq0FiUXl3FK9woH6eZBwkNeiZmg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.895,Hydra:6.0.528,FMLib:17.11.122.1 definitions=2022-09-28_06,2022-09-28_01,2022-06-22_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Signed-off-by: Akhil Goyal --- drivers/net/cnxk/cn10k_ethdev_mcs.c | 407 ++++++++++++++++++++++++++++ drivers/net/cnxk/cn10k_ethdev_mcs.h | 59 ++++ drivers/net/cnxk/cn10k_ethdev_sec.c | 11 +- drivers/net/cnxk/cn10k_flow.c | 14 + drivers/net/cnxk/cnxk_ethdev.h | 31 +++ drivers/net/cnxk/cnxk_ethdev_sec.c | 2 +- drivers/net/cnxk/meson.build | 1 + 7 files changed, 523 insertions(+), 2 deletions(-) create mode 100644 drivers/net/cnxk/cn10k_ethdev_mcs.c create mode 100644 drivers/net/cnxk/cn10k_ethdev_mcs.h diff --git a/drivers/net/cnxk/cn10k_ethdev_mcs.c b/drivers/net/cnxk/cn10k_ethdev_mcs.c new file mode 100644 index 0000000000..90363f8e17 --- /dev/null +++ b/drivers/net/cnxk/cn10k_ethdev_mcs.c @@ -0,0 +1,407 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2022 Marvell. 
+ */
+
+#include
+#include
+#include
+
+static int
+mcs_resource_alloc(struct cnxk_mcs_dev *mcs_dev, enum rte_security_macsec_direction dir,
+		   uint8_t rsrc_id[], uint8_t rsrc_cnt, enum cnxk_mcs_rsrc_type type)
+{
+	struct roc_mcs_alloc_rsrc_req req = {0};
+	struct roc_mcs_alloc_rsrc_rsp rsp = {0};
+	int i;
+
+	req.rsrc_type = type;
+	req.rsrc_cnt = rsrc_cnt;
+	req.mcs_id = mcs_dev->idx;
+	req.dir = dir;
+
+	if (roc_mcs_alloc_rsrc(mcs_dev->mdev, &req, &rsp)) {
+		printf("error: Cannot allocate mcs resource.\n");
+		return -1;
+	}
+
+	for (i = 0; i < rsrc_cnt; i++) {
+		switch (rsp.rsrc_type) {
+		case CNXK_MCS_RSRC_TYPE_FLOWID:
+			rsrc_id[i] = rsp.flow_ids[i];
+			break;
+		case CNXK_MCS_RSRC_TYPE_SECY:
+			rsrc_id[i] = rsp.secy_ids[i];
+			break;
+		case CNXK_MCS_RSRC_TYPE_SC:
+			rsrc_id[i] = rsp.sc_ids[i];
+			break;
+		case CNXK_MCS_RSRC_TYPE_SA:
+			rsrc_id[i] = rsp.sa_ids[i];
+			break;
+		default:
+			printf("error: Invalid mcs resource allocated.\n");
+			return -1;
+		}
+	}
+	return 0;
+}
+
+int
+cn10k_eth_macsec_sa_create(void *device, struct rte_security_macsec_sa *conf)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_mcs_dev *mcs_dev = dev->mcs_dev;
+	struct roc_mcs_pn_table_write_req pn_req = {0};
+	struct roc_mcs_sa_plcy_write_req req = {0};
+	uint8_t hash_key[16] = {0};
+	uint8_t sa_id = 0;
+	int ret = 0;
+
+	ret = mcs_resource_alloc(mcs_dev, conf->dir, &sa_id, 1, CNXK_MCS_RSRC_TYPE_SA);
+	if (ret) {
+		printf("Failed to allocate SA id.\n");
+		return -ENOMEM;
+	}
+	req.sa_index[0] = sa_id;
+	req.sa_cnt = 1;
+	req.mcs_id = mcs_dev->idx;
+	req.dir = conf->dir;
+
+	if (conf->key.length != 16 && conf->key.length != 32)
+		return -EINVAL;
+
+	memcpy(&req.plcy[0][0], conf->key.data, conf->key.length);
+	roc_aes_hash_key_derive(conf->key.data, conf->key.length, hash_key);
+	memcpy(&req.plcy[0][4], hash_key, CNXK_MACSEC_HASH_KEY);
+	memcpy(&req.plcy[0][6], conf->salt, RTE_SECURITY_MACSEC_SALT_LEN);
+	req.plcy[0][7] |= (uint64_t)conf->ssci << 32;
+	req.plcy[0][8] = conf->an & 0x3;
+
+	ret = roc_mcs_sa_policy_write(mcs_dev->mdev, &req);
+	if (ret) {
+		printf("Failed to write SA policy.\n");
+		return -EINVAL;
+	}
+
+	pn_req.next_pn = conf->next_pn;
+	pn_req.pn_id = sa_id;
+	pn_req.mcs_id = mcs_dev->idx;
+	pn_req.dir = conf->dir;
+
+	ret = roc_mcs_pn_table_write(mcs_dev->mdev, &pn_req);
+	if (ret) {
+		printf("Failed to write PN table.\n");
+		return -EINVAL;
+	}
+
+	return sa_id;
+}
+
+int
+cn10k_eth_macsec_sa_destroy(void *device, uint16_t sa_id)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sa_id);
+
+	return 0;
+}
+
+int
+cn10k_eth_macsec_sc_create(void *device, struct rte_security_macsec_sc *conf)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_mcs_dev *mcs_dev = dev->mcs_dev;
+	uint8_t sc_id = 0;
+	int i, ret = 0;
+
+	ret = mcs_resource_alloc(mcs_dev, conf->dir, &sc_id, 1, CNXK_MCS_RSRC_TYPE_SC);
+	if (ret) {
+		printf("Failed to allocate SC id.\n");
+		return -ENOMEM;
+	}
+
+	if (conf->dir == RTE_SECURITY_MACSEC_DIR_TX) {
+		struct roc_mcs_tx_sc_sa_map req = {0};
+
+		req.mcs_id = mcs_dev->idx;
+		req.sa_index0 = conf->sc_tx.sa_id & 0x7F;
+		req.sa_index1 = conf->sc_tx.sa_id_rekey & 0x7F;
+		req.rekey_ena = conf->sc_tx.re_key_en;
+		req.sa_index0_vld = conf->sc_tx.active;
+		req.sa_index1_vld = conf->sc_tx.re_key_en && conf->sc_tx.active;
+		req.tx_sa_active = conf->sc_tx.active;
+		req.sectag_sci = conf->sc_tx.sci;
+		req.sc_id = sc_id;
+
+		ret = roc_mcs_tx_sc_sa_map_write(mcs_dev->mdev, &req);
+		if (ret) {
+			printf("Failed to map TX SC-SA.\n");
+			return -EINVAL;
+		}
+	} else {
+		for (i = 0; i < RTE_SECURITY_MACSEC_NUM_AN; i++) {
+			struct roc_mcs_rx_sc_sa_map req = {0};
+
+			req.mcs_id = mcs_dev->idx;
+			req.sa_index = conf->sc_rx.sa_id[i] & 0x7F;
+			req.sa_in_use = conf->sc_rx.sa_in_use[i];
+			req.sc_id = sc_id;
+			req.an = i & 0x3;
+			ret = roc_mcs_rx_sc_sa_map_write(mcs_dev->mdev, &req);
+			if (ret) {
+				printf("Failed to map RX SC-SA.\n");
+				return -EINVAL;
+			}
+		}
+	}
+	return sc_id;
+}
+
+int
+cn10k_eth_macsec_sc_destroy(void *device, uint16_t sc_id)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sc_id);
+
+	return 0;
+}
+
+struct cnxk_macsec_sess *
+cnxk_eth_macsec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+				 const struct rte_security_session *sess)
+{
+	struct cnxk_macsec_sess *macsec_sess = NULL;
+
+	TAILQ_FOREACH(macsec_sess, &dev->mcs_list, entry) {
+		if (macsec_sess->sess == sess)
+			return macsec_sess;
+	}
+
+	return NULL;
+}
+
+int
+cn10k_eth_macsec_session_create(struct cnxk_eth_dev *dev,
+				struct rte_security_session_conf *conf,
+				struct rte_security_session *sess,
+				struct rte_mempool *mempool)
+{
+	struct rte_security_macsec_xform *xform = &conf->macsec;
+	struct cnxk_macsec_sess *macsec_sess_priv;
+	struct roc_mcs_secy_plcy_write_req req;
+	struct cnxk_mcs_dev *mcs_dev = dev->mcs_dev;
+	uint8_t secy_id = 0;
+	uint8_t sectag_tci = 0;
+	int ret = 0;
+
+	ret = mcs_resource_alloc(mcs_dev, xform->dir, &secy_id, 1, CNXK_MCS_RSRC_TYPE_SECY);
+	if (ret) {
+		printf("Failed to allocate SECY id.\n");
+		return -ENOMEM;
+	}
+
+	req.secy_id = secy_id;
+	req.mcs_id = mcs_dev->idx;
+	req.dir = xform->dir;
+	req.plcy = 0L;
+
+	if (xform->dir == RTE_SECURITY_MACSEC_DIR_TX) {
+		sectag_tci = ((uint8_t)xform->tx_secy.sectag_version << 5) |
+			     ((uint8_t)xform->tx_secy.end_station << 4) |
+			     ((uint8_t)xform->tx_secy.send_sci << 3) |
+			     ((uint8_t)xform->tx_secy.scb << 2) |
+			     ((uint8_t)xform->tx_secy.encrypt << 1) |
+			     (uint8_t)xform->tx_secy.encrypt;
+		req.plcy = ((uint64_t)xform->tx_secy.mtu << 48) |
+			   (((uint64_t)sectag_tci & 0x3F) << 40) |
+			   (((uint64_t)xform->tx_secy.sectag_off & 0x7F) << 32) |
+			   ((uint64_t)xform->tx_secy.sectag_insert_mode << 30) |
+			   ((uint64_t)xform->tx_secy.icv_include_da_sa << 28) |
+			   (((uint64_t)xform->cipher_off & 0x7F) << 20) |
+			   ((uint64_t)xform->alg << 12) |
+			   ((uint64_t)xform->tx_secy.protect_frames << 4) |
+			   (uint64_t)xform->tx_secy.ctrl_port_enable;
+	} else {
+		req.plcy = ((uint64_t)xform->rx_secy.replay_win_sz << 32) |
+			   ((uint64_t)xform->rx_secy.replay_protect << 30) |
+			   ((uint64_t)xform->rx_secy.icv_include_da_sa << 28) |
+			   (((uint64_t)xform->cipher_off & 0x7F) << 20) |
+			   ((uint64_t)xform->alg << 12) |
+			   ((uint64_t)xform->rx_secy.preserve_sectag << 9) |
+			   ((uint64_t)xform->rx_secy.preserve_icv << 8) |
+			   ((uint64_t)xform->rx_secy.validate_frames << 4) |
+			   (uint64_t)xform->rx_secy.ctrl_port_enable;
+	}
+
+	ret = roc_mcs_secy_policy_write(mcs_dev->mdev, &req);
+	if (ret) {
+		printf("Failed to configure SECY.\n");
+		return -EINVAL;
+	}
+
+	/* Get session private data */
+	if (rte_mempool_get(mempool, (void **)&macsec_sess_priv)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	macsec_sess_priv->sci = xform->sci;
+	macsec_sess_priv->sc_id = xform->sc_id;
+	macsec_sess_priv->secy_id = secy_id;
+	macsec_sess_priv->dir = xform->dir;
+	macsec_sess_priv->sess = sess;
+
+	TAILQ_INSERT_TAIL(&dev->mcs_list, macsec_sess_priv, entry);
+	set_sec_session_private_data(sess, (void *)macsec_sess_priv);
+
+	return 0;
+}
+
+int
+cn10k_eth_macsec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+
+	return 0;
+}
+
+int
+cn10k_mcs_flow_configure(struct rte_eth_dev *eth_dev,
+			 const struct rte_flow_attr *attr __rte_unused,
+			 const struct rte_flow_item pattern[],
+			 const struct rte_flow_action actions[],
+			 struct rte_flow_error *error __rte_unused,
+			 void **mcs_flow)
+{
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_mcs_flowid_entry_write_req req = {0};
+	struct cnxk_mcs_dev *mcs_dev = dev->mcs_dev;
+	struct cnxk_mcs_flow_opts opts = {0};
+	struct cnxk_macsec_sess *sess = cnxk_eth_macsec_sess_get_by_sess(dev,
+			(const struct rte_security_session *)actions->conf);
+	const struct rte_flow_item_eth *eth_item = NULL;
+	struct rte_ether_addr src;
+	struct rte_ether_addr dst;
+	int ret;
+	int i = 0;
+
+	ret = mcs_resource_alloc(mcs_dev, sess->dir, &(sess->flow_id), 1, CNXK_MCS_RSRC_TYPE_FLOWID);
+	if (ret) {
+		printf("Failed to allocate Flow id.\n");
+		return -ENOMEM;
+	}
+	req.sci = sess->sci;
+	req.flow_id = sess->flow_id;
+	req.secy_id = sess->secy_id;
+	req.sc_id = sess->sc_id;
+	req.ena = 1;
+	req.ctr_pkt = 0; /* TBD */
+	req.mcs_id = mcs_dev->idx;
+	req.dir = sess->dir;
+
+	while (pattern[i].type != RTE_FLOW_ITEM_TYPE_END) {
+		if (pattern[i].type == RTE_FLOW_ITEM_TYPE_ETH)
+			eth_item = pattern[i].spec;
+		else
+			printf("%s:%d unhandled flow item: %d\n", __func__, __LINE__,
+			       pattern[i].type);
+		i++;
+	}
+	if (eth_item) {
+		dst = eth_item->hdr.dst_addr;
+		src = eth_item->hdr.src_addr;
+
+		/* Find ways to fill opts */
+
+		req.data[0] = (uint64_t)dst.addr_bytes[0] << 40 | (uint64_t)dst.addr_bytes[1] << 32 |
+			      (uint64_t)dst.addr_bytes[2] << 24 | (uint64_t)dst.addr_bytes[3] << 16 |
+			      (uint64_t)dst.addr_bytes[4] << 8 | (uint64_t)dst.addr_bytes[5] |
+			      (uint64_t)src.addr_bytes[5] << 48 | (uint64_t)src.addr_bytes[4] << 56;
+		req.data[1] = (uint64_t)src.addr_bytes[3] | (uint64_t)src.addr_bytes[2] << 8 |
+			      (uint64_t)src.addr_bytes[1] << 16 | (uint64_t)src.addr_bytes[0] << 24 |
+			      (uint64_t)eth_item->hdr.ether_type << 32 |
+			      ((uint64_t)opts.outer_tag_id & 0xFFFF) << 48;
+		req.data[2] = ((uint64_t)opts.outer_tag_id & 0xF0000) |
+			      ((uint64_t)opts.outer_priority & 0xF) << 4 |
+			      ((uint64_t)opts.second_outer_tag_id & 0xFFFFF) << 8 |
+			      ((uint64_t)opts.second_outer_priority & 0xF) << 28 |
+			      ((uint64_t)opts.bonus_data << 32) |
+			      ((uint64_t)opts.tag_match_bitmap << 48) |
+			      ((uint64_t)opts.packet_type & 0xF) << 56 |
+			      ((uint64_t)opts.outer_vlan_type & 0x7) << 60 |
+			      ((uint64_t)opts.inner_vlan_type & 0x1) << 63;
+		req.data[3] = ((uint64_t)opts.inner_vlan_type & 0x6) |
+			      ((uint64_t)opts.num_tags & 0x7F) << 2 |
+			      ((uint64_t)opts.express & 1) << 9 |
+			      ((uint64_t)opts.port & 0x3) << 10 |
+			      ((uint64_t)opts.flowid_user & 0xF) << 12;
+
+		req.mask[0] = 0x0;
+		req.mask[1] = 0xFFFFFFFF00000000;
+		req.mask[2] = 0xFFFFFFFFFFFFFFFF;
+		req.mask[3] = 0xFFFFFFFFFFFFF3FF;
+
+		ret = roc_mcs_flowid_entry_write(mcs_dev->mdev, &req);
+		if (ret)
+			return ret;
+
+		/* Return a heap copy; req is a stack variable and must not escape. */
+		*mcs_flow = plt_zmalloc(sizeof(req), 0);
+		if (*mcs_flow == NULL)
+			return -ENOMEM;
+		memcpy(*mcs_flow, &req, sizeof(req));
+	} else {
+		printf("Flow not configured.\n");
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int
+cn10k_eth_macsec_sa_stats_get(void *device, uint16_t sa_id,
+			      struct rte_security_macsec_sa_stats *stats)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sa_id);
+	RTE_SET_USED(stats);
+
+	return 0;
+}
+
+int
+cn10k_eth_macsec_sc_stats_get(void *device, uint16_t sc_id,
+			      struct rte_security_macsec_sc_stats *stats)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sc_id);
+	RTE_SET_USED(stats);
+
+	return 0;
+}
+
+void
+cnxk_mcs_dev_fini(struct cnxk_mcs_dev *mcs_dev)
+{
+	/* Cleanup MACsec dev */
+	roc_mcs_dev_fini(mcs_dev->mdev);
+
+	plt_free(mcs_dev);
+}
+
+struct cnxk_mcs_dev *
+cnxk_mcs_dev_init(uint8_t mcs_idx)
+{
+	struct cnxk_mcs_dev *mcs_dev;
+
+	mcs_dev = plt_zmalloc(sizeof(struct cnxk_mcs_dev), PLT_CACHE_LINE_SIZE);
+	if (!mcs_dev)
+		return NULL;
+
+	mcs_dev->idx = mcs_idx;
+	mcs_dev->mdev = roc_mcs_dev_init(mcs_dev->idx);
+	if (!mcs_dev->mdev) {
+		plt_free(mcs_dev);
+		return NULL;
+	}
+
+	return mcs_dev;
+}
diff --git a/drivers/net/cnxk/cn10k_ethdev_mcs.h b/drivers/net/cnxk/cn10k_ethdev_mcs.h
new file mode 100644
index 0000000000..b905f4402a
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev_mcs.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+
+#include
+
+#define CNXK_MACSEC_HASH_KEY 16
+
+struct cnxk_mcs_dev {
+	uint64_t default_sci;
+	void *mdev;
+	uint8_t port_id;
+	uint8_t idx;
+};
+
+enum cnxk_mcs_rsrc_type {
+	CNXK_MCS_RSRC_TYPE_FLOWID,
+	CNXK_MCS_RSRC_TYPE_SECY,
+	CNXK_MCS_RSRC_TYPE_SC,
+	CNXK_MCS_RSRC_TYPE_SA,
+};
+
+struct cnxk_mcs_flow_opts {
+	uint32_t outer_tag_id;
+	/**< {VLAN_ID[11:0]}, or 20-bit MPLS label */
+	uint8_t outer_priority;
+	/**< {PCP/Pbits, DE/CFI} or {1'b0, EXP} for MPLS. */
+	uint32_t second_outer_tag_id;
+	/**< {VLAN_ID[11:0]}, or 20-bit MPLS label */
+	uint8_t second_outer_priority;
+	/**< {PCP/Pbits, DE/CFI} or {1'b0, EXP} for MPLS. */
+	uint16_t bonus_data;
+	/**< 2 bytes of additional bonus data extracted from one of the custom tags */
+	uint8_t tag_match_bitmap;
+	uint8_t packet_type;
+	uint8_t outer_vlan_type;
+	uint8_t inner_vlan_type;
+	uint8_t num_tags;
+	bool express;
+	uint8_t port; /**< port 0-3 */
+	uint8_t flowid_user;
+};
+
+int cn10k_eth_macsec_sa_create(void *device, struct rte_security_macsec_sa *conf);
+int cn10k_eth_macsec_sc_create(void *device, struct rte_security_macsec_sc *conf);
+
+int cn10k_eth_macsec_sa_destroy(void *device, uint16_t sa_id);
+int cn10k_eth_macsec_sc_destroy(void *device, uint16_t sc_id);
+
+int cn10k_eth_macsec_sa_stats_get(void *device, uint16_t sa_id,
+				  struct rte_security_macsec_sa_stats *stats);
+int cn10k_eth_macsec_sc_stats_get(void *device, uint16_t sc_id,
+				  struct rte_security_macsec_sc_stats *stats);
+
+int cn10k_eth_macsec_session_create(struct cnxk_eth_dev *dev,
+				    struct rte_security_session_conf *conf,
+				    struct rte_security_session *sess,
+				    struct rte_mempool *mempool);
+int cn10k_eth_macsec_session_destroy(void *device, struct rte_security_session *sess);
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 3795b0c78b..70fb1eb39a 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include
 #include
 #include
@@ -601,7 +602,9 @@ cn10k_eth_sec_session_create(void *device,
 	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 		return -ENOTSUP;
 
-	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+	if (conf->protocol == RTE_SECURITY_PROTOCOL_MACSEC)
+		return cn10k_eth_macsec_session_create(dev, conf, sess, mempool);
+	else if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
 		return -ENOTSUP;
 
 	if (rte_security_dynfield_register() < 0)
@@ -1058,9 +1061,15 @@ cn10k_eth_sec_ops_override(void)
 	init_once = 1;
 
 	/* Update platform specific ops */
+	cnxk_eth_sec_ops.macsec_sa_create = cn10k_eth_macsec_sa_create;
+	cnxk_eth_sec_ops.macsec_sc_create = cn10k_eth_macsec_sc_create;
+	cnxk_eth_sec_ops.macsec_sa_destroy = cn10k_eth_macsec_sa_destroy;
+	cnxk_eth_sec_ops.macsec_sc_destroy = cn10k_eth_macsec_sc_destroy;
 	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
 	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
 	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
 	cnxk_eth_sec_ops.session_update = cn10k_eth_sec_session_update;
 	cnxk_eth_sec_ops.session_stats_get = cn10k_eth_sec_session_stats_get;
+	cnxk_eth_sec_ops.macsec_sc_stats_get = cn10k_eth_macsec_sc_stats_get;
+	cnxk_eth_sec_ops.macsec_sa_stats_get = cn10k_eth_macsec_sa_stats_get;
 }
diff --git a/drivers/net/cnxk/cn10k_flow.c b/drivers/net/cnxk/cn10k_flow.c
index 7df879a2bb..e95a73ec55 100644
--- a/drivers/net/cnxk/cn10k_flow.c
+++ b/drivers/net/cnxk/cn10k_flow.c
@@ -2,6 +2,7 @@
  * Copyright(C) 2020 Marvell.
  */
 #include
+#include "cn10k_ethdev_mcs.h"
 #include "cn10k_flow.h"
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
@@ -133,6 +134,7 @@ cn10k_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 	const struct rte_flow_action *act_q = NULL;
 	struct roc_npc *npc = &dev->npc;
 	struct roc_npc_flow *flow;
+	void *mcs_flow = NULL;
 	int vtag_actions = 0;
 	uint32_t req_act = 0;
 	int i, rc;
@@ -186,6 +188,18 @@ cn10k_flow_create(struct rte_eth_dev *eth_dev, const struct rte_flow_attr *attr,
 		}
 	}
 
+	if (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY &&
+	    cnxk_eth_macsec_sess_get_by_sess(dev, actions[0].conf) != NULL) {
+		rc = cn10k_mcs_flow_configure(eth_dev, attr, pattern, actions, error, &mcs_flow);
+		if (rc) {
+			rte_flow_error_set(error, rc,
+					   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					   "Failed to configure mcs flow");
+			return NULL;
+		}
+		return (struct rte_flow *)mcs_flow;
+	}
+
 	flow = cnxk_flow_create(eth_dev, attr, pattern, actions, error);
 	if (!flow) {
 		if (mtr)
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index c09e9bff8e..4ae64060f1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -337,6 +337,21 @@ struct cnxk_eth_dev_sec_outb {
 	rte_spinlock_t lock;
 };
 
+/* MACsec session private data */
+struct cnxk_macsec_sess {
+	/* List entry */
+	TAILQ_ENTRY(cnxk_macsec_sess) entry;
+
+	/* Back pointer to session */
+	struct rte_security_session *sess;
+	enum rte_security_macsec_direction dir;
+	uint64_t sci;
+	uint8_t secy_id;
+	uint8_t sc_id;
+	uint8_t flow_id;
+};
+TAILQ_HEAD(cnxk_macsec_sess_list, cnxk_macsec_sess);
+
 struct cnxk_eth_dev {
 	/* ROC NIX */
 	struct roc_nix nix;
@@ -437,6 +452,10 @@ struct cnxk_eth_dev {
 	/* Reassembly dynfield/flag offsets */
 	int reass_dynfield_off;
 	int reass_dynflag_bit;
+
+	/* MCS device */
+	struct cnxk_mcs_dev *mcs_dev;
+	struct cnxk_macsec_sess_list mcs_list;
 };
 
 struct cnxk_eth_rxq_sp {
@@ -649,6 +668,18 @@ cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
 int cnxk_nix_inl_meta_pool_cb(uint64_t *aura_handle, uint32_t buf_sz, uint32_t nb_bufs,
 			      bool destroy);
 
+struct cnxk_mcs_dev *cnxk_mcs_dev_init(uint8_t mcs_idx);
+void cnxk_mcs_dev_fini(struct cnxk_mcs_dev *mcs_dev);
+
+struct cnxk_macsec_sess *
+cnxk_eth_macsec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+				 const struct rte_security_session *sess);
+int cn10k_mcs_flow_configure(struct rte_eth_dev *eth_dev,
+			     const struct rte_flow_attr *attr,
+			     const struct rte_flow_item pattern[],
+			     const struct rte_flow_action actions[],
+			     struct rte_flow_error *error, void **mcs_flow);
+
 /* Other private functions */
 int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
 int nix_mtr_validate(struct rte_eth_dev *dev, uint32_t id);
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index 9304b1465d..56fb2733a4 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -203,7 +203,7 @@ cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
 static unsigned int
 cnxk_eth_sec_session_get_size(void *device __rte_unused)
 {
-	return sizeof(struct cnxk_eth_sec_sess);
+	return RTE_MAX(sizeof(struct cnxk_macsec_sess), sizeof(struct cnxk_eth_sec_sess));
 }
 
 struct rte_security_ops cnxk_eth_sec_ops = {
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index f347e98fce..34bba3fb23 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -106,6 +106,7 @@ sources += files(
 # CN10K
 sources += files(
         'cn10k_ethdev.c',
+        'cn10k_ethdev_mcs.c',
         'cn10k_ethdev_sec.c',
         'cn10k_flow.c',
         'cn10k_rx_select.c',
-- 
2.25.1