From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH 7/7] examples/ipsec-secgw: add poll mode worker for inline proto
Date: Tue, 22 Mar 2022 23:28:45 +0530
Message-ID: <20220322175902.363520-7-ndabilpuram@marvell.com>
In-Reply-To: <20220322175902.363520-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

Add a separate worker thread for the case when all SAs are of type
inline protocol offload and librte_ipsec is enabled, so that this case
can be handled more efficiently. The current default worker supports
all kinds of SAs, which leads to many per-packet checks and branches
on the SA type, of which there are five.

Also make a provision for choosing different poll mode workers for
different combinations of SA types, with the default being the
existing poll mode worker that supports all kinds of SAs.
Signed-off-by: Nithin Dabilpuram
---
 examples/ipsec-secgw/ipsec-secgw.c  |   6 +-
 examples/ipsec-secgw/ipsec-secgw.h  |  10 +
 examples/ipsec-secgw/ipsec_worker.c | 378 +++++++++++++++++++++++++++++++++++-
 examples/ipsec-secgw/ipsec_worker.h |   4 +
 examples/ipsec-secgw/sa.c           |   9 +
 5 files changed, 403 insertions(+), 4 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 84f6150..515b344 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -68,8 +68,6 @@ volatile bool force_quit;
 #define CDEV_MP_CACHE_MULTIPLIER 1.5 /* from rte_mempool.c */
 #define MAX_QUEUE_PAIRS 1
 
-#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
-
 #define MAX_LCORE_PARAMS 1024
 
 /*
@@ -173,7 +171,7 @@ static uint64_t enabled_cryptodev_mask = UINT64_MAX;
 static int32_t promiscuous_on = 1;
 static int32_t numa_on = 1; /**< NUMA is enabled by default. */
 static uint32_t nb_lcores;
-static uint32_t single_sa;
+uint32_t single_sa;
 uint32_t nb_bufs_in_pool;
 
 /*
@@ -238,6 +236,7 @@ struct socket_ctx socket_ctx[NB_SOCKETS];
 
 bool per_port_pool;
 
+uint16_t wrkr_flags;
 /*
  * Determine is multi-segment support required:
  *  - either frame buffer size is smaller then mtu
@@ -1233,6 +1232,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			single_sa = 1;
 			single_sa_idx = ret;
 			eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER;
+			wrkr_flags |= SS_F;
 			printf("Configured with single SA index %u\n",
 					single_sa_idx);
 			break;
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 2edf631..f027360 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -135,6 +135,7 @@ extern uint32_t unprotected_port_mask;
 
 /* Index of SA in single mode */
 extern uint32_t single_sa_idx;
+extern uint32_t single_sa;
 
 extern volatile bool force_quit;
 
@@ -145,6 +146,15 @@ extern bool per_port_pool;
 extern uint32_t mtu_size;
 extern uint32_t frag_tbl_sz;
 
+#define SS_F		(1U << 0)	/* Single SA mode */
+#define INL_PR_F	(1U << 1)	/* Inline Protocol */
+#define INL_CR_F	(1U << 2)	/* Inline Crypto */
+#define LA_PR_F		(1U << 3)	/* Lookaside Protocol */
+#define LA_ANY_F	(1U << 4)	/* Lookaside Any */
+#define MAX_F		(LA_ANY_F << 1)
+
+extern uint16_t wrkr_flags;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 8639426..2b96951 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -17,6 +17,8 @@ struct port_drv_mode_data {
 	struct rte_security_ctx *ctx;
 };
 
+typedef void (*ipsec_worker_fn_t)(void);
+
 static inline enum pkt_type
 process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp)
 {
@@ -1004,6 +1006,380 @@ ipsec_eventmode_worker(struct eh_conf *conf)
 	eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param);
 }
 
+static __rte_always_inline void
+outb_inl_pro_spd_process(struct sp_ctx *sp,
+			 struct sa_ctx *sa_ctx,
+			 struct traffic_type *ip,
+			 struct traffic_type *match,
+			 struct traffic_type *mismatch,
+			 bool match_flag,
+			 struct ipsec_spd_stats *stats)
+{
+	uint32_t prev_sa_idx = UINT32_MAX;
+	struct rte_mbuf *ipsec[MAX_PKT_BURST];
+	struct rte_ipsec_session *ips;
+	uint32_t i, j, j_mis, sa_idx;
+	struct ipsec_sa *sa = NULL;
+	uint32_t ipsec_num = 0;
+	struct rte_mbuf *m;
+	uint64_t satp;
+
+	if (ip->num == 0 || sp == NULL)
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res,
+			 ip->num, DEFAULT_MAX_CATEGORIES);
+
+	j = match->num;
+	j_mis = mismatch->num;
+
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		sa_idx = ip->res[i] - 1;
+
+		if (unlikely(ip->res[i] == DISCARD)) {
+			free_pkts(&m, 1);
+
+			stats->discard++;
+		} else if (unlikely(ip->res[i] == BYPASS)) {
+			match->pkts[j++] = m;
+
+			stats->bypass++;
+		} else {
+			if (prev_sa_idx == UINT32_MAX) {
+				prev_sa_idx = sa_idx;
+				sa = &sa_ctx->sa[sa_idx];
+				ips = ipsec_get_primary_session(sa);
+				satp = rte_ipsec_sa_type(ips->sa);
+			}
+
+			if (sa_idx != prev_sa_idx) {
+				prep_process_group(sa, ipsec, ipsec_num);
+
+				/* Prepare packets for outbound */
+				rte_ipsec_pkt_process(ips, ipsec, ipsec_num);
+
+				/* Copy to current tr or a different tr */
+				if (SATP_OUT_IPV4(satp) == match_flag) {
+					memcpy(&match->pkts[j], ipsec,
+					       ipsec_num * sizeof(void *));
+					j += ipsec_num;
+				} else {
+					memcpy(&mismatch->pkts[j_mis], ipsec,
+					       ipsec_num * sizeof(void *));
+					j_mis += ipsec_num;
+				}
+
+				/* Update to new SA */
+				sa = &sa_ctx->sa[sa_idx];
+				ips = ipsec_get_primary_session(sa);
+				satp = rte_ipsec_sa_type(ips->sa);
+				ipsec_num = 0;
+			}
+
+			ipsec[ipsec_num++] = m;
+			stats->protect++;
+		}
+	}
+
+	if (ipsec_num) {
+		prep_process_group(sa, ipsec, ipsec_num);
+
+		/* Prepare packets for outbound */
+		rte_ipsec_pkt_process(ips, ipsec, ipsec_num);
+
+		/* Copy to current tr or a different tr */
+		if (SATP_OUT_IPV4(satp) == match_flag) {
+			memcpy(&match->pkts[j], ipsec,
+			       ipsec_num * sizeof(void *));
+			j += ipsec_num;
+		} else {
+			memcpy(&mismatch->pkts[j_mis], ipsec,
+			       ipsec_num * sizeof(void *));
+			j_mis += ipsec_num;
+		}
+	}
+	match->num = j;
+	mismatch->num = j_mis;
+}
+
+/* Poll mode worker when all SA's are of type inline protocol */
+void
+ipsec_poll_mode_wrkr_inl_pr(void)
+{
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
+			/ US_PER_S * BURST_TX_DRAIN_US;
+	struct sp_ctx *sp4_in, *sp6_in, *sp4_out, *sp6_out;
+	struct rte_mbuf *pkts[MAX_PKT_BURST];
+	uint64_t prev_tsc, diff_tsc, cur_tsc;
+	struct ipsec_core_statistics *stats;
+	struct rt_ctx *rt4_ctx, *rt6_ctx;
+	struct sa_ctx *sa_in, *sa_out;
+	struct traffic_type ip4, ip6;
+	struct lcore_rx_queue *rxql;
+	struct rte_mbuf **v4, **v6;
+	struct ipsec_traffic trf;
+	struct lcore_conf *qconf;
+	uint16_t v4_num, v6_num;
+	int32_t socket_id;
+	uint32_t lcore_id;
+	int32_t i, nb_rx;
+	uint16_t portid;
+	uint8_t queueid;
+
+	prev_tsc = 0;
+	lcore_id = rte_lcore_id();
+	qconf = &lcore_conf[lcore_id];
+	rxql = qconf->rx_queue_list;
+	socket_id = rte_lcore_to_socket_id(lcore_id);
+	stats = &core_statistics[lcore_id];
+
+	rt4_ctx = socket_ctx[socket_id].rt_ip4;
+	rt6_ctx = socket_ctx[socket_id].rt_ip6;
+
+	sp4_in = socket_ctx[socket_id].sp_ip4_in;
+	sp6_in = socket_ctx[socket_id].sp_ip6_in;
+	sa_in = socket_ctx[socket_id].sa_in;
+
+	sp4_out = socket_ctx[socket_id].sp_ip4_out;
+	sp6_out = socket_ctx[socket_id].sp_ip6_out;
+	sa_out = socket_ctx[socket_id].sa_out;
+
+	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
+
+	if (qconf->nb_rx_queue == 0) {
+		RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n",
+			lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id);
+
+	for (i = 0; i < qconf->nb_rx_queue; i++) {
+		portid = rxql[i].port_id;
+		queueid = rxql[i].queue_id;
+		RTE_LOG(INFO, IPSEC,
+			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			lcore_id, portid, queueid);
+	}
+
+	while (!force_quit) {
+		cur_tsc = rte_rdtsc();
+
+		/* TX queue buffer drain */
+		diff_tsc = cur_tsc - prev_tsc;
+
+		if (unlikely(diff_tsc > drain_tsc)) {
+			drain_tx_buffers(qconf);
+			prev_tsc = cur_tsc;
+		}
+
+		for (i = 0; i < qconf->nb_rx_queue; ++i) {
+			/* Read packets from RX queues */
+			portid = rxql[i].port_id;
+			queueid = rxql[i].queue_id;
+			nb_rx = rte_eth_rx_burst(portid, queueid,
+						 pkts, MAX_PKT_BURST);
+
+			if (nb_rx <= 0)
+				continue;
+
+			core_stats_update_rx(nb_rx);
+
+			prepare_traffic(rxql[i].sec_ctx, pkts, &trf, nb_rx);
+
+			/* Drop any IPsec traffic */
+			free_pkts(trf.ipsec.pkts, trf.ipsec.num);
+
+			if (is_unprotected_port(portid)) {
+				inbound_sp_sa(sp4_in, sa_in, &trf.ip4,
+					      trf.ip4.num,
+					      &stats->inbound.spd4);
+
+				inbound_sp_sa(sp6_in, sa_in, &trf.ip6,
+					      trf.ip6.num,
+					      &stats->inbound.spd6);
+
+				v4 = trf.ip4.pkts;
+				v4_num = trf.ip4.num;
+				v6 = trf.ip6.pkts;
+				v6_num = trf.ip6.num;
+			} else {
+				ip4.num = 0;
+				ip6.num = 0;
+
+				outb_inl_pro_spd_process(sp4_out, sa_out,
+							 &trf.ip4, &ip4, &ip6,
+							 true,
+							 &stats->outbound.spd4);
+
+				outb_inl_pro_spd_process(sp6_out, sa_out,
+							 &trf.ip6, &ip6, &ip4,
+							 false,
+							 &stats->outbound.spd6);
+
+				v4 = ip4.pkts;
+				v4_num = ip4.num;
+				v6 = ip6.pkts;
+				v6_num = ip6.num;
+			}
+
+			route4_pkts(rt4_ctx, v4, v4_num, 0, false);
+			route6_pkts(rt6_ctx, v6, v6_num);
+		}
+	}
+}
+
+/* Poll mode worker when all SA's are of type inline protocol
+ * and single sa mode is enabled.
+ */
+void
+ipsec_poll_mode_wrkr_inl_pr_ss(void)
+{
+	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
+			/ US_PER_S * BURST_TX_DRAIN_US;
+	struct rte_mbuf *pkts[MAX_PKT_BURST], *pkt;
+	uint64_t prev_tsc, diff_tsc, cur_tsc;
+	struct rte_ipsec_session *ips;
+	struct lcore_rx_queue *rxql;
+	struct lcore_conf *qconf;
+	struct ipsec_traffic trf;
+	struct sa_ctx *sa_out;
+	uint32_t i, nb_rx, j;
+	struct ipsec_sa *sa;
+	int32_t socket_id;
+	uint32_t lcore_id;
+	uint16_t portid;
+	uint8_t queueid;
+
+	prev_tsc = 0;
+	lcore_id = rte_lcore_id();
+	qconf = &lcore_conf[lcore_id];
+	rxql = qconf->rx_queue_list;
+	socket_id = rte_lcore_to_socket_id(lcore_id);
+
+	/* Get SA info */
+	sa_out = socket_ctx[socket_id].sa_out;
+	sa = &sa_out->sa[single_sa_idx];
+	ips = ipsec_get_primary_session(sa);
+
+	qconf->frag.pool_indir = socket_ctx[socket_id].mbuf_pool_indir;
+
+	if (qconf->nb_rx_queue == 0) {
+		RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n",
+			lcore_id);
+		return;
+	}
+
+	RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id);
+
+	for (i = 0; i < qconf->nb_rx_queue; i++) {
+		portid = rxql[i].port_id;
+		queueid = rxql[i].queue_id;
+		RTE_LOG(INFO, IPSEC,
+			" -- lcoreid=%u portid=%u rxqueueid=%hhu\n",
+			lcore_id, portid, queueid);
+	}
+
+	while (!force_quit) {
+		cur_tsc = rte_rdtsc();
+
+		/* TX queue buffer drain */
+		diff_tsc = cur_tsc - prev_tsc;
+
+		if (unlikely(diff_tsc > drain_tsc)) {
+			drain_tx_buffers(qconf);
+			prev_tsc = cur_tsc;
+		}
+
+		for (i = 0; i < qconf->nb_rx_queue; ++i) {
+			/* Read packets from RX queues */
+			portid = rxql[i].port_id;
+			queueid = rxql[i].queue_id;
+			nb_rx = rte_eth_rx_burst(portid, queueid,
+						 pkts, MAX_PKT_BURST);
+
+			if (nb_rx <= 0)
+				continue;
+
+			core_stats_update_rx(nb_rx);
+
+			if (is_unprotected_port(portid)) {
+				/* Nothing much to do for inbound inline
+				 * decrypted traffic.
+				 */
+				for (j = 0; j < nb_rx; j++) {
+					uint32_t ptype, proto;
+
+					pkt = pkts[j];
+					ptype = pkt->packet_type &
+						RTE_PTYPE_L3_MASK;
+					if (ptype == RTE_PTYPE_L3_IPV4)
+						proto = IPPROTO_IP;
+					else
+						proto = IPPROTO_IPV6;
+
+					send_single_packet(pkt, portid, proto);
+				}
+
+				continue;
+			}
+
+			/* Prepare packets for outbound */
+			prepare_traffic(rxql[i].sec_ctx, pkts, &trf, nb_rx);
+
+			/* Drop any IPsec traffic */
+			free_pkts(trf.ipsec.pkts, trf.ipsec.num);
+
+			rte_ipsec_pkt_process(ips, trf.ip4.pkts,
+					      trf.ip4.num);
+			rte_ipsec_pkt_process(ips, trf.ip6.pkts,
+					      trf.ip6.num);
+			portid = sa->portid;
+
+			/* Send v4 pkts out */
+			for (j = 0; j < trf.ip4.num; j++) {
+				pkt = trf.ip4.pkts[j];
+
+				rte_pktmbuf_prepend(pkt, RTE_ETHER_HDR_LEN);
+				pkt->l2_len = RTE_ETHER_HDR_LEN;
+				send_single_packet(pkt, portid, IPPROTO_IP);
+			}
+
+			/* Send v6 pkts out */
+			for (j = 0; j < trf.ip6.num; j++) {
+				pkt = trf.ip6.pkts[j];
+
+				rte_pktmbuf_prepend(pkt, RTE_ETHER_HDR_LEN);
+				pkt->l2_len = RTE_ETHER_HDR_LEN;
+				send_single_packet(pkt, portid, IPPROTO_IPV6);
+			}
+		}
+	}
+}
+
+static void
+ipsec_poll_mode_wrkr_launch(void)
+{
+	static ipsec_worker_fn_t poll_mode_wrkrs[MAX_F] = {
+		[INL_PR_F] = ipsec_poll_mode_wrkr_inl_pr,
+		[INL_PR_F | SS_F] = ipsec_poll_mode_wrkr_inl_pr_ss,
+	};
+	ipsec_worker_fn_t fn;
+
+	if (!app_sa_prm.enable) {
+		fn = ipsec_poll_mode_worker;
+	} else {
+		fn = poll_mode_wrkrs[wrkr_flags];
+
+		/* Always default to all mode worker */
+		if (!fn)
+			fn = ipsec_poll_mode_worker;
+	}
+
+	/* Launch worker */
+	(*fn)();
+}
+
 int ipsec_launch_one_lcore(void *args)
 {
 	struct eh_conf *conf;
@@ -1012,7 +1388,7 @@ int ipsec_launch_one_lcore(void *args)
 
 	if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) {
 		/* Run in poll mode */
-		ipsec_poll_mode_worker();
+		ipsec_poll_mode_wrkr_launch();
 	} else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) {
 		/* Run in event mode */
 		ipsec_eventmode_worker(conf);
diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h
index b183248..a040d94 100644
--- a/examples/ipsec-secgw/ipsec_worker.h
+++ b/examples/ipsec-secgw/ipsec_worker.h
@@ -13,6 +13,8 @@
 /* Configure how many packets ahead to prefetch, when reading packets */
 #define PREFETCH_OFFSET	3
 
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+
 enum pkt_type {
 	PKT_TYPE_PLAIN_IPV4 = 1,
 	PKT_TYPE_IPSEC_IPV4,
@@ -42,6 +44,8 @@ struct lcore_conf_ev_tx_int_port_wrkr {
 } __rte_cache_aligned;
 
 void ipsec_poll_mode_worker(void);
+void ipsec_poll_mode_wrkr_inl_pr(void);
+void ipsec_poll_mode_wrkr_inl_pr_ss(void);
 
 int ipsec_launch_one_lcore(void *args);
 
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 36d890f..db3d6bb 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -936,6 +936,15 @@ parse_sa_tokens(char **tokens, uint32_t n_tokens,
 			ips->type = RTE_SECURITY_ACTION_TYPE_NONE;
 		}
 
+		if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+			wrkr_flags |= INL_CR_F;
+		else if (ips->type == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+			wrkr_flags |= INL_PR_F;
+		else if (ips->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
+			wrkr_flags |= LA_PR_F;
+		else
+			wrkr_flags |= LA_ANY_F;
+
 		nb_crypto_sessions++;
 		*ri = *ri + 1;
 	}
-- 
2.8.4