From: Nithin Dabilpuram
To: Radu Nicolau, Akhil Goyal
Cc: Nithin Dabilpuram
Subject: [PATCH v3 5/7] examples/ipsec-secgw: get security context from lcore conf
Date: Thu, 28 Apr 2022 20:34:57 +0530
Message-ID: <20220428150459.23950-5-ndabilpuram@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20220428150459.23950-1-ndabilpuram@marvell.com>
References: <20220322175902.363520-1-ndabilpuram@marvell.com>
 <20220428150459.23950-1-ndabilpuram@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

Store the security context pointer in the lcore Rx queue config and read
it from there in the fast path, which improves performance. Currently,
rte_eth_dev_get_sec_ctx(), which is meant to be a control-path API, is
called on a per-packet basis, and every call to that API checks the
ethdev port status.
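
For illustration (not part of the patch): the change hoists a control-path
lookup out of the per-packet path. Below is a minimal sketch of the pattern;
handle_pkt_before() and handle_pkt_after() are hypothetical helpers, not
code from this series.

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_security.h>

/* Before: the control-path API runs for every received packet, and
 * each call re-validates the ethdev port.
 */
static void
handle_pkt_before(struct rte_mbuf *pkt)
{
	struct rte_security_ctx *ctx = (struct rte_security_ctx *)
		rte_eth_dev_get_sec_ctx(pkt->port);

	/* ... retrieve the SA from the session userdata via ctx ... */
	RTE_SET_USED(ctx);
}

/* After: the context is resolved once at configuration time, cached in
 * the lcore Rx queue config, and passed down by the Rx loop.
 */
static void
handle_pkt_after(struct rte_security_ctx *ctx, struct rte_mbuf *pkt)
{
	/* ctx is NULL when the port has no security context or when the
	 * security dynfield is not registered, so the fast path checks
	 * the cached pointer instead of calling control-path APIs per
	 * packet.
	 */
	if (ctx != NULL && (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
		/* ... retrieve the SA from the session userdata ... */
	}
}

Note the design choice this implies: the NULL check on the cached pointer
replaces both the per-packet rte_eth_dev_get_sec_ctx() call and the
per-packet rte_security_dynfield_is_registered() check.
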
Signed-off-by: Nithin Dabilpuram
---
 examples/ipsec-secgw/ipsec-secgw.c  | 22 ++++++++++++++++++---
 examples/ipsec-secgw/ipsec.h        |  1 +
 examples/ipsec-secgw/ipsec_worker.h | 39 +++++++++++++++++--------------------
 3 files changed, 38 insertions(+), 24 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 88984a6..14b9c06 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -544,11 +544,11 @@ process_pkts_outbound_nosp(struct ipsec_ctx *ipsec_ctx,
 
 static inline void
 process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts,
-	     uint8_t nb_pkts, uint16_t portid)
+	     uint8_t nb_pkts, uint16_t portid, struct rte_security_ctx *ctx)
 {
 	struct ipsec_traffic traffic;
 
-	prepare_traffic(pkts, &traffic, nb_pkts);
+	prepare_traffic(ctx, pkts, &traffic, nb_pkts);
 
 	if (unlikely(single_sa)) {
 		if (is_unprotected_port(portid))
@@ -740,7 +740,8 @@ ipsec_poll_mode_worker(void)
 
 			if (nb_rx > 0) {
 				core_stats_update_rx(nb_rx);
-				process_pkts(qconf, pkts, nb_rx, portid);
+				process_pkts(qconf, pkts, nb_rx, portid,
+					     rxql->sec_ctx);
 			}
 
 			/* dequeue and process completed crypto-ops */
@@ -3060,6 +3061,21 @@ main(int32_t argc, char **argv)
 
 	flow_init();
 
+	/* Get security context if available and only if dynamic field is
+	 * registered for fast path access.
+	 */
+	if (!rte_security_dynfield_is_registered())
+		goto skip_sec_ctx;
+
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		for (i = 0; i < lcore_conf[lcore_id].nb_rx_queue; i++) {
+			portid = lcore_conf[lcore_id].rx_queue_list[i].port_id;
+			lcore_conf[lcore_id].rx_queue_list[i].sec_ctx =
+				rte_eth_dev_get_sec_ctx(portid);
+		}
+	}
+skip_sec_ctx:
+
 	check_all_ports_link_status(enabled_port_mask);
 
 	if (stats_interval > 0)
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 9a4e7ea..ecad262 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -269,6 +269,7 @@ struct cnt_blk {
 struct lcore_rx_queue {
 	uint16_t port_id;
 	uint8_t queue_id;
+	struct rte_security_ctx *sec_ctx;
 } __rte_cache_aligned;
 
 struct buffer {
diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h
index 7397291..b1fc364 100644
--- a/examples/ipsec-secgw/ipsec_worker.h
+++ b/examples/ipsec-secgw/ipsec_worker.h
@@ -88,7 +88,7 @@ prep_process_group(void *sa, struct rte_mbuf *mb[], uint32_t cnt)
 	}
 }
 
-static inline void
+static __rte_always_inline void
 adjust_ipv4_pktlen(struct rte_mbuf *m, const struct rte_ipv4_hdr *iph,
 	uint32_t l2_len)
 {
@@ -101,7 +101,7 @@ adjust_ipv4_pktlen(struct rte_mbuf *m, const struct rte_ipv4_hdr *iph,
 	}
 }
 
-static inline void
+static __rte_always_inline void
 adjust_ipv6_pktlen(struct rte_mbuf *m, const struct rte_ipv6_hdr *iph,
 	uint32_t l2_len)
 {
@@ -114,8 +114,9 @@ adjust_ipv6_pktlen(struct rte_mbuf *m, const struct rte_ipv6_hdr *iph,
 	}
 }
 
-static inline void
-prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
+static __rte_always_inline void
+prepare_one_packet(struct rte_security_ctx *ctx, struct rte_mbuf *pkt,
+		   struct ipsec_traffic *t)
 {
 	uint32_t ptype = pkt->packet_type;
 	const struct rte_ether_hdr *eth;
@@ -203,13 +204,9 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	 * with the security session.
 	 */
-	if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD &&
-			rte_security_dynfield_is_registered()) {
+	if (ctx && pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
 		struct ipsec_sa *sa;
 		struct ipsec_mbuf_metadata *priv;
-		struct rte_security_ctx *ctx = (struct rte_security_ctx *)
-						rte_eth_dev_get_sec_ctx(
-						pkt->port);
 
 		/* Retrieve the userdata registered. Here, the userdata
 		 * registered is the SA pointer.
@@ -230,9 +227,9 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 	}
 }
 
-static inline void
-prepare_traffic(struct rte_mbuf **pkts, struct ipsec_traffic *t,
-		uint16_t nb_pkts)
+static __rte_always_inline void
+prepare_traffic(struct rte_security_ctx *ctx, struct rte_mbuf **pkts,
+		struct ipsec_traffic *t, uint16_t nb_pkts)
 {
 	int32_t i;
 
@@ -243,11 +240,11 @@ prepare_traffic(struct rte_mbuf **pkts, struct ipsec_traffic *t,
 	for (i = 0; i < (nb_pkts - PREFETCH_OFFSET); i++) {
 		rte_prefetch0(rte_pktmbuf_mtod(pkts[i + PREFETCH_OFFSET],
 					void *));
-		prepare_one_packet(pkts[i], t);
+		prepare_one_packet(ctx, pkts[i], t);
 	}
 	/* Process left packets */
 	for (; i < nb_pkts; i++)
-		prepare_one_packet(pkts[i], t);
+		prepare_one_packet(ctx, pkts[i], t);
 }
 
 static inline void
@@ -305,7 +302,7 @@ prepare_tx_burst(struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t port,
 }
 
 /* Send burst of packets on an output interface */
-static inline int32_t
+static __rte_always_inline int32_t
 send_burst(struct lcore_conf *qconf, uint16_t n, uint16_t port)
 {
 	struct rte_mbuf **m_table;
@@ -333,7 +330,7 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint16_t port)
 /*
  * Helper function to fragment and queue for TX one packet.
  */
-static inline uint32_t
+static __rte_always_inline uint32_t
 send_fragment_packet(struct lcore_conf *qconf, struct rte_mbuf *m,
 	uint16_t port, uint8_t proto)
 {
@@ -372,7 +369,7 @@ send_fragment_packet(struct lcore_conf *qconf, struct rte_mbuf *m,
 }
 
 /* Enqueue a single packet, and send burst if queue is filled */
-static inline int32_t
+static __rte_always_inline int32_t
 send_single_packet(struct rte_mbuf *m, uint16_t port, uint8_t proto)
 {
 	uint32_t lcore_id;
@@ -404,7 +401,7 @@ send_single_packet(struct rte_mbuf *m, uint16_t port, uint8_t proto)
 	return 0;
 }
 
-static inline void
+static __rte_always_inline void
 inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 		uint16_t lim, struct ipsec_spd_stats *stats)
 {
@@ -451,7 +448,7 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 	ip->num = j;
 }
 
-static inline int32_t
+static __rte_always_inline int32_t
 get_hop_for_offload_pkt(struct rte_mbuf *pkt, int is_ipv6)
 {
 	struct ipsec_mbuf_metadata *priv;
@@ -531,7 +528,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	}
 }
 
-static inline void
+static __rte_always_inline void
 route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 {
 	int32_t hop[MAX_PKT_BURST * 2];
@@ -585,7 +582,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 	}
 }
 
-static inline void
+static __rte_always_inline void
 drain_tx_buffers(struct lcore_conf *qconf)
 {
 	struct buffer *buf;
-- 
2.8.4