From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anoob Joseph
To: Akhil Goyal, Radu Nicolau
Cc: Anoob Joseph, Narayana Prasad, Konstantin Ananyev
Date: Wed, 6 May 2020 18:17:33 +0530
Message-ID: <1588769253-10405-1-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1587647245-10524-1-git-send-email-anoobj@marvell.com>
References: <1587647245-10524-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH v3] examples/ipsec-secgw: add per core packet stats
List-Id: DPDK patches and discussions

Add per-core packet handling stats to analyze the traffic distribution
when multiple cores are engaged. Since aggregating the packet stats
across cores would affect performance, the feature is kept disabled by
default behind a compile-time flag.

Signed-off-by: Anoob Joseph
---
v3:
* Added wrapper functions for updating rx, tx & dropped counts
* Updated free_pkts() to have stats updated internally
* Introduced a similar free_pkt() function which updates stats and
  frees one packet
* Moved all inline functions and macros to ipsec-secgw.h
* Made the STATS_INTERVAL macro control the interval of the stats
  update; STATS_INTERVAL = 0 disables the feature
v2:
* Added lookup failure cases to drop count

 examples/ipsec-secgw/ipsec-secgw.c   | 113 ++++++++++++++++++++++++++++-------
 examples/ipsec-secgw/ipsec-secgw.h   |  68 +++++++++++++++++++++
 examples/ipsec-secgw/ipsec.c         |  20 +++----
 examples/ipsec-secgw/ipsec_process.c |  11 +---
 4 files changed, 171 insertions(+), 41 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 6d02341..e97a4f8 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -288,6 +288,59 @@ adjust_ipv6_pktlen(struct rte_mbuf *m, const struct rte_ipv6_hdr *iph,
 	}
 }
 
+#if (STATS_INTERVAL > 0)
+
+/* Print out statistics on packet distribution */
+static void
+print_stats(void)
+{
+	uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+	unsigned int coreid;
+	float burst_percent;
+
+	total_packets_dropped = 0;
+	total_packets_tx = 0;
+	total_packets_rx = 0;
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s", clr);
+
+	printf("\nCore statistics ====================================");
+
+	for (coreid = 0; coreid < RTE_MAX_LCORE; coreid++) {
+		/* skip disabled cores */
+		if (rte_lcore_is_enabled(coreid) == 0)
+			continue;
+		burst_percent = (float)(core_statistics[coreid].burst_rx * 100)/
+					core_statistics[coreid].rx;
+		printf("\nStatistics for core %u ------------------------------"
+			   "\nPackets received: %20"PRIu64
+			   "\nPackets sent: %24"PRIu64
+			   "\nPackets dropped: %21"PRIu64
+			   "\nBurst percent: %23.2f",
+			   coreid,
+			   core_statistics[coreid].rx,
+			   core_statistics[coreid].tx,
+			   core_statistics[coreid].dropped,
+			   burst_percent);
+
+		total_packets_dropped += core_statistics[coreid].dropped;
+		total_packets_tx += core_statistics[coreid].tx;
+		total_packets_rx += core_statistics[coreid].rx;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %14"PRIu64
+		   "\nTotal packets sent: %18"PRIu64
+		   "\nTotal packets dropped: %15"PRIu64,
+		   total_packets_rx,
+		   total_packets_tx,
+		   total_packets_dropped);
+	printf("\n====================================================\n");
+}
+#endif /* STATS_INTERVAL */
+
 static inline void
 prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 {
@@ -333,7 +386,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 
 		/* drop packet when IPv6 header exceeds first segment length */
 		if (unlikely(l3len > pkt->data_len)) {
-			rte_pktmbuf_free(pkt);
+			free_pkt(pkt);
 			return;
 		}
 
@@ -350,7 +403,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 		/* Unknown/Unsupported type, drop the packet */
 		RTE_LOG(ERR, IPSEC, "Unsupported packet type 0x%x\n",
 			rte_be_to_cpu_16(eth->ether_type));
-		rte_pktmbuf_free(pkt);
+		free_pkt(pkt);
 		return;
 	}
 
@@ -477,9 +530,12 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint16_t port)
 	prepare_tx_burst(m_table, n, port, qconf);
 
 	ret = rte_eth_tx_burst(port, queueid, m_table, n);
+
+	core_stats_update_tx(ret);
+
 	if (unlikely(ret < n)) {
 		do {
-			rte_pktmbuf_free(m_table[ret]);
+			free_pkt(m_table[ret]);
 		} while (++ret < n);
 	}
 
@@ -525,7 +581,7 @@ send_fragment_packet(struct lcore_conf *qconf, struct rte_mbuf *m,
 			"error code: %d\n",
 			__func__, m->pkt_len, rte_errno);
-		rte_pktmbuf_free(m);
+		free_pkt(m);
 	}
 
 	return len;
@@ -550,7 +606,7 @@ send_single_packet(struct rte_mbuf *m, uint16_t port, uint8_t proto)
 	} else if (frag_tbl_sz > 0)
 		len = send_fragment_packet(qconf, m, port, proto);
 	else
-		rte_pktmbuf_free(m);
+		free_pkt(m);
 
 	/* enough pkts to be sent */
 	if (unlikely(len == MAX_PKT_BURST)) {
@@ -584,19 +640,19 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 			continue;
 		}
 		if (res == DISCARD) {
-			rte_pktmbuf_free(m);
+			free_pkt(m);
 			continue;
 		}
 
 		/* Only check SPI match for processed IPSec packets */
 		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
-			rte_pktmbuf_free(m);
+			free_pkt(m);
 			continue;
 		}
 
 		sa_idx = res - 1;
 		if (!inbound_sa_check(sa, m, sa_idx)) {
-			rte_pktmbuf_free(m);
+			free_pkt(m);
 			continue;
 		}
 		ip->pkts[j++] = m;
@@ -631,7 +687,7 @@ split46_traffic(struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t num)
 				offsetof(struct ip6_hdr, ip6_nxt));
 			n6++;
 		} else
-			rte_pktmbuf_free(m);
+			free_pkt(m);
 	}
 
 	trf->ip4.num = n4;
@@ -683,7 +739,7 @@ outbound_sp(struct sp_ctx *sp, struct traffic_type *ip,
 		m = ip->pkts[i];
 		sa_idx = ip->res[i] - 1;
 		if (ip->res[i] == DISCARD)
-			rte_pktmbuf_free(m);
+			free_pkt(m);
 		else if (ip->res[i] == BYPASS)
 			ip->pkts[j++] = m;
 		else {
@@ -702,8 +758,7 @@ process_pkts_outbound(struct ipsec_ctx *ipsec_ctx,
 	uint16_t idx, nb_pkts_out, i;
 
 	/* Drop any IPsec traffic from protected ports */
-	for (i = 0; i < traffic->ipsec.num; i++)
-		rte_pktmbuf_free(traffic->ipsec.pkts[i]);
+	free_pkts(traffic->ipsec.pkts, traffic->ipsec.num);
 
 	traffic->ipsec.num = 0;
 
@@ -743,14 +798,12 @@ process_pkts_inbound_nosp(struct ipsec_ctx *ipsec_ctx,
 	uint32_t nb_pkts_in, i, idx;
 
 	/* Drop any IPv4 traffic from unprotected ports */
-	for (i = 0; i < traffic->ip4.num; i++)
-		rte_pktmbuf_free(traffic->ip4.pkts[i]);
+	free_pkts(traffic->ip4.pkts, traffic->ip4.num);
 
 	traffic->ip4.num = 0;
 
 	/* Drop any IPv6 traffic from unprotected ports */
-	for (i = 0; i < traffic->ip6.num; i++)
-		rte_pktmbuf_free(traffic->ip6.pkts[i]);
+	free_pkts(traffic->ip6.pkts, traffic->ip6.num);
 
 	traffic->ip6.num = 0;
 
@@ -786,8 +839,7 @@ process_pkts_outbound_nosp(struct ipsec_ctx *ipsec_ctx,
 	struct ip *ip;
 
 	/* Drop any IPsec traffic from protected ports */
-	for (i = 0; i < traffic->ipsec.num; i++)
-		rte_pktmbuf_free(traffic->ipsec.pkts[i]);
+	free_pkts(traffic->ipsec.pkts, traffic->ipsec.num);
 
 	n = 0;
 
@@ -901,7 +953,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 		}
 
 		if ((pkt_hop & RTE_LPM_LOOKUP_SUCCESS) == 0) {
-			rte_pktmbuf_free(pkts[i]);
+			free_pkt(pkts[i]);
 			continue;
 		}
 		send_single_packet(pkts[i], pkt_hop & 0xff, IPPROTO_IP);
@@ -953,7 +1005,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 		}
 
 		if (pkt_hop == -1) {
-			rte_pktmbuf_free(pkts[i]);
+			free_pkt(pkts[i]);
 			continue;
 		}
 		send_single_packet(pkts[i], pkt_hop & 0xff, IPPROTO_IPV6);
@@ -1099,6 +1151,10 @@ ipsec_poll_mode_worker(void)
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
 			/ US_PER_S * BURST_TX_DRAIN_US;
 	struct lcore_rx_queue *rxql;
+#if (STATS_INTERVAL > 0)
+	const uint64_t timer_period = STATS_INTERVAL * rte_get_timer_hz();
+	uint64_t timer_tsc = 0;
+#endif /* STATS_INTERVAL */
 
 	prev_tsc = 0;
 	lcore_id = rte_lcore_id();
@@ -1159,6 +1215,19 @@ ipsec_poll_mode_worker(void)
 			drain_tx_buffers(qconf);
 			drain_crypto_buffers(qconf);
 			prev_tsc = cur_tsc;
+#if (STATS_INTERVAL > 0)
+			if (lcore_id == rte_get_master_lcore()) {
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >= timer_period)) {
+					print_stats();
+					/* reset the timer */
+					timer_tsc = 0;
+				}
+			}
+#endif /* STATS_INTERVAL */
 		}
 
 		for (i = 0; i < qconf->nb_rx_queue; ++i) {
@@ -1169,8 +1238,10 @@ ipsec_poll_mode_worker(void)
 			nb_rx = rte_eth_rx_burst(portid, queueid,
 					pkts, MAX_PKT_BURST);
 
-			if (nb_rx > 0)
+			if (nb_rx > 0) {
+				core_stats_update_rx(nb_rx);
 				process_pkts(qconf, pkts, nb_rx, portid);
+			}
 
 			/* dequeue and process completed crypto-ops */
 			if (is_unprotected_port(portid))
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 4b53cb5..5b3561f 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -6,6 +6,8 @@
 
 #include
 
+#define STATS_INTERVAL 0
+
 #define NB_SOCKETS 4
 
 #define MAX_PKT_BURST 32
@@ -69,6 +71,17 @@ struct ethaddr_info {
 	uint64_t src, dst;
 };
 
+#if (STATS_INTERVAL > 0)
+struct ipsec_core_statistics {
+	uint64_t tx;
+	uint64_t rx;
+	uint64_t dropped;
+	uint64_t burst_rx;
+} __rte_cache_aligned;
+
+struct ipsec_core_statistics core_statistics[RTE_MAX_LCORE];
+#endif /* STATS_INTERVAL */
+
 extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS];
 
 /* Port mask to identify the unprotected ports */
@@ -85,4 +98,59 @@ is_unprotected_port(uint16_t port_id)
 	return unprotected_port_mask & (1 << port_id);
 }
 
+static inline void
+core_stats_update_rx(int n)
+{
+#if (STATS_INTERVAL > 0)
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].rx += n;
+	if (n == MAX_PKT_BURST)
+		core_statistics[lcore_id].burst_rx += n;
+#else
+	RTE_SET_USED(n);
+#endif /* STATS_INTERVAL */
+}
+
+static inline void
+core_stats_update_tx(int n)
+{
+#if (STATS_INTERVAL > 0)
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].tx += n;
+#else
+	RTE_SET_USED(n);
+#endif /* STATS_INTERVAL */
+}
+
+static inline void
+core_stats_update_drop(int n)
+{
+#if (STATS_INTERVAL > 0)
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].dropped += n;
+#else
+	RTE_SET_USED(n);
+#endif /* STATS_INTERVAL */
+}
+
+/* helper routine to free bulk of packets */
+static inline void
+free_pkts(struct rte_mbuf *mb[], uint32_t n)
+{
+	uint32_t i;
+
+	for (i = 0; i != n; i++)
+		rte_pktmbuf_free(mb[i]);
+
+	core_stats_update_drop(n);
+}
+
+/* helper routine to free single packet */
+static inline void
+free_pkt(struct rte_mbuf *mb)
+{
+	rte_pktmbuf_free(mb);
+	core_stats_update_drop(1);
+}
+
 #endif /* _IPSEC_SECGW_H_ */
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index bf88d80..351f1f1 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -500,7 +500,7 @@ enqueue_cop_burst(struct cdev_qp *cqp)
 			cqp->id, cqp->qp, ret, len);
 		/* drop packets that we fail to enqueue */
 		for (i = ret; i < len; i++)
-			rte_pktmbuf_free(cqp->buf[i]->sym->m_src);
+			free_pkt(cqp->buf[i]->sym->m_src);
 	}
 	cqp->in_flight += ret;
 	cqp->len = 0;
@@ -528,7 +528,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 
 	for (i = 0; i < nb_pkts; i++) {
 		if (unlikely(sas[i] == NULL)) {
-			rte_pktmbuf_free(pkts[i]);
+			free_pkt(pkts[i]);
 			continue;
 		}
 
@@ -549,7 +549,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 			if ((unlikely(ips->security.ses == NULL)) &&
 				create_lookaside_session(ipsec_ctx, sa, ips)) {
-				rte_pktmbuf_free(pkts[i]);
+				free_pkt(pkts[i]);
 				continue;
 			}
 
@@ -563,7 +563,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 		case RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO:
 			RTE_LOG(ERR, IPSEC, "CPU crypto is not supported by the"
 					" legacy mode.");
-			rte_pktmbuf_free(pkts[i]);
+			free_pkt(pkts[i]);
 			continue;
 
 		case RTE_SECURITY_ACTION_TYPE_NONE:
@@ -575,7 +575,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 
 			if ((unlikely(ips->crypto.ses == NULL)) &&
 				create_lookaside_session(ipsec_ctx, sa, ips)) {
-				rte_pktmbuf_free(pkts[i]);
+				free_pkt(pkts[i]);
 				continue;
 			}
 
@@ -584,7 +584,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 
 			ret = xform_func(pkts[i], sa, &priv->cop);
 			if (unlikely(ret)) {
-				rte_pktmbuf_free(pkts[i]);
+				free_pkt(pkts[i]);
 				continue;
 			}
 			break;
@@ -608,7 +608,7 @@ ipsec_enqueue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 
 			ret = xform_func(pkts[i], sa, &priv->cop);
 			if (unlikely(ret)) {
-				rte_pktmbuf_free(pkts[i]);
+				free_pkt(pkts[i]);
 				continue;
 			}
 
@@ -643,7 +643,7 @@ ipsec_inline_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 		sa = priv->sa;
 		ret = xform_func(pkt, sa, &priv->cop);
 		if (unlikely(ret)) {
-			rte_pktmbuf_free(pkt);
+			free_pkt(pkt);
 			continue;
 		}
 		pkts[nb_pkts++] = pkt;
@@ -690,13 +690,13 @@ ipsec_dequeue(ipsec_xform_fn xform_func, struct ipsec_ctx *ipsec_ctx,
 				RTE_SECURITY_ACTION_TYPE_NONE) {
 			ret = xform_func(pkt, sa, cops[j]);
 			if (unlikely(ret)) {
-				rte_pktmbuf_free(pkt);
+				free_pkt(pkt);
 				continue;
 			}
 		} else if (ipsec_get_action_type(sa) ==
 				RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL) {
 			if (cops[j]->status) {
-				rte_pktmbuf_free(pkt);
+				free_pkt(pkt);
 				continue;
 			}
 		}
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index bb2f2b8..4748299 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -12,22 +12,13 @@
 #include
 
 #include "ipsec.h"
+#include "ipsec-secgw.h"
 
 #define SATP_OUT_IPV4(t)	\
 	((((t) & RTE_IPSEC_SATP_MODE_MASK) == RTE_IPSEC_SATP_MODE_TRANS && \
 	(((t) & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4)) || \
 	((t) & RTE_IPSEC_SATP_MODE_MASK) == RTE_IPSEC_SATP_MODE_TUNLV4)
 
-/* helper routine to free bulk of packets */
-static inline void
-free_pkts(struct rte_mbuf *mb[], uint32_t n)
-{
-	uint32_t i;
-
-	for (i = 0; i != n; i++)
-		rte_pktmbuf_free(mb[i]);
-}
-
 /* helper routine to free bulk of crypto-ops and related packets */
 static inline void
 free_cops(struct rte_crypto_op *cop[], uint32_t n)
-- 
2.7.4