From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anoob Joseph <anoobj@marvell.com>
To: Akhil Goyal, Radu Nicolau
CC: Anoob Joseph, Narayana Prasad
Date: Tue, 21 Apr 2020 10:20:33 +0530
Message-ID: <1587444633-17996-1-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
Subject: [dpdk-dev] [PATCH] examples/ipsec-secgw: add per core packet stats

Add per-core packet handling stats to analyze the traffic distribution
when multiple cores are engaged. Since aggregating the packet stats
across cores would affect performance, the feature is kept disabled by
default behind a compile-time flag.
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
 examples/ipsec-secgw/ipsec-secgw.c   | 112 +++++++++++++++++++++++++++++++++--
 examples/ipsec-secgw/ipsec-secgw.h   |   2 +
 examples/ipsec-secgw/ipsec.h         |  22 +++++++
 examples/ipsec-secgw/ipsec_process.c |   5 ++
 4 files changed, 137 insertions(+), 4 deletions(-)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 6d02341..eb94187 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -288,6 +288,61 @@ adjust_ipv6_pktlen(struct rte_mbuf *m, const struct rte_ipv6_hdr *iph,
 	}
 }
 
+#ifdef ENABLE_STATS
+static uint64_t timer_period = 10; /* default period is 10 seconds */
+
+/* Print out statistics on packet distribution */
+static void
+print_stats(void)
+{
+	uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+	unsigned int coreid;
+	float burst_percent;
+
+	total_packets_dropped = 0;
+	total_packets_tx = 0;
+	total_packets_rx = 0;
+
+	const char clr[] = { 27, '[', '2', 'J', '\0' };
+	const char topLeft[] = { 27, '[', '1', ';', '1', 'H', '\0' };
+
+	/* Clear screen and move to top left */
+	printf("%s%s", clr, topLeft);
+
+	printf("\nCore statistics ====================================");
+
+	for (coreid = 0; coreid < RTE_MAX_LCORE; coreid++) {
+		/* skip disabled cores */
+		if (rte_lcore_is_enabled(coreid) == 0)
+			continue;
+		burst_percent = (float)(core_statistics[coreid].burst_rx * 100)/
+					core_statistics[coreid].rx;
+		printf("\nStatistics for core %u ------------------------------"
+			   "\nPackets received: %20"PRIu64
+			   "\nPackets sent: %24"PRIu64
+			   "\nPackets dropped: %21"PRIu64
+			   "\nBurst percent: %23.2f",
+			   coreid,
+			   core_statistics[coreid].rx,
+			   core_statistics[coreid].tx,
+			   core_statistics[coreid].dropped,
+			   burst_percent);
+
+		total_packets_dropped += core_statistics[coreid].dropped;
+		total_packets_tx += core_statistics[coreid].tx;
+		total_packets_rx += core_statistics[coreid].rx;
+	}
+	printf("\nAggregate statistics ==============================="
+		   "\nTotal packets received: %14"PRIu64
+		   "\nTotal packets sent: %18"PRIu64
+		   "\nTotal packets dropped: %15"PRIu64,
+		   total_packets_rx,
+		   total_packets_tx,
+		   total_packets_dropped);
+	printf("\n====================================================\n");
+}
+#endif /* ENABLE_STATS */
+
 static inline void
 prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 {
@@ -351,6 +406,7 @@ prepare_one_packet(struct rte_mbuf *pkt, struct ipsec_traffic *t)
 		RTE_LOG(ERR, IPSEC, "Unsupported packet type 0x%x\n",
 			rte_be_to_cpu_16(eth->ether_type));
 		rte_pktmbuf_free(pkt);
+		core_stats_update_drop(1);
 		return;
 	}
 
@@ -471,6 +527,11 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint16_t port)
 	int32_t ret;
 	uint16_t queueid;
 
+#ifdef ENABLE_STATS
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].tx += n;
+#endif /* ENABLE_STATS */
+
 	queueid = qconf->tx_queue_id[port];
 	m_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;
 
@@ -478,6 +539,9 @@ send_burst(struct lcore_conf *qconf, uint16_t n, uint16_t port)
 
 	ret = rte_eth_tx_burst(port, queueid, m_table, n);
 	if (unlikely(ret < n)) {
+#ifdef ENABLE_STATS
+		core_statistics[lcore_id].dropped += n - ret;
+#endif /* ENABLE_STATS */
 		do {
 			rte_pktmbuf_free(m_table[ret]);
 		} while (++ret < n);
@@ -584,18 +648,21 @@ inbound_sp_sa(struct sp_ctx *sp, struct sa_ctx *sa, struct traffic_type *ip,
 			continue;
 		}
 		if (res == DISCARD) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(m);
 			continue;
 		}
 
 		/* Only check SPI match for processed IPSec packets */
 		if (i < lim && ((m->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(m);
 			continue;
 		}
 
 		sa_idx = res - 1;
 		if (!inbound_sa_check(sa, m, sa_idx)) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(m);
 			continue;
 		}
@@ -630,8 +697,10 @@ split46_traffic(struct ipsec_traffic *trf, struct rte_mbuf *mb[], uint32_t num)
 					uint8_t *,
 					offsetof(struct ip6_hdr, ip6_nxt));
 			n6++;
-		} else
+		} else {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(m);
+		}
 	}
 
 	trf->ip4.num = n4;
@@ -682,11 +751,12 @@ outbound_sp(struct sp_ctx *sp, struct traffic_type *ip,
 	for (i = 0; i < ip->num; i++) {
 		m = ip->pkts[i];
 		sa_idx = ip->res[i] - 1;
-		if (ip->res[i] == DISCARD)
+		if (ip->res[i] == DISCARD) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(m);
-		else if (ip->res[i] == BYPASS)
+		} else if (ip->res[i] == BYPASS) {
 			ip->pkts[j++] = m;
-		else {
+		} else {
 			ipsec->res[ipsec->num] = sa_idx;
 			ipsec->pkts[ipsec->num++] = m;
 		}
@@ -705,6 +775,8 @@ process_pkts_outbound(struct ipsec_ctx *ipsec_ctx,
 		for (i = 0; i < traffic->ipsec.num; i++)
 			rte_pktmbuf_free(traffic->ipsec.pkts[i]);
 
+		core_stats_update_drop(traffic->ipsec.num);
+
 		traffic->ipsec.num = 0;
 
 	outbound_sp(ipsec_ctx->sp4_ctx, &traffic->ip4, &traffic->ipsec);
@@ -745,12 +817,14 @@ process_pkts_inbound_nosp(struct ipsec_ctx *ipsec_ctx,
 	/* Drop any IPv4 traffic from unprotected ports */
 	for (i = 0; i < traffic->ip4.num; i++)
 		rte_pktmbuf_free(traffic->ip4.pkts[i]);
+	core_stats_update_drop(traffic->ip4.num);
 
 	traffic->ip4.num = 0;
 
 	/* Drop any IPv6 traffic from unprotected ports */
 	for (i = 0; i < traffic->ip6.num; i++)
 		rte_pktmbuf_free(traffic->ip6.pkts[i]);
+	core_stats_update_drop(traffic->ip6.num);
 
 	traffic->ip6.num = 0;
 
@@ -788,6 +862,7 @@ process_pkts_outbound_nosp(struct ipsec_ctx *ipsec_ctx,
 	/* Drop any IPsec traffic from protected ports */
 	for (i = 0; i < traffic->ipsec.num; i++)
 		rte_pktmbuf_free(traffic->ipsec.pkts[i]);
+	core_stats_update_drop(traffic->ipsec.num);
 
 	n = 0;
 
@@ -901,6 +976,7 @@ route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 		}
 
 		if ((pkt_hop & RTE_LPM_LOOKUP_SUCCESS) == 0) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(pkts[i]);
 			continue;
 		}
@@ -953,6 +1029,7 @@ route6_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint8_t nb_pkts)
 		}
 
 		if (pkt_hop == -1) {
+			core_stats_update_drop(1);
 			rte_pktmbuf_free(pkts[i]);
 			continue;
 		}
@@ -1099,6 +1176,9 @@ ipsec_poll_mode_worker(void)
 	const uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1)
 			/ US_PER_S * BURST_TX_DRAIN_US;
 	struct lcore_rx_queue *rxql;
+#ifdef ENABLE_STATS
+	uint64_t timer_tsc = 0;
+#endif /* ENABLE_STATS */
 
 	prev_tsc = 0;
 	lcore_id = rte_lcore_id();
@@ -1159,6 +1239,19 @@ ipsec_poll_mode_worker(void)
 			drain_tx_buffers(qconf);
 			drain_crypto_buffers(qconf);
 			prev_tsc = cur_tsc;
+#ifdef ENABLE_STATS
+			if (lcore_id == rte_get_master_lcore()) {
+				/* advance the timer */
+				timer_tsc += diff_tsc;
+
+				/* if timer has reached its timeout */
+				if (unlikely(timer_tsc >= timer_period)) {
+					print_stats();
+					/* reset the timer */
+					timer_tsc = 0;
+				}
+			}
+#endif /* ENABLE_STATS */
 		}
 
 		for (i = 0; i < qconf->nb_rx_queue; ++i) {
@@ -1169,6 +1262,12 @@ ipsec_poll_mode_worker(void)
 			nb_rx = rte_eth_rx_burst(portid, queueid,
 					pkts, MAX_PKT_BURST);
 
+#ifdef ENABLE_STATS
+			core_statistics[lcore_id].rx += nb_rx;
+			if (nb_rx == MAX_PKT_BURST)
+				core_statistics[lcore_id].burst_rx += nb_rx;
+#endif /* ENABLE_STATS */
+
 			if (nb_rx > 0)
 				process_pkts(qconf, pkts, nb_rx, portid);
 
@@ -2747,6 +2846,11 @@ main(int32_t argc, char **argv)
 	signal(SIGINT, signal_handler);
 	signal(SIGTERM, signal_handler);
 
+#ifdef ENABLE_STATS
+	/* convert to number of cycles */
+	timer_period *= rte_get_timer_hz();
+#endif /* ENABLE_STATS */
+
 	/* initialize event helper configuration */
 	eh_conf = eh_conf_init();
 	if (eh_conf == NULL)
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 4b53cb5..d886a35 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -6,6 +6,8 @@
 
 #include 
 
+//#define ENABLE_STATS
+
 #define NB_SOCKETS 4
 
 #define MAX_PKT_BURST 32
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 1e642d1..8519eab 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -46,6 +46,17 @@
 
 #define IP6_VERSION (6)
 
+#ifdef ENABLE_STATS
+struct ipsec_core_statistics {
+	uint64_t tx;
+	uint64_t rx;
+	uint64_t dropped;
+	uint64_t burst_rx;
+} __rte_cache_aligned;
+
+struct ipsec_core_statistics core_statistics[RTE_MAX_LCORE];
+#endif /* ENABLE_STATS */
+
 struct rte_crypto_xform;
 struct ipsec_xform;
 struct rte_mbuf;
@@ -416,4 +427,15 @@ check_flow_params(uint16_t fdir_portid, uint8_t fdir_qid);
 int
 create_ipsec_esp_flow(struct ipsec_sa *sa);
 
+static inline void
+core_stats_update_drop(int n)
+{
+#ifdef ENABLE_STATS
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].dropped += n;
+#else
+	RTE_SET_USED(n);
+#endif /* ENABLE_STATS */
+}
+
 #endif /* __IPSEC_H__ */
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index bb2f2b8..05cb3ad 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -24,6 +24,11 @@ free_pkts(struct rte_mbuf *mb[], uint32_t n)
 {
 	uint32_t i;
 
+#ifdef ENABLE_STATS
+	int lcore_id = rte_lcore_id();
+	core_statistics[lcore_id].dropped += n;
+#endif /* ENABLE_STATS */
+
 	for (i = 0; i != n; i++)
 		rte_pktmbuf_free(mb[i]);
 }
-- 
2.7.4