From: Nitin Katiyar <nitin.katiyar@ericsson.com>
To: dev@dpdk.org
CC: Nitin Katiyar, Anju Thomas
Date: Wed, 7 Aug 2019 19:43:56 +0530
Message-ID: <1565187236-22545-1-git-send-email-nitin.katiyar@ericsson.com>
Subject: [dpdk-dev] [PATCH] Do RCU synchronization at fixed interval in PMD main loop.
Each PMD updates the global sequence number used for RCU synchronization
with the other OVS threads. This is done at every 1025th iteration of
the PMD main loop. If a PMD thread is responsible for polling a large
number of queues that are carrying traffic, it spends so much time
processing packets that these housekeeping activities are significantly
delayed.

If the OVS main thread is waiting to synchronize with the PMD threads,
and those threads delay the housekeeping for more than 3 seconds, LACP
processing is impacted and LACP flaps result. Other control protocols
run by the OVS main thread are affected in the same way.

For example, a PMD thread polling 200 ports/queues, with an average of
1600 processing cycles per packet and a batch size of 32, may take
10240000 (200 * 1600 * 32) cycles per iteration. On a system with a
2.0 GHz CPU that is more than 5 ms per iteration, so completing 1024
iterations takes more than 5 seconds.

This gets worse when some PMD threads are less loaded: they win the
mutex in ovsrcu_try_quiesce() more easily, reducing the chance of a
heavily loaded PMD getting the lock, and that PMD's next attempt to
quiesce is then another 1024 iterations away.

With this patch, PMD RCU synchronization is performed at a fixed time
interval instead of after a fixed number of iterations. This ensures
that the synchronization is not delayed for long even when the packet
processing load is high.

Signed-off-by: Anju Thomas
Signed-off-by: Nitin Katiyar <nitin.katiyar@ericsson.com>
---
 lib/dpif-netdev-perf.c | 16 ----------------
 lib/dpif-netdev-perf.h | 17 +++++++++++++++++
 lib/dpif-netdev.c      | 27 +++++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 16 deletions(-)
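
A quick sanity check on the numbers above (not part of the patch): the
standalone sketch below reproduces the commit message's arithmetic. All
constants are the example figures from the text (200 queues, 1600
cycles/packet, batch of 32, an assumed 2.0 GHz TSC); nothing here comes
from OVS code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t tsc_hz = 2000000000ULL;  /* assumed 2.0 GHz CPU */
    const uint64_t queues = 200;            /* polled ports/queues */
    const uint64_t cycles_per_pkt = 1600;   /* avg cycles per packet */
    const uint64_t batch = 32;              /* rx batch size */

    /* Cycles consumed by one heavily loaded PMD main-loop iteration. */
    uint64_t per_iter = queues * cycles_per_pkt * batch;
    /* Milliseconds per iteration, seconds per 1024 iterations. */
    double ms_per_iter = 1000.0 * per_iter / tsc_hz;
    double sec_1024 = 1024.0 * ms_per_iter / 1000.0;
    /* The patch's ~10 ms quiesce interval, expressed in TSC cycles. */
    uint64_t interval = tsc_hz / 100;

    printf("%" PRIu64 " cycles/iter, %.2f ms/iter, %.2f s per 1024 iters\n",
           per_iter, ms_per_iter, sec_1024);
    printf("quiesce interval: %" PRIu64 " cycles\n", interval);
    return 0;
}

This prints 10240000 cycles/iter, 5.12 ms/iter and 5.24 s per 1024
iterations, matching the figures above, with a 20000000-cycle (~10 ms)
quiesce interval.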
diff --git a/lib/dpif-netdev-perf.c b/lib/dpif-netdev-perf.c
index e7ed49e..c888e5d 100644
--- a/lib/dpif-netdev-perf.c
+++ b/lib/dpif-netdev-perf.c
@@ -43,22 +43,6 @@ uint64_t iter_cycle_threshold;
 
 static struct vlog_rate_limit latency_rl = VLOG_RATE_LIMIT_INIT(600, 600);
 
-#ifdef DPDK_NETDEV
-static uint64_t
-get_tsc_hz(void)
-{
-    return rte_get_tsc_hz();
-}
-#else
-/* This function is only invoked from PMD threads which depend on DPDK.
- * A dummy function is sufficient when building without DPDK_NETDEV. */
-static uint64_t
-get_tsc_hz(void)
-{
-    return 1;
-}
-#endif
-
 /* Histogram functions. */
 
 static void
diff --git a/lib/dpif-netdev-perf.h b/lib/dpif-netdev-perf.h
index 244813f..3f2ee1c 100644
--- a/lib/dpif-netdev-perf.h
+++ b/lib/dpif-netdev-perf.h
@@ -187,6 +187,23 @@ struct pmd_perf_stats {
     char *log_reason;
 };
 
+#ifdef DPDK_NETDEV
+static inline uint64_t
+get_tsc_hz(void)
+{
+    return rte_get_tsc_hz();
+}
+#else
+/* This function is only invoked from PMD threads which depend on DPDK.
+ * A dummy function is sufficient when building without DPDK_NETDEV. */
+static inline uint64_t
+get_tsc_hz(void)
+{
+    return 1;
+}
+#endif
+
+
 #ifdef __linux__
 static inline uint64_t
 rdtsc_syscall(struct pmd_perf_stats *s)
diff --git a/lib/dpif-netdev.c b/lib/dpif-netdev.c
index d0a1c58..c3d6835 100644
--- a/lib/dpif-netdev.c
+++ b/lib/dpif-netdev.c
@@ -751,6 +751,9 @@ struct dp_netdev_pmd_thread {
 
     /* Set to true if the pmd thread needs to be reloaded. */
     bool need_reload;
+
+    /* Time (in TSC cycles) when the PMD last quiesced. */
+    uint64_t last_rcu_quiesced;
 };
 
 /* Interface to netdev-based datapath. */
@@ -5445,6 +5448,7 @@ pmd_thread_main(void *f_)
     int poll_cnt;
     int i;
     int process_packets = 0;
+    uint64_t rcu_quiesce_interval = 0;
 
     poll_list = NULL;
 
@@ -5486,6 +5490,13 @@ reload:
     pmd->intrvl_tsc_prev = 0;
     atomic_store_relaxed(&pmd->intrvl_cycles, 0);
     cycles_counter_update(s);
+
+    if (get_tsc_hz() > 1) {
+        /* Calculate ~10 ms interval. */
+        rcu_quiesce_interval = get_tsc_hz() / 100;
+        pmd->last_rcu_quiesced = cycles_counter_get(s);
+    }
+
     /* Protect pmd stats from external clearing while polling. */
     ovs_mutex_lock(&pmd->perf_stats.stats_mutex);
     for (;;) {
@@ -5493,6 +5504,19 @@ reload:
 
         pmd_perf_start_iteration(s);
 
+        /* Do RCU synchronization at a fixed time interval instead of
+         * after a fixed number of iterations. This ensures that the
+         * synchronization is not delayed for long even under a high
+         * packet processing load. */
+
+        if (rcu_quiesce_interval &&
+            ((cycles_counter_get(s) - pmd->last_rcu_quiesced) >
+             rcu_quiesce_interval)) {
+            if (!ovsrcu_try_quiesce()) {
+                pmd->last_rcu_quiesced = cycles_counter_get(s);
+            }
+        }
+
         for (i = 0; i < poll_cnt; i++) {
 
             if (!poll_list[i].rxq_enabled) {
@@ -5527,6 +5551,9 @@ reload:
         dp_netdev_pmd_try_optimize(pmd, poll_list, poll_cnt);
         if (!ovsrcu_try_quiesce()) {
             emc_cache_slow_sweep(&((pmd->flow_cache).emc_cache));
+            if (rcu_quiesce_interval) {
+                pmd->last_rcu_quiesced = cycles_counter_get(s);
+            }
         }
 
         for (i = 0; i < poll_cnt; i++) {
-- 
1.9.1
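
For readers skimming the diff, the added control flow reduces to the
self-contained sketch below. read_tsc(), quiesce_succeeded() and the
simulated cycle counts are hypothetical stand-ins for
cycles_counter_get() and !ovsrcu_try_quiesce(); this illustrates the
technique and is not OVS code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Fake TSC for illustration; the patch reads cycles_counter_get(s). */
static uint64_t fake_tsc;
static uint64_t read_tsc(void) { return fake_tsc; }

/* Stand-in for !ovsrcu_try_quiesce(); pretend every other attempt
 * loses the trylock race against other threads. */
static bool quiesce_succeeded(void)
{
    static bool ok;
    ok = !ok;
    return ok;
}

int main(void)
{
    const uint64_t tsc_hz = 2000000000ULL;   /* assumed 2.0 GHz TSC */
    /* ~10 ms in TSC cycles; 0 would disable the check, as in a
     * non-DPDK build where get_tsc_hz() returns 1. */
    uint64_t interval = tsc_hz / 100;
    uint64_t last = read_tsc();
    int quiesces = 0;

    /* 1000 iterations of the commit message's heavily loaded PMD,
     * each burning ~5.12 ms worth of fake cycles. */
    for (int i = 0; i < 1000; i++) {
        fake_tsc += 10240000;
        /* Unsigned subtraction keeps the comparison wraparound-safe. */
        if (interval && read_tsc() - last > interval) {
            if (quiesce_succeeded()) {
                /* The interval restarts only on a successful quiesce,
                 * so a lost trylock is retried on the next iteration. */
                last = read_tsc();
                quiesces++;
            }
        }
    }
    printf("quiesced %d times in ~%.2f simulated seconds\n",
           quiesces, 1000 * 10240000.0 / tsc_hz);
    return 0;
}

With the old iteration-count scheme this loop would have quiesced at
most once; checking elapsed cycles instead keeps the gap between
quiesce attempts bounded by time rather than by load.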