From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matan Azrad <matan@nvidia.com>
To: dev@dpdk.org
Cc: Maxime Coquelin
Date: Mon, 8 Feb 2021 09:28:01 +0000
Message-Id: <1612776481-151396-1-git-send-email-matan@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [dpdk-dev] [PATCH] vdpa/mlx5: fix polling threads scheduling

When the event mode uses a fixed delay of 0, the polling thread never
gives up the CPU. So, when multiple polling threads are active, the
context switches between them are left to the system scheduler, which
may hurt latency depending on the time-out the system chooses.

In order to fix the scheduling of multi-device polling threads, this
patch forces a reschedule on each CQ poll iteration. The polling
threads are also moved to SCHED_RR mode with maximum priority to
complete the fairness.
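As a side note for reviewers, the yield-on-zero-delay idea can be sketched in isolation. This is a minimal, hypothetical stand-in for the driver's poll loop (the real CQ polling work is elided); it uses the standard `sched_yield()` rather than the non-portable `pthread_yield()` the patch calls:

```c
/* Sketch only: poll_iteration() is an illustrative name, not a driver
 * function.  With a non-zero delay the thread sleeps; with a zero delay
 * it yields so sibling polling threads get scheduled promptly. */
#include <sched.h>
#include <unistd.h>

static int
poll_iteration(unsigned int delay_us)
{
	/* ... device CQ polling would happen here ... */
	if (delay_us) {
		usleep(delay_us);
		return 0;
	}
	/* Give up the CPU to improve polling-thread scheduling. */
	return sched_yield();	/* 0 on success */
}
```

On Linux, `sched_yield()` moves the caller to the end of its priority's run queue, which under SCHED_RR gives the round-robin fairness the commit message describes.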
Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to zero")

Signed-off-by: Matan Azrad <matan@nvidia.com>
---
 drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 0f635ff..86adc86 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -232,6 +232,9 @@
 	}
 	if (priv->timer_delay_us)
 		usleep(priv->timer_delay_us);
+	else
+		/* Give-up CPU to improve polling threads scheduling. */
+		pthread_yield();
 }
 
 static void *
@@ -500,6 +503,9 @@
 	rte_cpuset_t cpuset;
 	pthread_attr_t attr;
 	char name[16];
+	const struct sched_param sp = {
+		.sched_priority = sched_get_priority_max(SCHED_RR),
+	};
 
 	if (!priv->eventc)
 		/* All virtqs are in poll mode. */
@@ -520,6 +526,16 @@
 		DRV_LOG(ERR, "Failed to set thread affinity.");
 		return -1;
 	}
+	ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
+		return -1;
+	}
+	ret = pthread_attr_setschedparam(&attr, &sp);
+	if (ret) {
+		DRV_LOG(ERR, "Failed to set thread priority.");
+		return -1;
+	}
 	ret = pthread_create(&priv->timer_tid, &attr, mlx5_vdpa_poll_handle,
 			     (void *)priv);
 	if (ret) {
-- 
1.8.3.1
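For context, the thread-attribute setup in the second and third hunks can be sketched standalone. This is a hedged sketch, not driver code: `worker()` and `create_rr_thread()` are illustrative names. Two practical points worth noting: POSIX says attribute scheduling settings only take effect if `PTHREAD_EXPLICIT_SCHED` is set via `pthread_attr_setinheritsched()`, and requesting SCHED_RR usually needs CAP_SYS_NICE, so the sketch falls back to default attributes on failure:

```c
/* Sketch of spawning a thread with an explicit SCHED_RR policy at
 * maximum priority, in the style of the patch.  All names here are
 * illustrative, not part of the mlx5 driver. */
#include <pthread.h>
#include <sched.h>
#include <stddef.h>

static void *
worker(void *arg)
{
	(void)arg;	/* real code would poll the CQ here */
	return NULL;
}

int
create_rr_thread(pthread_t *tid)
{
	pthread_attr_t attr;
	const struct sched_param sp = {
		.sched_priority = sched_get_priority_max(SCHED_RR),
	};

	if (pthread_attr_init(&attr) != 0)
		return -1;
	/* Without EXPLICIT_SCHED, pthread_create() ignores the policy
	 * and priority set on the attribute object. */
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_RR);
	pthread_attr_setschedparam(&attr, &sp);
	if (pthread_create(tid, &attr, worker, NULL) != 0) {
		/* Likely EPERM without CAP_SYS_NICE: fall back to the
		 * default scheduling attributes. */
		if (pthread_create(tid, NULL, worker, NULL) != 0) {
			pthread_attr_destroy(&attr);
			return -1;
		}
	}
	pthread_attr_destroy(&attr);
	return 0;
}
```

The patch itself relies on the attribute object already prepared earlier in `mlx5_vdpa_cqe_event_setup()`, so only the policy and param calls appear in the diff.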