From: Maxime Coquelin
To: Matan Azrad, dev@dpdk.org
Date: Mon, 8 Feb 2021 12:17:09 +0100
Subject: Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix polling threads scheduling
In-Reply-To: <1612776481-151396-1-git-send-email-matan@nvidia.com>
References: <1612776481-151396-1-git-send-email-matan@nvidia.com>
List-Id: DPDK patches and discussions

On 2/8/21 10:28 AM, Matan Azrad wrote:
> When the event mode is with 0 fixed delay, the polling-thread will never
> give-up CPU.
> 
> So, when multi-polling-threads are active, the context-switch between
> them will be managed by the system which may affect latency according to
> the time-out decided by the system.
> 
> In order to fix multi-devices polling thread scheduling, this patch
> forces rescheduling for each CQ poll iteration.
> 
> Move the polling thread to SCHED_RR mode with maximum priority to
> complete the fairness.
> 
> Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to zero")
> 
> Signed-off-by: Matan Azrad
> ---
>  drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> index 0f635ff..86adc86 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
> @@ -232,6 +232,9 @@
>  	}
>  	if (priv->timer_delay_us)
>  		usleep(priv->timer_delay_us);
> +	else
> +		/* Give-up CPU to improve polling threads scheduling. */
> +		pthread_yield();
>  }
>  
>  static void *
> @@ -500,6 +503,9 @@
>  	rte_cpuset_t cpuset;
>  	pthread_attr_t attr;
>  	char name[16];
> +	const struct sched_param sp = {
> +		.sched_priority = sched_get_priority_max(SCHED_RR),
> +	};
>  
>  	if (!priv->eventc)
>  		/* All virtqs are in poll mode. */
> @@ -520,6 +526,16 @@
>  		DRV_LOG(ERR, "Failed to set thread affinity.");
>  		return -1;
>  	}
> +	ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
> +	if (ret) {
> +		DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
> +		return -1;
> +	}
> +	ret = pthread_attr_setschedparam(&attr, &sp);
> +	if (ret) {
> +		DRV_LOG(ERR, "Failed to set thread priority.");
> +		return -1;
> +	}
>  	ret = pthread_create(&priv->timer_tid, &attr,
>  			     mlx5_vdpa_poll_handle, (void *)priv);
>  	if (ret) {
> 

Reviewed-by: Maxime Coquelin