From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>,
Matan Azrad <matan@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix polling threads scheduling
Date: Tue, 9 Feb 2021 03:15:46 +0000 [thread overview]
Message-ID: <BY5PR12MB4324D427ABA75D24CCB66631A18E9@BY5PR12MB4324.namprd12.prod.outlook.com> (raw)
In-Reply-To: <e673fa7c-639c-3729-5273-2bc483135cd6@redhat.com>
>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Maxime Coquelin
>Sent: Monday, February 8, 2021 7:17 PM
>To: Matan Azrad <matan@nvidia.com>; dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH] vdpa/mlx5: fix polling threads scheduling
>
>
>
>On 2/8/21 10:28 AM, Matan Azrad wrote:
>> When the event mode uses a fixed delay of 0, the polling thread never
>> gives up the CPU.
>>
>> So, when multiple polling threads are active, the context switches
>> between them are managed by the system scheduler, which may affect
>> latency depending on the time slice chosen by the system.
>>
>> In order to fix the scheduling of multiple devices' polling threads,
>> this patch forces rescheduling on each CQ poll iteration.
>>
>> Move the polling threads to the SCHED_RR scheduling policy with
>> maximum priority to complete the fairness.
>>
>> Fixes: 6956a48cabbb ("vdpa/mlx5: set polling mode default delay to zero")
>>
>> Signed-off-by: Matan Azrad <matan@nvidia.com>
>> ---
>> drivers/vdpa/mlx5/mlx5_vdpa_event.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> index 0f635ff..86adc86 100644
>> --- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
>> @@ -232,6 +232,9 @@
>> }
>> if (priv->timer_delay_us)
>> usleep(priv->timer_delay_us);
>> + else
>> + /* Give-up CPU to improve polling threads scheduling. */
>> + pthread_yield();
>> }
>>
>> static void *
>> @@ -500,6 +503,9 @@
>> rte_cpuset_t cpuset;
>> pthread_attr_t attr;
>> char name[16];
>> + const struct sched_param sp = {
>> + .sched_priority = sched_get_priority_max(SCHED_RR),
>> + };
>>
>> if (!priv->eventc)
>> /* All virtqs are in poll mode. */
>> @@ -520,6 +526,16 @@
>> DRV_LOG(ERR, "Failed to set thread affinity.");
>> return -1;
>> }
>> + ret = pthread_attr_setschedpolicy(&attr, SCHED_RR);
>> + if (ret) {
>> + DRV_LOG(ERR, "Failed to set thread sched policy = RR.");
>> + return -1;
>> + }
>> + ret = pthread_attr_setschedparam(&attr, &sp);
>> + if (ret) {
>> + DRV_LOG(ERR, "Failed to set thread priority.");
>> + return -1;
>> + }
>> ret = pthread_create(&priv->timer_tid, &attr,
>> mlx5_vdpa_poll_handle, (void *)priv);
>> if (ret) {
>>
>
>Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Xueming Li <xuemingl@mellanox.com>
Thread overview: 4+ messages
2021-02-08 9:28 Matan Azrad
2021-02-08 11:17 ` Maxime Coquelin
2021-02-09 3:15 ` Xueming(Steven) Li [this message]
2021-02-10 21:17 ` Thomas Monjalon