From: "Hunt, David" <david.hunt@intel.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>, dev@dpdk.org
Cc: liang.j.ma@intel.com
Subject: Re: [dpdk-dev] [PATCH v2] examples/distributor: detect high frequency cores
Date: Thu, 28 Mar 2019 14:42:00 +0000	[thread overview]
Message-ID: <2a10fb6d-348c-18ab-d727-c4179243d72e@intel.com> (raw)
In-Reply-To: <6d478ef6-cc2a-71c0-d7b0-de86c04eb41c@intel.com>


On 28/3/2019 1:58 PM, Burakov, Anatoly wrote:
> On 28-Mar-19 1:13 PM, David Hunt wrote:
>> The distributor application is bottlenecked by the distributor core,
>> so if we can give more frequency to this core, then the overall
>> performance of the application may increase.
>>
>> This patch uses the rte_power_get_capabilities() API to query the
>> cores provided in the core mask, and if any high frequency cores are
>> found (e.g. Turbo Boost is enabled), we will pin the distributor
>> workload to that core.
>>
>> Signed-off-by: Liang Ma <liang.j.ma@intel.com>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>
> <...>
>
>> +    if (power_lib_initialised)
>> +        rte_power_exit(rte_lcore_id());
>>       printf("\nCore %u exiting tx task.\n", rte_lcore_id());
>>       return 0;
>>   }
>> @@ -575,9 +582,35 @@ lcore_worker(struct lcore_params *p)
>>           if (num > 0)
>>               app_stats.worker_bursts[p->worker_id][num-1]++;
>>       }
>> +    if (power_lib_initialised)
>> +        rte_power_exit(rte_lcore_id());
>> +    rte_free(p);
>>       return 0;
>>   }
>>   +static int
>> +init_power_library(void)
>> +{
>> +    int ret = 0, lcore_id;
>> +    RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>> +        if (rte_lcore_is_enabled(lcore_id)) {
>
> Please correct me if i'm wrong, but RTE_LCORE_FOREACH_SLAVE already 
> checks if the lcore is enabled.


You're correct, I'll fix it in the next version.
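
For reference, a minimal sketch of how the simplified loop might look
(not final code), with the redundant check dropped since
RTE_LCORE_FOREACH_SLAVE() only walks the enabled slave lcores:

static int
init_power_library(void)
{
	int ret = 0, lcore_id;

	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
		/*
		 * Initialise the power library for this lcore; bail out
		 * on the first failure so we don't run half-initialised.
		 */
		ret = rte_power_init(lcore_id);
		if (ret) {
			printf("Power library init failed on core %d\n",
					lcore_id);
			return ret;
		}
	}
	return ret;
}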


>
> <...>
>
>>   +    if (power_lib_initialised) {
>> +        /*
>> +         * Here we'll pre-assign lcore ids to the rx, tx and
>> +         * distributor workloads if there's higher frequency
>> +         * on those cores e.g. if Turbo Boost is enabled.
>> +         * It's also worth mentioning that it will assign cores in a
>> +         * specific order, so that if there's less than three
>> +         * available, the higher frequency cores will go to the
>> +         * distributor first, then rx, then tx.
>> +         */
>> +        RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>> +
>> +            rte_power_get_capabilities(lcore_id, &lcore_cap);
>> +
>> +            if (lcore_cap.turbo == 1) {
>> +                priority_num++;
>> +                switch (priority_num) {
>> +                case 1:
>> +                    distr_core_id = lcore_id;
>> +                    printf("Distributor on priority core %d\n",
>> +                            lcore_id);
>> +                    break;
>> +                case 2:
>> +                    rx_core_id = lcore_id;
>> +                    printf("Rx on priority core %d\n",
>> +                            lcore_id);
>> +                    break;
>> +                case 3:
>> +                    tx_core_id = lcore_id;
>> +                    printf("Tx on priority core %d\n",
>> +                            lcore_id);
>> +                    break;
>> +                default:
>> +                    break;
>> +                }
>
> This seems to be doing the same thing as right below (assigning lcore 
> id's in order), yet in one case you use a switch, and in the other you 
> use a simple loop. I don't see priority_num used anywhere else, so you 
> might as well simplify this loop to be similar to what you have below, 
> with "skip-if-not-turbo, if not assigned, assign-and-continue" type flow.


They're doing different things. The loop with the switch is looking for
up to three priority cores and storing those choices in distr_core_id,
rx_core_id and tx_core_id, because we don't know ahead of time which
cores are the priority ones. priority_num is used in the switch
statement: each time a priority core is found it increments, so we know
which variable the next available priority core should be assigned to.
Imagine we have turbo enabled on cores 2, 4 and 6: the first loop then
gives the distributor core 2, rx core 4 and tx core 6. That's the case
I'm trying to solve here.

Then, in the next loop, we just hand the remaining (non-priority) cores
to whichever of the three key workloads have not already been assigned
one, hence the simple loop, something like the sketch below.
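
(Just an illustrative sketch, not the exact code from the patch,
assuming distr_core_id, rx_core_id and tx_core_id are ints initialised
to -1:)

	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
		/* Skip lcores already taken by a priority workload. */
		if ((int)lcore_id == distr_core_id ||
				(int)lcore_id == rx_core_id ||
				(int)lcore_id == tx_core_id)
			continue;

		/*
		 * Hand out the remaining cores in order: distributor
		 * first, then rx, then tx.
		 */
		if (distr_core_id < 0) {
			distr_core_id = lcore_id;
			continue;
		}
		if (rx_core_id < 0) {
			rx_core_id = lcore_id;
			continue;
		}
		if (tx_core_id < 0) {
			tx_core_id = lcore_id;
			continue;
		}
	}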

I looked at simplifying the flow, but as far as I can see, I need two 
stages, a 'discovery' for the priority cores first, then whatever is 
left can be done in a normal loop.

Does that make sense, or am I missing an obvious refactoring opportunity?


>
> Once that is fixed,
>
> Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
>

Thread overview: 35+ messages
2019-02-22 11:45 [dpdk-dev] [PATCH v1] " David Hunt
2019-03-27 13:58 ` Burakov, Anatoly
2019-03-28 10:20   ` Hunt, David
2019-03-28 13:13 ` [dpdk-dev] [PATCH v2] " David Hunt
2019-03-28 13:58   ` Burakov, Anatoly
2019-03-28 14:42     ` Hunt, David [this message]
2019-03-28 15:10       ` Burakov, Anatoly
2019-03-28 15:20         ` Hunt, David
2019-03-29 13:15   ` [dpdk-dev] [PATCH v3] " David Hunt
2019-04-01  9:07     ` Hunt, David
2019-04-01 15:30     ` [dpdk-dev] [PATCH v4 1/2] lib/power: add bit for " David Hunt
2019-04-01 15:30       ` [dpdk-dev] [PATCH v4 2/2] examples/distributor: detect " David Hunt
2019-04-01 15:43       ` [dpdk-dev] [PATCH v4 1/2] lib/power: add bit for " Burakov, Anatoly
2019-04-01 15:49         ` Hunt, David
2019-04-01 16:14       ` [dpdk-dev] [PATCH v5 " David Hunt
2019-04-01 16:14         ` [dpdk-dev] [PATCH v5 2/2] examples/distributor: detect " David Hunt
2019-04-02  0:06           ` Thomas Monjalon
2019-04-02  0:20             ` Thomas Monjalon
