From: "Hunt, David" <david.hunt@intel.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>, dev@dpdk.org
Cc: liang.j.ma@intel.com
Subject: Re: [dpdk-dev] [PATCH v2] examples/distributor: detect high frequency cores
Date: Thu, 28 Mar 2019 15:20:38 +0000
Message-ID: <7f42f3c3-dcb3-4b1c-6d4e-01cc015316cd@intel.com>
In-Reply-To: <b27d9d86-5613-2a71-2582-da6de9bb7e6e@intel.com>
On 28/3/2019 3:10 PM, Burakov, Anatoly wrote:
> On 28-Mar-19 2:42 PM, Hunt, David wrote:
>>
>> On 28/3/2019 1:58 PM, Burakov, Anatoly wrote:
>>> On 28-Mar-19 1:13 PM, David Hunt wrote:
>>>> The distributor application is bottlenecked by the distributor core,
>>>> so if we can give more frequency to this core, then the overall
>>>> performance of the application may increase.
>>>>
>>>> This patch uses the rte_power_get_capabilities() API to query the
>>>> cores provided in the core mask, and if any high frequency cores are
>>>> found (e.g. Turbo Boost is enabled), we will pin the distributor
>>>> workload to that core.
>>>>
>>>> Signed-off-by: Liang Ma <liang.j.ma@intel.com>
>>>> Signed-off-by: David Hunt <david.hunt@intel.com>
>>>> ---
>>>
>>> <...>
>>>
>>>> +	if (power_lib_initialised)
>>>> +		rte_power_exit(rte_lcore_id());
>>>>  	printf("\nCore %u exiting tx task.\n", rte_lcore_id());
>>>>  	return 0;
>>>>  }
>>>> @@ -575,9 +582,35 @@ lcore_worker(struct lcore_params *p)
>>>>  		if (num > 0)
>>>>  			app_stats.worker_bursts[p->worker_id][num-1]++;
>>>>  	}
>>>> +	if (power_lib_initialised)
>>>> +		rte_power_exit(rte_lcore_id());
>>>> +	rte_free(p);
>>>>  	return 0;
>>>>  }
>>>> +static int
>>>> +init_power_library(void)
>>>> +{
>>>> +	int ret = 0, lcore_id;
>>>> +	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>>>> +		if (rte_lcore_is_enabled(lcore_id)) {
>>>
>>> Please correct me if I'm wrong, but RTE_LCORE_FOREACH_SLAVE already
>>> checks if the lcore is enabled.
>>
>>
>> You're correct, I'll fix in next version.
>>
>>
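Just for reference, here's roughly what I'm planning for the simplified
init_power_library() in v3 (an untested sketch; the final error handling
may differ slightly):

static int
init_power_library(void)
{
	int ret = 0, lcore_id;

	/* RTE_LCORE_FOREACH_SLAVE() only visits enabled slave lcores,
	 * so the separate rte_lcore_is_enabled() check goes away. */
	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
		/* init the power management library on this lcore */
		ret = rte_power_init(lcore_id);
		if (ret) {
			RTE_LOG(ERR, POWER,
				"Library initialization failed on core %d\n",
				lcore_id);
			return ret;
		}
	}
	return ret;
}
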
>>>
>>> <...>
>>>
>>>> +	if (power_lib_initialised) {
>>>> +		/*
>>>> +		 * Here we'll pre-assign lcore ids to the rx, tx and
>>>> +		 * distributor workloads if there's higher frequency
>>>> +		 * on those cores e.g. if Turbo Boost is enabled.
>>>> +		 * It's also worth mentioning that it will assign cores in a
>>>> +		 * specific order, so that if there's less than three
>>>> +		 * available, the higher frequency cores will go to the
>>>> +		 * distributor first, then rx, then tx.
>>>> +		 */
>>>> +		RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>>>> +
>>>> +			rte_power_get_capabilities(lcore_id, &lcore_cap);
>>>> +
>>>> +			if (lcore_cap.turbo == 1) {
>>>> +				priority_num++;
>>>> +				switch (priority_num) {
>>>> +				case 1:
>>>> +					distr_core_id = lcore_id;
>>>> +					printf("Distributor on priority core %d\n",
>>>> +							lcore_id);
>>>> +					break;
>>>> +				case 2:
>>>> +					rx_core_id = lcore_id;
>>>> +					printf("Rx on priority core %d\n",
>>>> +							lcore_id);
>>>> +					break;
>>>> +				case 3:
>>>> +					tx_core_id = lcore_id;
>>>> +					printf("Tx on priority core %d\n",
>>>> +							lcore_id);
>>>> +					break;
>>>> +				default:
>>>> +					break;
>>>> +				}
>>>
>>> This seems to be doing the same thing as right below (assigning
>>> lcore id's in order), yet in one case you use a switch, and in the
>>> other you use a simple loop. I don't see priority_num used anywhere
>>> else, so you might as well simplify this loop to be similar to what
>>> you have below, with "skip-if-not-turbo, if not assigned,
>>> assign-and-continue" type flow.
>>
>>
>> They're doing different things. The loop with the switch is looking for
>> up to three priority cores, and storing those choices in
>> distr_core_id, tx_core_id and rx_core_id. This is because we don't
>> know which are the priority cores ahead of time. priority_num is used
>> in the switch statement, and when it finds a priority core, it
>> increments, so we know which variable to assign with the next
>> available priority core. Imagine we have turbo enabled on cores 2, 4
>> and 6. That's what I'm trying to solve here.
>>
>> Then, when we get to the next loop, we're just assigning the
>> non-priority cores if the three key workloads have not already been
>> assigned a core, hence the simple loop, using the remaining cores.
>>
>> I looked at simplifying the flow, but as far as I can see, I need two
>> stages, a 'discovery' for the priority cores first, then whatever is
>> left can be done in a normal loop.
>>
>> Does that make sense, or am I missing an obvious refactor opportunity?
>
> I don't see how this is different from what you're doing below.
>
> You are looping over cores, checking if it's a priority core, and
> assigning any priority cores found to distributor, Rx and Tx cores, in
> that order.
>
> Below, you're looping over cores, checking if the core is already
> assigned, and assigning these cores to distributor, Rx and Tx cores,
> in that order.
>
> So, the only tangible difference between the two is 1) the check for
> whether the cores are already assigned (you don't need that because
> these cores *cannot* already be assigned - you haven't looped over them yet!),
> and 2) check for whether the core is priority.
>
> Just to clarify: I'm not saying merge the two loops, that can't work
> :) I'm saying, drop the switch and rewrite it like this:
>
Ah, gotcha now. Will do in next rev. Thanks!
> for (cores) {
> 	if (core not priority)
> 		continue;
> 	if (dist_core not assigned) {
> 		assign dist core;
> 		continue;
> 	}
> 	if (rx core not assigned) {
> 		assign rx core;
> 		continue;
> 	}
> 	if (tx core not assigned) {
> 		assign tx core;
> 		continue;
> 	}
> }
>
> The functionality would be equivalent to the current switch method,
> but the code would be much clearer :)
>
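Taking the suggestion above, the priority-core discovery in v3 will look
something along these lines (a rough sketch only; it assumes distr_core_id,
rx_core_id and tx_core_id start off as -1, as in the v2 patch):

	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
		rte_power_get_capabilities(lcore_id, &lcore_cap);

		/* skip any core that doesn't report the turbo capability */
		if (lcore_cap.turbo != 1)
			continue;

		if (distr_core_id < 0) {
			distr_core_id = lcore_id;
			printf("Distributor on priority core %d\n", lcore_id);
			continue;
		}
		if (rx_core_id < 0) {
			rx_core_id = lcore_id;
			printf("Rx on priority core %d\n", lcore_id);
			continue;
		}
		if (tx_core_id < 0) {
			tx_core_id = lcore_id;
			printf("Tx on priority core %d\n", lcore_id);
			continue;
		}
	}
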
>>
>>
>>>
>>> Once that is fixed,
>>>
>>> Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>
>>
>
>
Thread overview: 35+ messages
2019-02-22 11:45 [dpdk-dev] [PATCH v1] " David Hunt
2019-03-27 13:58 ` Burakov, Anatoly
2019-03-28 10:20 ` Hunt, David
2019-03-28 13:13 ` [dpdk-dev] [PATCH v2] " David Hunt
2019-03-28 13:58 ` Burakov, Anatoly
2019-03-28 14:42 ` Hunt, David
2019-03-28 15:10 ` Burakov, Anatoly
2019-03-28 15:20 ` Hunt, David [this message]
2019-03-29 13:15 ` [dpdk-dev] [PATCH v3] " David Hunt
2019-04-01 9:07 ` Hunt, David
2019-04-01 15:30 ` [dpdk-dev] [PATCH v4 1/2] lib/power: add bit for " David Hunt
2019-04-01 15:30 ` [dpdk-dev] [PATCH v4 2/2] examples/distributor: detect " David Hunt
2019-04-01 15:43 ` [dpdk-dev] [PATCH v4 1/2] lib/power: add bit for " Burakov, Anatoly
2019-04-01 15:49 ` Hunt, David
2019-04-01 16:14 ` [dpdk-dev] [PATCH v5 " David Hunt
2019-04-01 16:14 ` [dpdk-dev] [PATCH v5 2/2] examples/distributor: detect " David Hunt
2019-04-02 0:06 ` Thomas Monjalon
2019-04-02 0:20 ` Thomas Monjalon