DPDK patches and discussions
* [dpdk-dev] Arm roadmap for 20.11
@ 2020-09-11  4:18 Honnappa Nagarahalli
  2020-11-25  5:39 ` [dpdk-dev] NTT TechnoCross roadmap for 21.02 Hideyuki Yamashita
  0 siblings, 1 reply; 8+ messages in thread
From: Honnappa Nagarahalli @ 2020-09-11  4:18 UTC (permalink / raw)
  To: dev, thomas, david.marchand; +Cc: Ruifeng Wang, Honnappa Nagarahalli, nd, nd

(Bcc: Arm internal stakeholders)

Hello,
	Following are the work items planned for 20.11:

1) Scatter-Gather APIs for rte_ring
2) RCU integration with hash library
3) Performance improvements to rte_stack library
4) Relaxed memory ordering changes to bbdev, eal, ethdev, power libraries
5) rte_cio_*mb deprecation changes
6) Enable runtime config for burst stats and CPU cycle stats in testpmd

Thank you,
Honnappa


* [dpdk-dev] NTT TechnoCross roadmap for 21.02
  2020-09-11  4:18 [dpdk-dev] Arm roadmap for 20.11 Honnappa Nagarahalli
@ 2020-11-25  5:39 ` Hideyuki Yamashita
  2020-11-25 11:01   ` Morten Brørup
  0 siblings, 1 reply; 8+ messages in thread
From: Hideyuki Yamashita @ 2020-11-25  5:39 UTC (permalink / raw)
  To: dev

Hello,

Following are the work items planned for 21.02 from NTT TechnoCross:
I will try to post the patch set after 20.11 is released.

---
1) Introduce API stats function
In general, a DPDK application consumes CPU cycles constantly because
it polls for incoming packets with the rx_burst API in an infinite loop.
This makes it difficult to estimate how much CPU time is actually
spent sending and receiving packets by the DPDK application.

For example, even if no incoming packets are arriving, CPU usage
looks nearly 100% when observed with the top command.

It is beneficial if developers can observe the real CPU usage of the
DPDK application.
Such information can be exported to monitoring applications like
Prometheus/Grafana to show CPU usage graphically.

To achieve the above, this patch set provides apistats functionality.
apistats provides the following two counters for each lcore:
- rx_burst_counts[RTE_MAX_LCORE]
- tx_burst_counts[RTE_MAX_LCORE]
These accumulate the number of rx_burst/tx_burst calls since the
application starts.

By using those values, developers can roughly estimate CPU usage.
Let us assume a DPDK application that simply forwards packets.
It calls tx_burst only if it receives packets.
If rx_burst_counts=1000 and tx_burst_count=1000 during a certain
period of time, one can assume CPU usage is 100%.
If rx_burst_counts=1000 and tx_burst_count=100 during a certain
period of time, one can assume CPU usage is 10%.
Here we assume that tx_burst_count equals the number of rx_burst calls
that actually received incoming packets.
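
To make the estimate concrete, a minimal sketch of such per-lcore call
counters could look like the following (the wrapper function is
illustrative only, not the actual patch):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Hypothetical per-lcore counters, as proposed above. */
static uint64_t rx_burst_counts[RTE_MAX_LCORE];
static uint64_t tx_burst_counts[RTE_MAX_LCORE];

static inline uint16_t
counted_rx_burst(uint16_t port_id, uint16_t queue_id,
                 struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        /* Count the invocation itself, not the packets it returns. */
        rx_burst_counts[rte_lcore_id()]++;
        return rte_eth_rx_burst(port_id, queue_id, pkts, nb_pkts);
}

With an equivalent wrapper around rte_eth_tx_burst incrementing
tx_burst_counts, the rough estimate above is simply
tx_burst_count / rx_burst_count over the sampling period.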
---

Thank you,
Hideyuki Yamashita
NTT TechnoCross




* Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
  2020-11-25  5:39 ` [dpdk-dev] NTT TechnoCross roadmap for 21.02 Hideyuki Yamashita
@ 2020-11-25 11:01   ` Morten Brørup
  2020-11-26  1:20     ` Hideyuki Yamashita
  0 siblings, 1 reply; 8+ messages in thread
From: Morten Brørup @ 2020-11-25 11:01 UTC (permalink / raw)
  To: Hideyuki Yamashita; +Cc: dev

> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hideyuki Yamashita
> Sent: Wednesday, November 25, 2020 6:40 AM
> 
> Hello,
> 
> Following are the work items planned for 21.02 from NTT TechnoCross:
> I will try to post the patch set after 20.11 is released.
> 
> ---
> 1) Introduce API stats function
> In general, a DPDK application consumes CPU cycles constantly because
> it polls for incoming packets with the rx_burst API in an infinite loop.
> This makes it difficult to estimate how much CPU time is actually
> spent sending and receiving packets by the DPDK application.
> 
> For example, even if no incoming packets are arriving, CPU usage
> looks nearly 100% when observed with the top command.
> 
> It is beneficial if developers can observe the real CPU usage of the
> DPDK application.
> Such information can be exported to monitoring applications like
> Prometheus/Grafana to show CPU usage graphically.

This would be very beneficial.

Unfortunately, this is not so simple for applications like the SmartShare StraightShaper, which is not a simple packet-forwarding application but has multiple pipeline stages. Our application also keeps some packets in queues for shaping purposes, so the number of packets transmitted does not match the number of packets received within a given time interval.

> 
> To achieve the above, this patch set provides apistats functionality.
> apistats provides the following two counters for each lcore:
> - rx_burst_counts[RTE_MAX_LCORE]
> - tx_burst_counts[RTE_MAX_LCORE]
> These accumulate the number of rx_burst/tx_burst calls since the
> application starts.
> 
> By using those values, developers can roughly estimate CPU usage.
> Let us assume a DPDK application that simply forwards packets.
> It calls tx_burst only if it receives packets.
> If rx_burst_counts=1000 and tx_burst_count=1000 during a certain
> period of time, one can assume CPU usage is 100%.
> If rx_burst_counts=1000 and tx_burst_count=100 during a certain
> period of time, one can assume CPU usage is 10%.
> Here we assume that tx_burst_count equals the number of rx_burst calls
> that actually received incoming packets.

I am not sure I understand what is being counted in these counters: the number of packets in the bursts, or the number of invocations of the rx_burst/tx_burst functions?


Here are some data from our purpose-built profiler, illustrating how nonlinear this really is. These data are from a SmartShare appliance in live production at an ISP. I hope you find them useful:

Rx_burst uses ca. 40 CPU cycles if there are no packets, ca. 260 cycles if there is one packet, and down to ca. 40 cycles per packet for a burst of many packets.

Tx_burst uses ca. 350 cycles for one packet, and down to ca. 20 cycles per packet for a burst of many packets.

One of our intermediate pipeline stages (which is not receiving or transmitting packets, only processing them) uses ca. 150 cycles for a burst of one packet, and down to ca. 110 cycles per packet for a burst of many packets.
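
For reference, per-burst cycle costs like these can be measured with
DPDK's TSC helper rte_rdtsc(); a minimal sketch (not SmartShare's
actual profiler) might look like:

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static inline uint16_t
timed_rx_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts,
               uint64_t *cycles)
{
        uint64_t start = rte_rdtsc();
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, nb_pkts);

        /* TSC delta for this single call; aggregating the deltas per
         * burst size yields figures like the ones quoted above. */
        *cycles = rte_rdtsc() - start;
        return nb_rx;
}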


Nevertheless, your suggested API might be usable by simple ingress->routing->egress applications. So don’t let me discourage you!




* Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
  2020-11-25 11:01   ` Morten Brørup
@ 2020-11-26  1:20     ` Hideyuki Yamashita
  2020-11-29 14:43       ` Baruch Even
  0 siblings, 1 reply; 8+ messages in thread
From: Hideyuki Yamashita @ 2020-11-26  1:20 UTC (permalink / raw)
  To: Morten Brorup; +Cc: dev

Hello Morten,

Thanks for giving me your valuable feedback.
Please see my replies inline, tagged with [Hideyuki].

> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hideyuki Yamashita
> > Sent: Wednesday, November 25, 2020 6:40 AM
> > 
> > Hello,
> > 
> > Following are the work items planned for 21.02 from NTT TechnoCross:
> > I will try to post the patch set after 20.11 is released.
> > 
> > ---
> > 1) Introduce API stats function
> > In general, a DPDK application consumes CPU cycles constantly because
> > it polls for incoming packets with the rx_burst API in an infinite loop.
> > This makes it difficult to estimate how much CPU time is actually
> > spent sending and receiving packets by the DPDK application.
> > 
> > For example, even if no incoming packets are arriving, CPU usage
> > looks nearly 100% when observed with the top command.
> > 
> > It is beneficial if developers can observe the real CPU usage of the
> > DPDK application.
> > Such information can be exported to monitoring applications like
> > Prometheus/Grafana to show CPU usage graphically.
> 
> This would be very beneficial.
> 
> Unfortunately, this is not so simple for applications like the SmartShare StraightShaper, which is not a simple packet-forwarding application but has multiple pipeline stages. Our application also keeps some packets in queues for shaping purposes, so the number of packets transmitted does not match the number of packets received within a given time interval.

[Hideyuki]
Thanks. I share the same view.
DPDK applications vary, and not all of them "simply forward
incoming packets", so the target applications may be limited.
Still, I believe this enhancement is useful for those applications.

> > 
> > To achieve the above, this patch set provides apistats functionality.
> > apistats provides the following two counters for each lcore:
> > - rx_burst_counts[RTE_MAX_LCORE]
> > - tx_burst_counts[RTE_MAX_LCORE]
> > These accumulate the number of rx_burst/tx_burst calls since the
> > application starts.
> > 
> > By using those values, developers can roughly estimate CPU usage.
> > Let us assume a DPDK application that simply forwards packets.
> > It calls tx_burst only if it receives packets.
> > If rx_burst_counts=1000 and tx_burst_count=1000 during a certain
> > period of time, one can assume CPU usage is 100%.
> > If rx_burst_counts=1000 and tx_burst_count=100 during a certain
> > period of time, one can assume CPU usage is 10%.
> > Here we assume that tx_burst_count equals the number of rx_burst calls
> > that actually received incoming packets.
> 
> I am not sure I understand what is being counted in these counters: the number of packets in the bursts, or the number of invocations of the rx_burst/tx_burst functions?
[Hideyuki]
The latter. I think the existing mechanism may already store the
number of packets.
(Maybe I am wrong.)

> 
> Here are some data from our purpose-built profiler, illustrating how nonlinear this really is. These data are from a SmartShare appliance in live production at an ISP. I hope you find them useful:
> 
> Rx_burst uses ca. 40 CPU cycles if there are no packets, ca. 260 cycles if there is one packet, and down to ca. 40 cycles per packet for a burst of many packets.
> 
> Tx_burst uses ca. 350 cycles for one packet, and down to ca. 20 cycles per packet for a burst of many packets.
[Hideyuki]
Thanks for sharing this useful information!
Ah, I see now that CPU cycle consumption is not linear, i.e. it does
not behave like the following:

0 packets received  -> 0 cycles
1 packet received   -> 1 cycle
10 packets received -> 10 cycles

It is very interesting. Thanks for the information;
I will keep it in mind.

> One of our intermediate pipeline stages (which is not receiving or transmitting packets, only processing them) uses ca. 150 cycles for a burst of one packet, and down to ca. 110 cycles per packet for a burst of many packets.

> 
> Nevertheless, your suggested API might be usable by simple ingress->routing->egress applications. So don’t let me discourage you!
[Hideyuki]
Thanks for supporting my idea.
Yes, I agree that for simple forwarding applications
this enhancement might be useful to monitor CPU usage "roughly".

BR,
Hideyuki Yamashita
NTT TechnoCross

> 




* Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
  2020-11-26  1:20     ` Hideyuki Yamashita
@ 2020-11-29 14:43       ` Baruch Even
  2020-12-01  0:40         ` Hideyuki Yamashita
  0 siblings, 1 reply; 8+ messages in thread
From: Baruch Even @ 2020-11-29 14:43 UTC (permalink / raw)
  To: Hideyuki Yamashita; +Cc: Morten Brorup, dpdk-dev

Hi,

The way we do this accounting is completely different: it depends on
having logic that knows you are in the idle state, and it records the
time from entering to exiting the idle function. It then subtracts the
idle time from the stats-period time, which gives you the time (also as
a percentage) spent on real work rather than idle polling. The idle
loop in its most basic form only does the polling, and excludes time
when packets were received (or other things your app does on the same
core that are not connected to DPDK).
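
In its most minimal form, such idle-time accounting could look like the
sketch below (the names are illustrative, not an existing API):

#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint64_t idle_cycles; /* cycles spent in polls that found no packets */

static inline uint16_t
idle_accounted_rx_burst(uint16_t port_id, uint16_t queue_id,
                        struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint64_t start = rte_rdtsc();
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, nb_pkts);

        /* Only empty polls count as idle; time spent handling received
         * packets is excluded. */
        if (nb_rx == 0)
                idle_cycles += rte_rdtsc() - start;
        return nb_rx;
}

Busy time over a stats period is then period_cycles - idle_cycles, and
the busy percentage follows directly.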

Using the counters for the number of polls is going to be harder to use and
far less effective.

Baruch



On Thu, Nov 26, 2020 at 3:21 AM Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp> wrote:

> [snip]

-- 
Baruch Even
Platform Team Manager at WekaIO
+972-54-2577223
 *•*  baruch@weka.io  *•* https://www.weka.io/


* Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
  2020-11-29 14:43       ` Baruch Even
@ 2020-12-01  0:40         ` Hideyuki Yamashita
  2020-12-01  5:01           ` [dpdk-dev] Basic question about where to write config for optional feature Hideyuki Yamashita
  0 siblings, 1 reply; 8+ messages in thread
From: Hideyuki Yamashita @ 2020-12-01  0:40 UTC (permalink / raw)
  To: Baruch Even; +Cc: Morten Brorup, dpdk-dev

Hello Baruch,

Thanks for your feedback on our roadmap,
and for sharing your thoughts.

As you pointed out, I agree that there are different ways
to measure/estimate CPU usage.
I think my proposal is the "rough" way.

I understand that there is some interest in this enhancement
(measuring CPU usage). I think that is good.

Anyway, I will start by preparing the patch itself,
and I want to hear how others think about my idea/proposal.

Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi,
> 
> The way we do this accounting is completely different: it depends on
> having logic that knows you are in the idle state, and it records the
> time from entering to exiting the idle function. It then subtracts the
> idle time from the stats-period time, which gives you the time (also as
> a percentage) spent on real work rather than idle polling. The idle
> loop in its most basic form only does the polling, and excludes time
> when packets were received (or other things your app does on the same
> core that are not connected to DPDK).
> 
> Using the counters for the number of polls is going to be harder to use and
> far less effective.
> 
> Baruch
> 
> [snip]





* [dpdk-dev] Basic question about where to write config for optional feature
  2020-12-01  0:40         ` Hideyuki Yamashita
@ 2020-12-01  5:01           ` Hideyuki Yamashita
  2020-12-01  9:37             ` Bruce Richardson
  0 siblings, 1 reply; 8+ messages in thread
From: Hideyuki Yamashita @ 2020-12-01  5:01 UTC (permalink / raw)
  To: dpdk-dev

Hello,

I am planning to propose a patch for
an optional feature.

Question:
Where should I put the compile switch for
optional features?

I was planning to add a flag to enable
optional features to config/common_base,
like the following:

+#
+# Compile the api statistics library
+#
+CONFIG_RTE_LIBRTE_APISTATS=n

But the make build system is gone.
I know that meson is now used instead of make.

But I am not so familiar with meson, so if someone could teach me
where to put such a flag, it would be highly appreciated.
Pointers to documentation on how to write configuration options
in meson would also be appreciated.

Thanks in advance!

BR,
Hideyuki Yamashita
NTT TechnoCross






* Re: [dpdk-dev] Basic question about where to write config for optional feature
  2020-12-01  5:01           ` [dpdk-dev] Basic question about where to write config for optional feature Hideyuki Yamashita
@ 2020-12-01  9:37             ` Bruce Richardson
  0 siblings, 0 replies; 8+ messages in thread
From: Bruce Richardson @ 2020-12-01  9:37 UTC (permalink / raw)
  To: Hideyuki Yamashita; +Cc: dpdk-dev

On Tue, Dec 01, 2020 at 02:01:26PM +0900, Hideyuki Yamashita wrote:
> Hello,
> 
> I am planning to propose a patch for
> an optional feature.
> 
> Question:
> Where should I put the compile switch for
> optional features?
> 
> I was planning to add a flag to enable
> optional features to config/common_base,
> like the following:
> 
> +#
> +# Compile the api statistics library
> +#
> +CONFIG_RTE_LIBRTE_APISTATS=n
> 
> But the make build system is gone.
> I know that meson is now used instead of make.
> 
We are really trying to move away from build-time config. Instead, please
look to make this a runtime option.
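
For example, a runtime toggle could be a plain application option
instead of a build-time CONFIG_ flag (the option name and variable here
are hypothetical, not an existing DPDK API):

#include <string.h>

static int apistats_enabled; /* off unless requested on the command line */

/* Hypothetical application-level option: ./app <EAL args> -- --enable-apistats */
static void
parse_app_args(int argc, char **argv)
{
        int i;

        for (i = 1; i < argc; i++) {
                if (strcmp(argv[i], "--enable-apistats") == 0)
                        apistats_enabled = 1;
        }
}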

Thanks,
/Bruce



Thread overview: 8+ messages
2020-09-11  4:18 [dpdk-dev] Arm roadmap for 20.11 Honnappa Nagarahalli
2020-11-25  5:39 ` [dpdk-dev] NTT TechnoCross roadmap for 21.02 Hideyuki Yamashita
2020-11-25 11:01   ` Morten Brørup
2020-11-26  1:20     ` Hideyuki Yamashita
2020-11-29 14:43       ` Baruch Even
2020-12-01  0:40         ` Hideyuki Yamashita
2020-12-01  5:01           ` [dpdk-dev] Basic question about where to write config for optional feature Hideyuki Yamashita
2020-12-01  9:37             ` Bruce Richardson
