DPDK patches and discussions
* [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
@ 2014-09-17  4:18 Zhang, Helin
  2014-09-17  8:33 ` David Marchand
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-09-17  4:18 UTC (permalink / raw)
  To: dev

Hi all

Since a lot of special configuration is needed to achieve the best performance with DPDK, and we have been asked about it by many people here, I'd like to share with all of you the steps and required configurations for achieving the best performance with i40e. I have tried to list everything I use and have done to get the best performance on i40e, though something might still be missing. So, supplements are welcome!

Please do not ask me for the real performance numbers, as I am not an official channel for publishing them!

1. Hardware Platform:
-- Intel(R) Haswell server
-- Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
-- Sufficient memory, e.g. 32 GB, spread across all memory channels
-- Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ / Intel Corporation Ethernet Controller X710 for 10GbE SFP+
  -- Make sure it is B0 hardware
  -- Update the firmware to 4.2.4 or newer, as the firmware version can have an impact on performance
-- Make sure the NICs are inserted into PCIe Gen3 slots, as PCIe Gen2 cannot provide enough bandwidth

2. Software Platform:
-- Fedora 20 with the kernel updated to 3.15.10; this is what I am using for testing.
  -- GCC 4.8.2
  -- Kernel 3.15.10
-- DPDK 1.7.0 or later version

3. BIOS Configurations:
-- Enhanced Intel Speedstep: Disabled
-- Processor C3: Disabled
-- Processor C6: Disabled
-- Hyper-Threading: Enabled
-- Intel VT-d: Disabled
-- MLC Streamer: Enabled
-- MLC Spatial Prefetcher: Enabled
-- DCU Data Prefetcher: Enabled
-- DCU Instruction Prefetcher: Enabled
-- Direct Cache Access (DCA): Enabled
-- CPU Power and Performance Policy: Performance
-- Memory Power Optimization: Performance
-- Memory RAS and Performance Configuration -> NUMA Optimized: Enabled
-- *Extended Tag: Enabled
Note that 'Extended Tag' may not appear in some BIOS versions; see 'Compile Settings' below for enabling it at runtime.

4. Grub Configurations:
-- Set huge pages, e.g. 'default_hugepagesz=1G hugepagesz=1G hugepages=8'
-- Isolate the cpu cores to be used for rx/tx from Linux scheduling, e.g. 'isolcpus=2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17' (a sketch of the full setup follows below)
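As a rough sketch (the paths and values are only examples and must match your own system), the kernel command line can be set via /etc/default/grub, and the 1G pages need a hugetlbfs mount after reboot:

  # Append to the kernel command line in /etc/default/grub, e.g.:
  #   GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2-17"
  # then regenerate the grub config and reboot:
  grub2-mkconfig -o /boot/grub2/grub.cfg

  # After reboot, mount hugetlbfs so DPDK can use the reserved 1G pages:
  mkdir -p /mnt/huge_1GB
  mount -t hugetlbfs nodev /mnt/huge_1GB -o pagesize=1GB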

5. Compile Settings:
-- Change the configuration items below in the config files (a sketch of applying them follows the list)
  CONFIG_RTE_PCI_CONFIG=y
  CONFIG_RTE_PCI_EXTENDED_TAG="on"
  CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
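For a Linux build these items live in config/common_linuxapp; a minimal sketch of applying them, assuming the stock DPDK 1.7 defaults (run from the source tree root):

  sed -i 's/CONFIG_RTE_PCI_CONFIG=n/CONFIG_RTE_PCI_CONFIG=y/' config/common_linuxapp
  sed -i 's/CONFIG_RTE_PCI_EXTENDED_TAG=.*/CONFIG_RTE_PCI_EXTENDED_TAG="on"/' config/common_linuxapp
  sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_linuxapp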

6. Application Command Line Parameters:
-- For 40G ports, make sure to use two ports on two different cards, as a single PCIe Gen3 slot cannot provide the full 80 Gbps.
-- Make sure the lcores to be used are on the CPU socket to which the NIC's PCI slot is directly connected (a sysfs check is sketched after the examples below).
  -- Run tools/cpu_layout.py to get the lcore/socket topology.
  -- Use 'lspci -q | grep Eth' to check the PCI addresses of the NIC ports.
  -- e.g. for a PCI address of 8x:00.x, use lcores on socket 1; otherwise, use lcores on socket 0.
-- Make sure to use 2, 4 or more queues for l3fwd or testpmd to get better performance; 4 queues should be enough.
-- e.g. run l3fwd on two ports on two different cards, using two queues and two lcores per port:
  ./l3fwd -c 0x3c0000 -n 4 -w 82:00.0 -w 85:00.0 -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'
-- e.g. run testpmd on two ports on two different cards, using two queues and two lcores per port:
  ./testpmd -c 0x3fc0001 -n 4 -w 82:00.0 -w 85:00.0 -- -i --coremask=0x3c0000 --nb-cores=4 --burst=32 --rxfreet=64 --txfreet=64 --txqflags=0xf03 --mbcache=256 --rxq=2 --txq=2 --rss-ip
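As a quick sketch of the sysfs check mentioned above (the PCI address is only an example), the kernel reports directly which socket a device is attached to:

  # Prints the NUMA node of the device; -1 means the platform did not report it
  cat /sys/bus/pci/devices/0000:82:00.0/numa_node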

7. Stream Configurations on Packet Generator:
-- Create a stream, e.g. on IXIA
  -- Set the Ethernet II type to 0x0800
  -- Set the protocols to IPv4
  -- Do not set any layer 4 protocol, as I use IP RSS
  -- Set the correct destination IP address according to the l3fwd-lpm/l3fwd-exact_match routing table.
  -- Set the source IP to random; this is very important to ensure the packets are spread across multiple queues.
-- Note that whether the destination MAC equals the NIC port MAC may result in different performance numbers.
-- Note that whether promiscuous mode is enabled may also result in different performance numbers.

Regards,
Helin


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-17  4:18 [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance! Zhang, Helin
@ 2014-09-17  8:33 ` David Marchand
  2014-09-17  8:50   ` Zhang, Helin
  0 siblings, 1 reply; 11+ messages in thread
From: David Marchand @ 2014-09-17  8:33 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

Hello,

Some questions/comments:

On Wed, Sep 17, 2014 at 6:18 AM, Zhang, Helin <helin.zhang@intel.com> wrote:

> -- *Extended Tag: Enabled
> Note that 'Extended Tag' may not appear in some BIOS versions; see
> 'Compile Settings' below for enabling it at runtime.
>
>

I am not sure I understand this point.

Either you have a BIOS that configures extended tag and you don't need
anything in DPDK, or your BIOS does not support it and you must set it
at runtime?
Then why not just set it at runtime, so we avoid touching the BIOS config?


5. Compile Settings:
> -- Change below configuration items in config files
>   CONFIG_RTE_PCI_CONFIG=y
>   CONFIG_RTE_PCI_EXTENDED_TAG="on"
>

Why have this build option for what looks to be a runtime decision?
Why don't we have RTE_PCI_CONFIG always set and extended tag set to "on"?
(which means we could get rid of these build options)

Looking at the igb_uio code, I am a bit concerned that this option affects all
"igb_uio" pci devices in dpdk.
Can you assure me that any pci device going through igb_uio (em, igb, ixgbe
etc... devices) will behave well with this option enabled?

It would be better to have a per-device (or per-pmd) option.

Plus, build options should really be avoided for any feature in dpdk (and we
have a lot of cleanup work to do about this ...).



>   CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
>

 Why is it disabled by default?

Thanks.

-- 
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-17  8:33 ` David Marchand
@ 2014-09-17  8:50   ` Zhang, Helin
  2014-09-17 14:03     ` David Marchand
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-09-17  8:50 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

Hi David

For the ‘extended tag’, it is defined in the PCIe spec, but not all BIOSes actually implement it. Enabling it in the BIOS or at runtime are two ways of doing the same thing. I don’t think it can be configured per PCI device in the BIOS, so we don’t need to do that per PCI device in DPDK, right? Actually we don’t want to touch PCIe settings in DPDK code; that’s why we want to leave the BIOS config as it is by default. If there is no better choice, we can do it in DPDK by changing configurations.

For ‘CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n’ as the default, we want to support 32-byte rx descriptors by default, for two reasons:
One is that 32-byte rx descriptors provide more powerful features and more offloads.
The other is that the Linux PF host uses 32-byte rx descriptors by default, which might not be changeable; to support a Linux PF host, it is better for the DPDK VF to use 32-byte rx descriptors by default as well.

Regards,
Helin

From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Wednesday, September 17, 2014 4:34 PM
To: Zhang, Helin
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!

Hello,

Some questions/comments:

On Wed, Sep 17, 2014 at 6:18 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
-- *Extended Tag: Enabled
Note that 'Extended Tag' may not appear in some BIOS versions; see 'Compile Settings' below for enabling it at runtime.

I am not sure I understand this point.

Either you have a BIOS that configures extended tag and you don't need anything in DPDK, or your BIOS does not support it and you must set it at runtime?
Then why not just set it at runtime, so we avoid touching the BIOS config?


5. Compile Settings:
-- Change below configuration items in config files
  CONFIG_RTE_PCI_CONFIG=y
  CONFIG_RTE_PCI_EXTENDED_TAG="on"

Why have this build option for what looks to be a runtime decision?
Why don't we have RTE_PCI_CONFIG always set and extended tag set to "on"? (which means we could get rid of these build options)

Looking at the igb_uio code, I am a bit concerned that this option affects all "igb_uio" pci devices in dpdk.
Can you assure me that any pci device going through igb_uio (em, igb, ixgbe etc... devices) will behave well with this option enabled?

It would be better to have a per-device (or per-pmd) option.

Plus, build options should really be avoided for any feature in dpdk (and we have a lot of cleanup work to do about this ...).


  CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y

 Why is it disabled by default?

Thanks.

--
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-17  8:50   ` Zhang, Helin
@ 2014-09-17 14:03     ` David Marchand
  2014-09-18  2:39       ` Zhang, Helin
  0 siblings, 1 reply; 11+ messages in thread
From: David Marchand @ 2014-09-17 14:03 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

On Wed, Sep 17, 2014 at 10:50 AM, Zhang, Helin <helin.zhang@intel.com>
wrote:

>  For the ‘extended tag’, it is defined in the PCIe spec, but not all
> BIOSes actually implement it. Enabling it in the BIOS or at runtime are two
> ways of doing the same thing. I don’t think it can be configured per PCI
> device in the BIOS, so we don’t need to do that per PCI device in DPDK,
> right? Actually we don’t want to touch PCIe settings in DPDK code; that’s
> why we want to leave the BIOS config as it is by default. If there is no
> better choice, we can do it in DPDK by changing configurations.
>

- Ok, then if we can make a runtime decision (at the dpdk level), there is no
need for a bios configuration and there is no need for a build option.
Why don't we get rid of this option?

As far as the per-device runtime configuration is concerned, I want to make
sure this pci configuration will not break other "igb_uio" pci devices.
If Intel can tell for sure this won't break other devices, then fine, we
can go and enable this for all "igb_uio" pci devices.


- By the way, there is also the CONFIG_MAX_READ_REQUEST_SIZE option, which
seems to be disabled (or at least its value of 0 seems to say so).
What is its purpose?



>
> For ‘CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n’ as the default, we want to
> support 32-byte rx descriptors by default, for two reasons:
>
> One is that 32-byte rx descriptors provide more powerful features and
> more offloads.
>
> The other is that the Linux PF host uses 32-byte rx descriptors by default,
> which might not be changeable; to support a Linux PF host, it is better
> for the DPDK VF to use 32-byte rx descriptors by default as well.
>

Ok, good to know.

Thanks.

-- 
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-17 14:03     ` David Marchand
@ 2014-09-18  2:39       ` Zhang, Helin
  2014-09-18  8:57         ` David Marchand
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-09-18  2:39 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

Hi David

From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Wednesday, September 17, 2014 10:03 PM
To: Zhang, Helin
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!

On Wed, Sep 17, 2014 at 10:50 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
For the ‘extended tag’, it is defined in the PCIe spec, but not all BIOSes actually implement it. Enabling it in the BIOS or at runtime are two ways of doing the same thing. I don’t think it can be configured per PCI device in the BIOS, so we don’t need to do that per PCI device in DPDK, right? Actually we don’t want to touch PCIe settings in DPDK code; that’s why we want to leave the BIOS config as it is by default. If there is no better choice, we can do it in DPDK by changing configurations.

- Ok, then if we can make a runtime decision (at the dpdk level), there is no need for a bios configuration and there is no need for a build option.
Why don't we get rid of this option?

[Helin] Initially, we wanted to do that for BIOSes that do not implement it. That way it only needs to be set once per PCI device during initialization. Sure, that might not be the best option, but it is the easiest way. For Linux end users, the best option could be the ‘setpci’ command, which can enable ‘extended_tag’ per PCI device.
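For illustration only (this is not an official DPDK script, and the device address is just an example), the Extended Tag Field Enable bit is bit 8 of the PCIe Device Control register, so setpci can flip it per device like this:

  # Read the 16-bit Device Control register (offset 8 in the PCI Express capability)
  val=$(setpci -s 82:00.0 CAP_EXP+8.w)
  # Write it back with bit 8 (Extended Tag Field Enable) set
  setpci -s 82:00.0 CAP_EXP+8.w=$(printf %04x $((0x$val | 0x100)))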

As far as the per-device runtime configuration is concerned, I want to make sure this pci configuration will not break other "igb_uio" pci devices.
If Intel can tell for sure this won't break other devices, then fine, we can go and enable this for all "igb_uio" pci devices.

[Helin] It is in the PCIe specification, and enabling it generally provides better performance. But I cannot confirm that it would not break any other devices, as I have not validated all of them. If you are really concerned about it, ‘setpci’ can be the best option for you. We can add a script for that later.

- By the way, there is also the CONFIG_MAX_READ_REQUEST_SIZE option, which seems to be disabled (or at least its value of 0 seems to say so).
What is its purpose?

[Helin] Yes, it was added for performance tuning long ago. But now it seems to contribute nothing, or too little, to the performance numbers, so I just skip it. The default value does not touch the PCIe registers; it just leaves them as they are.


For ‘CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n’ as the default, we want to support 32-byte rx descriptors by default, for two reasons:
One is that 32-byte rx descriptors provide more powerful features and more offloads.
The other is that the Linux PF host uses 32-byte rx descriptors by default, which might not be changeable; to support a Linux PF host, it is better for the DPDK VF to use 32-byte rx descriptors by default as well.

Ok, good to know.

Thanks.

--
David Marchand

Regards,
Helin


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-18  2:39       ` Zhang, Helin
@ 2014-09-18  8:57         ` David Marchand
  2014-09-19  3:43           ` Zhang, Helin
  0 siblings, 1 reply; 11+ messages in thread
From: David Marchand @ 2014-09-18  8:57 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

Hello Helin,

On Thu, Sep 18, 2014 at 4:39 AM, Zhang, Helin <helin.zhang@intel.com> wrote:

>  Hi David
>
>
>
> From: David Marchand [mailto:david.marchand@6wind.com]
> Sent: Wednesday, September 17, 2014 10:03 PM
> To: Zhang, Helin
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how
> to achieve the best performance!
>
>
>
> On Wed, Sep 17, 2014 at 10:50 AM, Zhang, Helin <helin.zhang@intel.com>
> wrote:
>
>  For the ‘extended tag’, it is defined in the PCIe spec, but not all
> BIOSes actually implement it. Enabling it in the BIOS or at runtime are two
> ways of doing the same thing. I don’t think it can be configured per PCI
> device in the BIOS, so we don’t need to do that per PCI device in DPDK,
> right? Actually we don’t want to touch PCIe settings in DPDK code; that’s
> why we want to leave the BIOS config as it is by default. If there is no
> better choice, we can do it in DPDK by changing configurations.
>
>
>
> - Ok, then if we can make a runtime decision (at the dpdk level), there is no
> need for a bios configuration and there is no need for a build option.
>
> Why don't we get rid of this option?
>
>
>
> [Helin] Initially, we wanted to do that for BIOSes that do not implement
> it. That way it only needs to be set once per PCI device during
> initialization. Sure, that might not be the best option, but it is the
> easiest way. For Linux end users, the best option could be the ‘setpci’
> command, which can enable ‘extended_tag’ per PCI device.
>

I am not sure I can see how easy it is, since you are forcing this through a
build option.
Anyway, all this knowledge should be in the documentation and not in an
obscure build option that looks to be useless in the end.

The more I look at this, the more I think we do not have a good enough
argument for this change in eal / igb_uio yet.

We have something that gives "better performance" on "some server" with
"some bios".



>
> As far as the per-device runtime configuration is concerned, I want to
> make sure this pci configuration will not break other "igb_uio" pci devices.
>
> If Intel can tell for sure this won't break other devices, then fine, we
> can go and enable this for all "igb_uio" pci devices.
>
>
>
> [Helin] It is in the PCIe specification, and enabling it generally provides
> better performance. But I cannot confirm that it would not break any
> other devices, as I have not validated all of them. If you are really
> concerned about it, ‘setpci’ can be the best option for you. We can add a
> script for that later.
>

A script is fine, but documentation is important too: I would say that we
need an explicit list of platforms and nics which support this.



>
> - By the way, there is also the CONFIG_MAX_READ_REQUEST_SIZE option, which
> seems to be disabled (or at least its value of 0 seems to say so).
>
> What is its purpose?
>
>
>
> [Helin] Yes, it was added for performance tuning long ago. But now it
> seems to contribute nothing, or too little, to the performance numbers, so
> I just skip it. The default value does not touch the PCIe registers; it
> just leaves them as they are.
>

It came to dpdk.org not so long ago (somewhere around 1.7.0 ...).
If this code has had no use for "so long", why did it end up on dpdk.org?
Why should we keep it?


Thanks.

-- 
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-18  8:57         ` David Marchand
@ 2014-09-19  3:43           ` Zhang, Helin
  2014-10-15  9:41             ` Thomas Monjalon
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-09-19  3:43 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

Hi David

I agree with you that we need to re-think these two configurations. Thank you very much for the good comments!

My idea on it could be,

1. Write a script that uses ‘setpci’ to change the PCI configuration. The end user can decide which PCI devices need to be changed.

2. Add code to change that PCI configuration in the i40e PMD only, as it seems nobody else needs it so far.

Regards,
Helin

From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Thursday, September 18, 2014 4:58 PM
To: Zhang, Helin
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!

Hello Helin,

On Thu, Sep 18, 2014 at 4:39 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
Hi David

From: David Marchand [mailto:david.marchand@6wind.com]
Sent: Wednesday, September 17, 2014 10:03 PM
To: Zhang, Helin
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!

On Wed, Sep 17, 2014 at 10:50 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
For the ‘extended tag’, it is defined in the PCIe spec, but not all BIOSes actually implement it. Enabling it in the BIOS or at runtime are two ways of doing the same thing. I don’t think it can be configured per PCI device in the BIOS, so we don’t need to do that per PCI device in DPDK, right? Actually we don’t want to touch PCIe settings in DPDK code; that’s why we want to leave the BIOS config as it is by default. If there is no better choice, we can do it in DPDK by changing configurations.

- Ok, then if we can make a runtime decision (at the dpdk level), there is no need for a bios configuration and there is no need for a build option.
Why don't we get rid of this option?

[Helin] Initially, we wanted to do that for BIOSes that do not implement it. That way it only needs to be set once per PCI device during initialization. Sure, that might not be the best option, but it is the easiest way. For Linux end users, the best option could be the ‘setpci’ command, which can enable ‘extended_tag’ per PCI device.

I am not sure I can see how easy it is, since you are forcing this through a build option.
Anyway, all this knowledge should be in the documentation and not in an obscure build option that looks to be useless in the end.

The more I look at this, the more I think we do not have a good enough argument for this change in eal / igb_uio yet.

We have something that gives "better performance" on "some server" with "some bios".




As far as the per-device runtime configuration is concerned, I want to make sure this pci configuration will not break other "igb_uio" pci devices.
If Intel can tell for sure this won't break other devices, then fine, we can go and enable this for all "igb_uio" pci devices.

[Helin] It is in the PCIe specification, and enabling it generally provides better performance. But I cannot confirm that it would not break any other devices, as I have not validated all of them. If you are really concerned about it, ‘setpci’ can be the best option for you. We can add a script for that later.

A script is fine, but documentation is important too: I would say that we need an explicit list of platforms and nics which support this.



- By the way, there is also the CONFIG_MAX_READ_REQUEST_SIZE option, which seems to be disabled (or at least its value of 0 seems to say so).
What is its purpose?

[Helin] Yes, it was added for performance tuning long ago. But now it seems to contribute nothing, or too little, to the performance numbers, so I just skip it. The default value does not touch the PCIe registers; it just leaves them as they are.

It came to dpdk.org not so long ago (somewhere around 1.7.0 ...).
If this code has had no use for "so long", why did it end up on dpdk.org?
Why should we keep it?


Thanks.

--
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-09-19  3:43           ` Zhang, Helin
@ 2014-10-15  9:41             ` Thomas Monjalon
  2014-10-16  0:43               ` Zhang, Helin
  0 siblings, 1 reply; 11+ messages in thread
From: Thomas Monjalon @ 2014-10-15  9:41 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

Hi Helin,

2014-09-19 03:43, Zhang, Helin:
> My idea on it could be,
> 1. Write a script that uses ‘setpci’ to change the PCI configuration.
> The end user can decide which PCI devices need to be changed.
> 2. Add code to change that PCI configuration in the i40e PMD only, as
> it seems nobody else needs it so far.

The second solution seems better because it is more integrated and automatic.
But I would like to have some EAL functions to access the PCI configuration space.
These functions would have Linux and BSD implementations.
Then the PMD could change the configuration, if allowed by a run-time
option, and would notify the change with a warning/log.
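For reference (the device address is just an example), the configuration space such EAL helpers would wrap is already exposed by the Linux kernel through sysfs:

  # Dump the standard 64-byte config header of a device; the same file is
  # writable by root, which a Linux implementation of these helpers could build on
  hexdump -C -n 64 /sys/bus/pci/devices/0000:82:00.0/config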

Thanks for keeping us notified of your progress.
-- 
Thomas


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-10-15  9:41             ` Thomas Monjalon
@ 2014-10-16  0:43               ` Zhang, Helin
  2015-02-09 12:12                 ` David Marchand
  0 siblings, 1 reply; 11+ messages in thread
From: Zhang, Helin @ 2014-10-16  0:43 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

Hi Thomas

Yes, your proposal is the perfect one, and also the most complicated one. I was thinking of that one as well, but we did not have enough time for it in our 1.8 timeframe.
In the long run, I agree with implementing EAL functions to access the PCI config space directly. I will try to put it in our plan as soon as possible, if there are no objections.

For now, I think the quickest and easiest way might be to write a script around ‘setpci’, the Linux command. It is harmless to our code base, and we can remove it when we have a better choice. What do you think?

Thank you very much for the great comments on this topic! I really like it!

Regards,
Helin

From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
Sent: Wednesday, October 15, 2014 5:42 PM
To: Zhang, Helin
Cc: dev@dpdk.org; David Marchand
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!


Hi Helin,

2014-09-19 03:43, Zhang, Helin:
> My idea on it could be,
> 1. Write a script that uses ‘setpci’ to change the PCI configuration.
> The end user can decide which PCI devices need to be changed.
> 2. Add code to change that PCI configuration in the i40e PMD only, as
> it seems nobody else needs it so far.

The second solution seems better because it is more integrated and automatic.
But I would like to have some EAL functions to access the PCI configuration space.
These functions would have Linux and BSD implementations.
Then the PMD could change the configuration, if allowed by a run-time
option, and would notify the change with a warning/log.

Thanks for keeping us notified of your progress.
--
Thomas


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2014-10-16  0:43               ` Zhang, Helin
@ 2015-02-09 12:12                 ` David Marchand
  2015-02-10  0:27                   ` Zhang, Helin
  0 siblings, 1 reply; 11+ messages in thread
From: David Marchand @ 2015-02-09 12:12 UTC (permalink / raw)
  To: Zhang, Helin; +Cc: dev

Hello Helin,

On Thu, Oct 16, 2014 at 2:43 AM, Zhang, Helin <helin.zhang@intel.com> wrote:

>  Hi Thomas
>
>
>
> Yes, your proposal is the perfect one, and also the most complicated one. I
> was thinking of that one as well, but we did not have enough time for it
> in our 1.8 timeframe.
>
> In the long run, I agree with implementing EAL functions to access the PCI
> config space directly. I will try to put it in our plan as soon as
> possible, if there are no objections.
>
> For now, I think the quickest and easiest way might be to write a
> script around ‘setpci’, the Linux command. It is harmless to our code
> base, and we can remove it when we have a better choice. What do you think?
>
> Thank you very much for the great comments on this topic! I really like it!
>

Did you make any progress on this?
Actually, looking at Stephen's patches (
http://dpdk.org/dev/patchwork/patch/3024/), I think we could go with this
approach once both uio and vfio are fine.


-- 
David Marchand


* Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!
  2015-02-09 12:12                 ` David Marchand
@ 2015-02-10  0:27                   ` Zhang, Helin
  0 siblings, 0 replies; 11+ messages in thread
From: Zhang, Helin @ 2015-02-10  0:27 UTC (permalink / raw)
  To: David Marchand; +Cc: dev

Hi David

It seems we have made only minor progress: we have a script which uses setpci to do that.
  [dpdk-dev] scripts: enable extended tag of PCIe (http://www.dpdk.org/dev/patchwork/patch/2762/)

Yes, Stephen's patch could be the direction we need to follow. Thanks for pointing that out!

Regards,
Helin


From: David Marchand [mailto:david.marchand@6wind.com] 
Sent: Monday, February 9, 2015 8:12 PM
To: Zhang, Helin
Cc: Thomas Monjalon; dev@dpdk.org
Subject: Re: [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance!

Hello Helin, 

On Thu, Oct 16, 2014 at 2:43 AM, Zhang, Helin <helin.zhang@intel.com> wrote:
Hi Thomas
 
Yes, your proposal is the perfect one, and also the most complicated one. I was thinking of that one as well, but we did not have enough time for it in our 1.8 timeframe.
In the long run, I agree with implementing EAL functions to access the PCI config space directly. I will try to put it in our plan as soon as possible, if there are no objections.

For now, I think the quickest and easiest way might be to write a script around ‘setpci’, the Linux command. It is harmless to our code base, and we can remove it when we have a better choice. What do you think?
 
Thank you very much for the great comments on this topic! I really like it!

Did you make any progress on this?
Actually, looking at Stephen's patches (http://dpdk.org/dev/patchwork/patch/3024/), I think we could go with this approach once both uio and vfio are fine.


-- 
David Marchand




Thread overview: 11+ messages
2014-09-17  4:18 [dpdk-dev] i40e: Steps and required configurations of how to achieve the best performance! Zhang, Helin
2014-09-17  8:33 ` David Marchand
2014-09-17  8:50   ` Zhang, Helin
2014-09-17 14:03     ` David Marchand
2014-09-18  2:39       ` Zhang, Helin
2014-09-18  8:57         ` David Marchand
2014-09-19  3:43           ` Zhang, Helin
2014-10-15  9:41             ` Thomas Monjalon
2014-10-16  0:43               ` Zhang, Helin
2015-02-09 12:12                 ` David Marchand
2015-02-10  0:27                   ` Zhang, Helin
