patches for DPDK stable branches
* [dpdk-stable] [PATCH] doc: fix description on intel VFs
@ 2017-04-02  0:23 Jingjing Wu
  2017-04-02 11:36 ` Mcnamara, John
  0 siblings, 1 reply; 3+ messages in thread
From: Jingjing Wu @ 2017-04-02  0:23 UTC (permalink / raw)
  To: john.mcnamara; +Cc: dev, jingjing.wu, stable

This patch corrects the description of the Physical and Virtual Function
Infrastructure of Intel NICs. The RSS part of the description belongs
to ixgbe, not i40e.
This patch also adds notes describing the queue numbers on Intel
X710/XL710 NICs.

Fixes: b9fcaeec5fc0 ("doc: add ixgbe VF RSS guide")
Cc: stable@dpdk.org
Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
---
 doc/guides/nics/intel_vf.rst | 88 ++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 45 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 91cbae6..1f03fdc 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -124,12 +124,12 @@ However:
 
     The above is an important consideration to take into account when targeting specific packets to a selected port.
 
-Intel® Fortville 10/40 Gigabit Ethernet Controller VF Infrastructure
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
-globally per Intel® Fortville 10/40 Gigabit Ethernet Controller NIC device.
-Each VF can have a maximum of 16 queue pairs.
+globally per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
+The number of queue pairs per VF can be configured by ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` in the ``config`` file.
 The Physical Function in host could be either configured by the Linux* i40e driver
 (in the case of the Linux Kernel-based Virtual Machine [KVM]) or by DPDK PMD PF driver.
 When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by DPDK based application.
@@ -156,47 +156,6 @@ For example,
 
     Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
 
-*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:
-
-    Same steps as above to install the modules of uio, igb_uio, specify max_vfs for PCI device, and
-    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
-
-    The available queue number(at most 4) per VF depends on the total number of pool, which is
-    determined by the max number of VF at PF initialization stage and the number of queue specified
-    in config:
-
-    *   If the max number of VF is set in the range of 1 to 32:
-
-        If the number of rxq is specified as 4(e.g. '--rxq 4' in testpmd), then there are totally 32
-        pools(ETH_32_POOLS), and each VF could have 4 or less(e.g. 2) queues;
-
-        If the number of rxq is specified as 2(e.g. '--rxq 2' in testpmd), then there are totally 32
-        pools(ETH_32_POOLS), and each VF could have 2 queues;
-
-    *   If the max number of VF is in the range of 33 to 64:
-
-        If the number of rxq is 4 ('--rxq 4' in testpmd), then error message is expected as rxq is not
-        correct at this case;
-
-        If the number of rxq is 2 ('--rxq 2' in testpmd), then there is totally 64 pools(ETH_64_POOLS),
-        and each VF have 2 queues;
-
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated(max_vfs >= 1).
-    It also needs config VF RSS information like hash function, RSS key, RSS key length.
-
-    .. code-block:: console
-
-        testpmd -l 0-15 -n 4 -- --coremask=<core-mask> --rxq=4 --txq=4 -i
-
-.. Note: The preferred option is -c XX or -l n-n,n instead of a coremask value. The --coremask option
-         is a feature of the application and not DPDK EAL options.
-
-    The limitation for VF RSS on Intel® 82599 10 Gigabit Ethernet Controller is:
-    The hash and key are shared among PF and all VF, the RETA table with 128 entries is also shared
-    among PF and all VF; So it could not to provide a method to query the hash and reta content per
-    VF on guest, while, if possible, please query them on host(PF) for the shared RETA information.
-
 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
@@ -210,6 +169,9 @@ However:
 
     The above is an important consideration to take into account when targeting specific packets to a selected port.
 
+    For the Intel® X710/XL710 Gigabit Ethernet Controller, queues are in pairs. One queue pair means one receive queue and
+    one transmit queue. The default number of queue pairs per VF is 4, and it can be up to 16.
+
 Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -244,6 +206,42 @@ For example,
 
     Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
 
+*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:
+
+    Same steps as above to install the modules of uio, igb_uio, specify max_vfs for PCI device, and
+    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
+
+    The available queue number(at most 4) per VF depends on the total number of pool, which is
+    determined by the max number of VF at PF initialization stage and the number of queue specified
+    in config:
+
+    *   If the max number of VFs(max_vfs) is set in the range of 1 to 32:
+
+        If the number of Rx queues is specified as 4(e.g. `--rxq=4` in testpmd), then there are totally 32
+        pools(ETH_32_POOLS), and each VF could have 4 Rx queues;
+
+        If the number of Rx queues is specified as 2(e.g. `--rxq=2` in testpmd), then there are totally 32
+        pools(ETH_32_POOLS), and each VF could have 2 Rx queues;
+
+    *   If the max number of VFs(max_vfs) is in the range of 33 to 64:
+
+        If the number of Rx queues in specified as 4 (`--rxq=4` in testpmd), then error message is expected
+        as `rxq` is not correct at this case;
+
+        If the number of rxq is 2 (`--rxq=2` in testpmd), then there is totally 64 pools(ETH_64_POOLS),
+        and each VF have 2 Rx queues;
+
+    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
+    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated(max_vfs >= 1).
+    It also needs config VF RSS information like hash function, RSS key, RSS key length.
+
+.. note::
+
+    The limitation for VF RSS on Intel® 82599 10 Gigabit Ethernet Controller is:
+    The hash and key are shared among PF and all VF, the RETA table with 128 entries is also shared
+    among PF and all VF; So it could not to provide a method to query the hash and reta content per
+    VF on guest, while, if possible, please query them on host(PF) for the shared RETA information.
+
 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
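
As a side note (not part of the patch): a VF application can check at run time
how many queue pairs the i40e PF actually granted it. Below is a minimal
sketch against the 17.x-era ethdev API; the function name and the initialized
``port_id`` are hypothetical, and the upper bound comes from the compile-time
option ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` mentioned above (default 4,
maximum 16).

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Sketch: print the queue-pair limits granted to a VF port. Assumes
     * rte_eal_init() has already run and port_id refers to a probed
     * i40e VF (17.x-era API, where port ids are uint8_t). */
    static void show_vf_queue_limits(uint8_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);
        printf("port %u: max %u Rx queues, max %u Tx queues\n",
               port_id, dev_info.max_rx_queues, dev_info.max_tx_queues);
    }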
-- 
2.4.11

* Re: [dpdk-stable] [PATCH] doc: fix description on intel VFs
  2017-04-02  0:23 [dpdk-stable] [PATCH] doc: fix description on intel VFs Jingjing Wu
@ 2017-04-02 11:36 ` Mcnamara, John
  0 siblings, 0 replies; 3+ messages in thread
From: Mcnamara, John @ 2017-04-02 11:36 UTC (permalink / raw)
  To: Wu, Jingjing; +Cc: dev, stable



> -----Original Message-----
> From: Wu, Jingjing
> Sent: Sunday, April 2, 2017 1:24 AM
> To: Mcnamara, John <john.mcnamara@intel.com>
> Cc: dev@dpdk.org; Wu, Jingjing <jingjing.wu@intel.com>; stable@dpdk.org
> Subject: [PATCH] doc: fix description on intel VFs
> 
> This patch corrects the description of the Physical and Virtual Function
> Infrastructure of Intel NICs. The RSS part of the description belongs
> to ixgbe, not i40e.
> This patch also adds notes describing the queue numbers on Intel
> X710/XL710 NICs.
> 
> Fixes: b9fcaeec5fc0 ("doc: add ixgbe VF RSS guide")

> ...
> +
> +    *   If the max number of VFs(max_vfs) is set in the range of 1 to 32:
> +
> +        If the number of Rx queues is specified as 4(e.g. `--rxq=4` in
> testpmd), then there are totally 32
> +        pools(ETH_32_POOLS), and each VF could have 4 Rx queues;

I know you are only moving this section of the docs, but there are a few issues
that are worth fixing while you are doing it.

The rxq values require two backquotes, i.e. ``--rxq=4``, in several places.

Also, make sure that there is a space before each open parenthesis in this
section. There are several places where there isn't one: pools(ETH_32_POOLS).


* [dpdk-stable] [PATCH] doc: fix description on intel VFs
@ 2017-05-22  7:54 Jingjing Wu
  0 siblings, 0 replies; 3+ messages in thread
From: Jingjing Wu @ 2017-05-22  7:54 UTC (permalink / raw)
  To: stable; +Cc: jingjing.wu

[ backported from upstream commit e05e340eb231fca5b6467c7cd08419121bb3b4f9 ]

This patch corrects the description of the Physical and Virtual Function
Infrastructure of Intel NICs. The RSS part of the description belongs
to ixgbe, not i40e.
This patch also adds notes describing the queue numbers on Intel
X710/XL710 NICs.

Fixes: b9fcaeec5fc0 ("doc: add ixgbe VF RSS guide")

Signed-off-by: Jingjing Wu <jingjing.wu@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/nics/intel_vf.rst | 85 ++++++++++++++++++++++----------------------
 1 file changed, 43 insertions(+), 42 deletions(-)

diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 9fe4209..ebf5809 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -124,12 +124,12 @@ However:
 
     The above is an important consideration to take into account when targeting specific packets to a selected port.
 
-Intel® Fortville 10/40 Gigabit Ethernet Controller VF Infrastructure
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
-globally per Intel® Fortville 10/40 Gigabit Ethernet Controller NIC device.
-Each VF can have a maximum of 16 queue pairs.
+globally per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
+The number of queue pairs per VF can be configured by ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` in the ``config`` file.
 The Physical Function in host could be either configured by the Linux* i40e driver
 (in the case of the Linux Kernel-based Virtual Machine [KVM]) or by DPDK PMD PF driver.
 When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by DPDK based application.
@@ -156,44 +156,6 @@ For example,
 
     Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
 
-*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:
-
-    Same steps as above to install the modules of uio, igb_uio, specify max_vfs for PCI device, and
-    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
-
-    The available queue number(at most 4) per VF depends on the total number of pool, which is
-    determined by the max number of VF at PF initialization stage and the number of queue specified
-    in config:
-
-    *   If the max number of VF is set in the range of 1 to 32:
-
-        If the number of rxq is specified as 4(e.g. '--rxq 4' in testpmd), then there are totally 32
-        pools(ETH_32_POOLS), and each VF could have 4 or less(e.g. 2) queues;
-
-        If the number of rxq is specified as 2(e.g. '--rxq 2' in testpmd), then there are totally 32
-        pools(ETH_32_POOLS), and each VF could have 2 queues;
-
-    *   If the max number of VF is in the range of 33 to 64:
-
-        If the number of rxq is 4 ('--rxq 4' in testpmd), then error message is expected as rxq is not
-        correct at this case;
-
-        If the number of rxq is 2 ('--rxq 2' in testpmd), then there is totally 64 pools(ETH_64_POOLS),
-        and each VF have 2 queues;
-
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated(max_vfs >= 1).
-    It also needs config VF RSS information like hash function, RSS key, RSS key length.
-
-    .. code-block:: console
-
-        testpmd -c 0xffff -n 4 -- --coremask=<core-mask> --rxq=4 --txq=4 -i
-
-    The limitation for VF RSS on Intel® 82599 10 Gigabit Ethernet Controller is:
-    The hash and key are shared among PF and all VF, the RETA table with 128 entries is also shared
-    among PF and all VF; So it could not to provide a method to query the hash and reta content per
-    VF on guest, while, if possible, please query them on host(PF) for the shared RETA information.
-
 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
@@ -207,6 +169,9 @@ However:
 
     The above is an important consideration to take into account when targeting specific packets to a selected port.
 
+    For the Intel® X710/XL710 Gigabit Ethernet Controller, queues are in pairs. One queue pair means one receive queue and
+    one transmit queue. The default number of queue pairs per VF is 4, and it can be up to 16.
+
 Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -241,6 +206,42 @@ For example,
 
     Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
 
+*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:
+
+    Same steps as above to install the modules of uio, igb_uio, specify max_vfs for PCI device, and
+    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.
+
+    The available queue number (at most 4) per VF depends on the total number of pools, which is
+    determined by the max number of VFs at the PF initialization stage and the number of queues specified
+    in config:
+
+    *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
+
+        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are 32
+        pools (ETH_32_POOLS) in total, and each VF could have 4 Rx queues;
+
+        If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are 32
+        pools (ETH_32_POOLS) in total, and each VF could have 2 Rx queues;
+
+    *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
+
+        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
+        because ``rxq`` is not valid in this case;
+
+        If the number of Rx queues is 2 (``--rxq=2`` in testpmd), then there are 64 pools (ETH_64_POOLS) in total,
+        and each VF has 2 Rx queues;
+
+    On the host, to enable VF RSS functionality, the Rx mq mode should be set to ETH_MQ_RX_VMDQ_RSS
+    or ETH_MQ_RX_RSS, and SR-IOV mode should be activated (max_vfs >= 1).
+    The VF RSS information, such as the hash function, RSS key and RSS key length, also needs to be configured.
+
+.. note::
+
+    The limitation of VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is that
+    the hash and key are shared among the PF and all VFs, and the RETA table with 128 entries is also
+    shared among them. So there is no way to query the hash or RETA content per VF on the guest;
+    if needed, query them on the host for the shared RETA information.
+
 Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
 When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
 represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
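
As an illustration of the host-side setup described in the section moved
above, the following is a minimal sketch (not taken from the patch) of a PF
ethdev configuration that enables VF RSS on ixgbe, using the 17.x-era API;
``enable_vf_rss`` and ``port_id`` are hypothetical names, and error handling
is omitted.

    #include <rte_ethdev.h>

    /* Sketch: configure the ixgbe PF port for VMDq+RSS so that SR-IOV VFs
     * can use RSS. The hash function, key and RETA are shared by the PF
     * and all VFs, as the note above explains. */
    static int enable_vf_rss(uint8_t port_id)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mq_mode = ETH_MQ_RX_VMDQ_RSS },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_key = NULL, /* NULL keeps the driver default key */
                    .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
                },
            },
        };

        /* 4 Rx/Tx queues per pool, matching --rxq=4/--txq=4 above; valid
         * while max_vfs is in the 1..32 range (ETH_32_POOLS). */
        return rte_eth_dev_configure(port_id, 4, 4, &conf);
    }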
-- 
2.4.11
