DPDK patches and discussions
* [dpdk-dev] [PATCH] doc: add Vector FM10K introductions
@ 2015-12-23  8:59 Chen Jing D(Mark)
  2016-02-06  6:48 ` [dpdk-dev] [PATCH v2] " Chen Jing D(Mark)
  0 siblings, 1 reply; 7+ messages in thread
From: Chen Jing D(Mark) @ 2015-12-23  8:59 UTC (permalink / raw)
  To: dev

From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>

Add introductions on how to enable Vector FM10K Rx/Tx functions,
the preconditions and assumptions on Rx/Tx configuration parameters.
The new content also lists the limitations of the vector functions, so that
applications/customers can better select the most suitable Rx/Tx functions.

Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
 doc/guides/nics/fm10k.rst |   89 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 89 insertions(+), 0 deletions(-)

diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 4206b7f..54b761c 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -34,6 +34,95 @@ FM10K Poll Mode Driver
 The FM10K poll mode driver library provides support for the Intel FM10000
 (FM10K) family of 40GbE/100GbE adapters.
 
+Vector PMD for FM10K
+--------------------
+Vector PMD uses Intel® SIMD instructions to optimize packet I/O.
+It improves load/store bandwidth efficiency of L1 data cache by using a wider
+SSE/AVX register 1 (1).
+The wider register gives space to hold multiple packet buffers so as to save
+instruction number when processing bulk of packets.
+
+There is no change to PMD API. The RX/TX handler are the only two entries for
+vPMD packet I/O. They are transparently registered at runtime RX/TX execution
+if all condition checks pass.
+
+1.  To date, only an SSE version of FM10K vPMD is available.
+    To ensure that vPMD is in the binary code, ensure that the option
+    CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y is in the configure file.
+
+Some constraints apply as pre-conditions for specific optimizations on bulk
+packet transfers. The following sections explain RX and TX constraints in the
+vPMD.
+
+RX Constraints
+~~~~~~~~~~~~~~
+
+Prerequisites and Pre-conditions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Number of descriptor ring must be power of 2. This is the assumptions for
+Vector RX. With this pre-condition, ring pointer can easily scroll back to head
+after hitting tail without conditional check. Besides that, Vector RX can use
+it to do bit mask by ``ring_size - 1``.
+
+Feature not Supported by Vector RX PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Some features are not supported when trying to increase the throughput in vPMD.
+They are:
+
+*   IEEE1588
+
+*   FDIR
+
+*   Header split
+
+*   RX checksum offload
+
+Other features are supported using optional MACRO configuration. They include:
+
+*   HW VLAN strip
+
+*   L3/L4 packet type
+
+To enabled by RX_OLFLAGS (RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y)
+
+To guarantee the constraint, configuration flags in dev_conf.rxmode will be
+checked:
+
+*   hw_vlan_extend
+
+*   hw_ip_checksum
+
+*   header_split
+
+*   fdir_conf->mode
+
+RX Burst Size
+^^^^^^^^^^^^^
+
+As vPMD is focused on high throughput, which processes 4 packets at a time.
+So it assumes that the RX burst should be greater than 4 per burst. It returns
+zero if using nb_pkt < 4 in the receive handler. If nb_pkt is not multiple of
+4, a floor alignment will be applied.
+
+TX Constraint
+~~~~~~~~~~~~~
+
+Feature not Supported by TX Vector PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TX vPMD only works when txq_flags is set to FM10K_SIMPLE_TX_FLAG.
+This means that it does not support TX multi-segment, VLAN offload and TX csum
+offload. The following MACROs are used for these three features:
+
+*   ETH_TXQ_FLAGS_NOMULTSEGS
+
+*   ETH_TXQ_FLAGS_NOVLANOFFL
+
+*   ETH_TXQ_FLAGS_NOXSUMSCTP
+
+*   ETH_TXQ_FLAGS_NOXSUMUDP
+
+*   ETH_TXQ_FLAGS_NOXSUMTCP
 
 Limitations
 -----------
-- 
1.7.7.6
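
A short illustration of the wider-register point made in the patch above: one
128-bit SSE load/store can move two 64-bit mbuf pointers at once, which is
where the instruction-count saving comes from. This is an illustrative sketch
only, not code taken from the fm10k driver:

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stdint.h>

    /* One 128-bit load/store moves two 64-bit pointers in a single pair of
     * instructions, roughly halving the work of the equivalent scalar copy. */
    static inline void
    copy_two_pointers(void **dst, void * const *src)
    {
            __m128i v = _mm_loadu_si128((const __m128i *)src);
            _mm_storeu_si128((__m128i *)dst, v);
    }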

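
The power-of-two ring size pre-condition exists so the RX index can wrap
around with a plain bit mask (``ring_size - 1``) instead of a conditional
check. A minimal sketch of that trick, again illustrative rather than driver
code:

    #include <stdint.h>

    #define RING_SIZE 512                 /* must be a power of two */

    /* (idx + 1) & (RING_SIZE - 1) behaves like (idx + 1) % RING_SIZE but
     * needs no branch or division, which is why the vector RX path insists
     * on a power-of-two descriptor count. */
    static inline uint16_t
    next_desc(uint16_t idx)
    {
            return (idx + 1) & (RING_SIZE - 1);
    }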

* [dpdk-dev] [PATCH v2] doc: add Vector FM10K introductions
  2015-12-23  8:59 [dpdk-dev] [PATCH] doc: add Vector FM10K introductions Chen Jing D(Mark)
@ 2016-02-06  6:48 ` Chen Jing D(Mark)
  2016-02-22 13:47   ` Mcnamara, John
  2016-02-26  5:56   ` [dpdk-dev] [PATCH v3] " Chen Jing D(Mark)
  0 siblings, 2 replies; 7+ messages in thread
From: Chen Jing D(Mark) @ 2016-02-06  6:48 UTC (permalink / raw)
  To: dev

From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>

Add introductions on how to enable Vector FM10K Rx/Tx functions,
the preconditions and assumptions on Rx/Tx configuration parameters.
The new content also lists the limitations of the vector functions, so that
applications/customers can better select the most suitable Rx/Tx functions.

Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
v2:
 - rebase to latest repo
 - Reword a few sentences that did not follow the coding style.

 doc/guides/nics/fm10k.rst |   98 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 4206b7f..a502ffd 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -35,6 +35,104 @@ The FM10K poll mode driver library provides support for the Intel FM10000
 (FM10K) family of 40GbE/100GbE adapters.
 
 
+Vector PMD for FM10K
+--------------------
+
+Vector PMD (vPMD) uses Intel® SIMD instructions to optimize packet I/O.
+It improves load/store bandwidth efficiency of L1 data cache by using a wider
+SSE/AVX register 1 (1).
+The wider register gives space to hold multiple packet buffers so as to save
+instruction number when processing bulk of packets.
+
+There is no change to PMD API. The RX/TX handler are the only two entries for
+vPMD packet I/O. They are transparently registered at runtime RX/TX execution
+if all condition checks pass.
+
+1.  To date, only an SSE version of FM10K vPMD is available.
+    To ensure that vPMD is in the binary code, ensure that the option
+    CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y is in the configure file.
+
+Some constraints apply as pre-conditions for specific optimizations on bulk
+packet transfers. The following sections explain RX and TX constraints in the
+vPMD.
+
+
+RX Constraints
+~~~~~~~~~~~~~~
+
+
+Prerequisites and Pre-conditions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For Vector RX it is assumed that the number of descriptor ring will be power
+of 2. With this pre-condition, the ring pointer can easily scroll back to the
+head after hitting the tail without conditional check. In addition Vector RX
+can use this assumption to do a bit mask using ``ring_size - 1``.
+
+
+Features not Supported by Vector RX PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some features are not supported when trying to increase the throughput in
+vPMD. They are:
+
+*   IEEE1588
+
+*   Flow director
+
+*   Header split
+
+*   RX checksum offload
+
+Other features are supported using optional MACRO configuration. They include:
+
+*   HW VLAN strip
+
+*   L3/L4 packet type
+
+To enabled by RX_OLFLAGS use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
+
+To guarantee the constraint, the configuration flags in ``dev_conf.rxmode``
+will be checked:
+
+*   ``hw_vlan_extend``
+
+*   ``hw_ip_checksum``
+
+*   ``header_split``
+
+*   ``fdir_conf->mode``
+
+
+RX Burst Size
+^^^^^^^^^^^^^
+
+As vPMD is focused on high throughput, it 4 packets at a time.  So it assumes
+that the RX burst should be greater than 4 per burst. It returns zero if using
+``nb_pkt`` < 4 in the receive handler. If ``nb_pkt`` is not multiple of 4, a
+floor alignment will be applied.
+
+
+TX Constraint
+~~~~~~~~~~~~~
+
+Features not Supported by TX Vector PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TX vPMD only works when ``txq_flags`` is set to ``FM10K_SIMPLE_TX_FLAG``.
+This means that it does not support TX multi-segment, VLAN offload or TX csum
+offload. The following MACROs are used for these three features:
+
+*   ``ETH_TXQ_FLAGS_NOMULTSEGS``
+
+*   ``ETH_TXQ_FLAGS_NOVLANOFFL``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMSCTP``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMUDP``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMTCP``
+
 Limitations
 -----------
 
-- 
1.7.7.6
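
The RX burst size constraint described in the patch means callers get the best
behaviour when they request bursts that are a multiple of 4. A minimal,
illustrative polling loop under that assumption (types follow the rte_ethdev
API of this DPDK generation; the helper name is made up for the example):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32   /* a multiple of 4; values below 4 would return zero */

    static void
    poll_queue(uint8_t port_id, uint16_t queue_id)
    {
            struct rte_mbuf *pkts[RX_BURST];
            uint16_t nb_rx, i;

            /* A burst size that is not a multiple of 4 is floor-aligned by
             * the vector RX handler, so request a multiple of 4 up front. */
            nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST);

            /* For the example, simply drop the received packets. */
            for (i = 0; i < nb_rx; i++)
                    rte_pktmbuf_free(pkts[i]);
    }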


* Re: [dpdk-dev] [PATCH v2] doc: add Vector FM10K introductions
  2016-02-06  6:48 ` [dpdk-dev] [PATCH v2] " Chen Jing D(Mark)
@ 2016-02-22 13:47   ` Mcnamara, John
  2016-02-23  7:37     ` Chen, Jing D
  2016-02-26  5:56   ` [dpdk-dev] [PATCH v3] " Chen Jing D(Mark)
  1 sibling, 1 reply; 7+ messages in thread
From: Mcnamara, John @ 2016-02-22 13:47 UTC (permalink / raw)
  To: Chen, Jing D, dev

> -----Original Message-----
> From: Chen, Jing D
> Sent: Saturday, February 6, 2016 6:48 AM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; Chen, Jing D
> <jing.d.chen@intel.com>
> Subject: [PATCH v2] doc: add Vector FM10K introductions
> 
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
> 
> Add introductions on how to enable Vector FM10K Rx/Tx functions, the
> preconditions and assumptions on Rx/Tx configuration parameters.

Hi Mark,

Thanks for the update. A few minor comments below.



> +Vector PMD (vPMD) uses Intel® SIMD instructions to optimize packet I/O.
> +It improves load/store bandwidth efficiency of L1 data cache by using a
> +wider SSE/AVX register 1 (1).

This should probably be "register (1)"


> +The wider register gives space to hold multiple packet buffers so as to
> +save instruction number when processing bulk of packets.

Maybe a little clearer as:

The wider register gives space to hold multiple packet buffers so as to save
on the number of instructions when bulk processing packets.


> +
> +There is no change to PMD API. The RX/TX handler are the only two
> +entries for vPMD packet I/O. They are transparently registered at
> +runtime RX/TX execution if all condition checks pass.

s/if all condition checks pass./if all conditions are met./


> +As vPMD is focused on high throughput, it 4 packets at a time.  So it

s/it 4 packets at a time/it processes 4 packets at a time/

John



* Re: [dpdk-dev] [PATCH v2] doc: add Vector FM10K introductions
  2016-02-22 13:47   ` Mcnamara, John
@ 2016-02-23  7:37     ` Chen, Jing D
  0 siblings, 0 replies; 7+ messages in thread
From: Chen, Jing D @ 2016-02-23  7:37 UTC (permalink / raw)
  To: Mcnamara, John, dev

Hi, John,

Best Regards,
Mark


> -----Original Message-----
> From: Mcnamara, John
> Sent: Monday, February 22, 2016 9:47 PM
> To: Chen, Jing D; dev@dpdk.org
> Subject: RE: [PATCH v2] doc: add Vector FM10K introductions
> 
> > -----Original Message-----
> > From: Chen, Jing D
> > Sent: Saturday, February 6, 2016 6:48 AM
> > To: dev@dpdk.org
> > Cc: Mcnamara, John <john.mcnamara@intel.com>; Chen, Jing D
> > <jing.d.chen@intel.com>
> > Subject: [PATCH v2] doc: add Vector FM10K introductions
> >
> > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
> >
> > Add introductions on how to enable Vector FM10K Rx/Tx functions, the
> > preconditions and assumptions on Rx/Tx configuration parameters.
> 
> Hi Mark,
> 
> Thanks for the update. A few minor comments below.
> 
> 
> 
> > +Vector PMD (vPMD) uses Intel® SIMD instructions to optimize packet I/O.
> > +It improves load/store bandwidth efficiency of L1 data cache by using a
> > +wider SSE/AVX register 1 (1).
> 
> This should probably be "register (1)"
> 
> 
> > +The wider register gives space to hold multiple packet buffers so as to
> > +save instruction number when processing bulk of packets.
> 
> Maybe a little clearer as:
> 
> The wider register gives space to hold multiple packet buffers so as to save
> on the number of instructions when bulk processing packets.
> 
> 
> > +
> > +There is no change to PMD API. The RX/TX handler are the only two
> > +entries for vPMD packet I/O. They are transparently registered at
> > +runtime RX/TX execution if all condition checks pass.
> 
> s/if all condition checks pass./if all conditions are met./
> 
> 
> > +As vPMD is focused on high throughput, it 4 packets at a time.  So it
> 
> s/it 4 packets at a time/it processes 4 packets at a time/
> 
> John

Many thanks for the comments. I'll change and send a new version soon.



* [dpdk-dev] [PATCH v3] doc: add Vector FM10K introductions
  2016-02-06  6:48 ` [dpdk-dev] [PATCH v2] " Chen Jing D(Mark)
  2016-02-22 13:47   ` Mcnamara, John
@ 2016-02-26  5:56   ` Chen Jing D(Mark)
  2016-03-08  8:06     ` Mcnamara, John
  1 sibling, 1 reply; 7+ messages in thread
From: Chen Jing D(Mark) @ 2016-02-26  5:56 UTC (permalink / raw)
  To: dev

From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>

Add introductions on how to enable Vector FM10K Rx/Tx functions,
the preconditions and assumptions on Rx/Tx configuration parameters.
The new content also lists the limitations of the vector functions, so that
applications/customers can better select the most suitable Rx/Tx functions.

Signed-off-by: Chen Jing D(Mark) <jing.d.chen@intel.com>
---
v3:
 - rebase to dpdk-next-16.04
 - Minor change to reword a few sentences.

v2:
 - rebase to latest repo
 - Reword a few sentences that did not follow the coding style.

 doc/guides/nics/fm10k.rst |   98 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 98 insertions(+), 0 deletions(-)

diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 4206b7f..b97f611 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -35,6 +35,104 @@ The FM10K poll mode driver library provides support for the Intel FM10000
 (FM10K) family of 40GbE/100GbE adapters.
 
 
+Vector PMD for FM10K
+--------------------
+
+Vector PMD (vPMD) uses Intel® SIMD instructions to optimize packet I/O.
+It improves load/store bandwidth efficiency of L1 data cache by using a wider
+SSE/AVX register (1).
+The wider register gives space to hold multiple packet buffers so as to save
+on the number of instructions when bulk processing packets.
+
+There is no change to the PMD API. The RX/TX handlers are the only two entries for
+vPMD packet I/O. They are transparently registered for RX/TX execution at
+runtime if all required conditions are met.
+
+1.  To date, only an SSE version of FM10K vPMD is available.
+    To ensure that vPMD is compiled into the binary, set
+    ``CONFIG_RTE_LIBRTE_FM10K_INC_VECTOR=y`` in the configuration file.
+
+Some constraints apply as pre-conditions for specific optimizations on bulk
+packet transfers. The following sections explain RX and TX constraints in the
+vPMD.
+
+
+RX Constraints
+~~~~~~~~~~~~~~
+
+
+Prerequisites and Pre-conditions
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For Vector RX it is assumed that the number of descriptors in a ring will be a
+power of 2. With this pre-condition, the ring pointer can easily scroll back to the
+head after hitting the tail without a conditional check. In addition Vector RX
+can use this assumption to do a bit mask using ``ring_size - 1``.
+
+
+Features not Supported by Vector RX PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some features are not supported when trying to increase the throughput in
+vPMD. They are:
+
+*   IEEE1588
+
+*   Flow director
+
+*   Header split
+
+*   RX checksum offload
+
+Other features are supported using optional MACRO configuration. They include:
+
+*   HW VLAN strip
+
+*   L3/L4 packet type
+
+To enable these, set ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y`` (the RX_OLFLAGS option).
+
+To guarantee these constraints, the following configuration flags in
+``dev_conf.rxmode`` and ``dev_conf.fdir_conf`` are checked:
+
+*   ``hw_vlan_extend``
+
+*   ``hw_ip_checksum``
+
+*   ``header_split``
+
+*   ``fdir_conf->mode``
+
+
+RX Burst Size
+^^^^^^^^^^^^^
+
+As vPMD is focused on high throughput, it processes 4 packets at a time. So it
+assumes that the RX burst size should be at least 4 packets per burst. It returns
+zero if ``nb_pkt`` < 4 is used in the receive handler. If ``nb_pkt`` is not a
+multiple of 4, a floor alignment will be applied.
+
+
+TX Constraints
+~~~~~~~~~~~~~~
+
+Features not Supported by TX Vector PMD
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+TX vPMD only works when ``txq_flags`` is set to ``FM10K_SIMPLE_TX_FLAG``.
+This means that it does not support TX multi-segment, VLAN offload or TX csum
+offload. The following MACROs are used for these three features:
+
+*   ``ETH_TXQ_FLAGS_NOMULTSEGS``
+
+*   ``ETH_TXQ_FLAGS_NOVLANOFFL``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMSCTP``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMUDP``
+
+*   ``ETH_TXQ_FLAGS_NOXSUMTCP``
+
 Limitations
 -----------
 
-- 
1.7.7.6
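
For the RX constraints above, a hedged sketch of a port configuration that
keeps the vector RX path eligible (header split, RX checksum offload, extended
VLAN and flow director all left disabled) might look like the following. Field
names follow the rte_eth_conf/rte_eth_rxmode layout of this DPDK generation;
the function name is made up for the example:

    #include <rte_ethdev.h>

    static int
    configure_port_for_vector_rx(uint8_t port_id)
    {
            /* Leave the offloads the vector RX path cannot handle disabled. */
            struct rte_eth_conf conf = {
                    .rxmode = {
                            .header_split   = 0,  /* no header split */
                            .hw_ip_checksum = 0,  /* no RX checksum offload */
                            .hw_vlan_extend = 0,  /* no extended (QinQ) VLAN */
                    },
                    .fdir_conf = {
                            .mode = RTE_FDIR_MODE_NONE,  /* no flow director */
                    },
            };

            /* One RX queue and one TX queue for the example. */
            return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }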

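
Likewise for the TX constraints: a TX queue can be set up with exactly the
``txq_flags`` the vector TX path needs (no multi-segment, no VLAN offload, no
checksum offload). Again an illustrative sketch, not driver code; the helper
name is invented and the zeroed thresholds are assumed to fall back to the PMD
defaults:

    #include <rte_ethdev.h>

    static int
    setup_simple_txq(uint8_t port_id, uint16_t queue_id, uint16_t nb_desc,
                     unsigned int socket_id)
    {
            /* Request the "simple" TX behaviour the vector TX path requires. */
            struct rte_eth_txconf txconf = {
                    .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
                                 ETH_TXQ_FLAGS_NOVLANOFFL |
                                 ETH_TXQ_FLAGS_NOXSUMSCTP |
                                 ETH_TXQ_FLAGS_NOXSUMUDP  |
                                 ETH_TXQ_FLAGS_NOXSUMTCP,
            };

            return rte_eth_tx_queue_setup(port_id, queue_id, nb_desc,
                                          socket_id, &txconf);
    }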

* Re: [dpdk-dev] [PATCH v3] doc: add Vector FM10K introductions
  2016-02-26  5:56   ` [dpdk-dev] [PATCH v3] " Chen Jing D(Mark)
@ 2016-03-08  8:06     ` Mcnamara, John
  2016-03-09 17:42       ` Thomas Monjalon
  0 siblings, 1 reply; 7+ messages in thread
From: Mcnamara, John @ 2016-03-08  8:06 UTC (permalink / raw)
  To: Chen, Jing D, dev

> -----Original Message-----
> From: Chen, Jing D
> Sent: Friday, February 26, 2016 5:57 AM
> To: dev@dpdk.org
> Cc: Mcnamara, John <john.mcnamara@intel.com>; Chen, Jing D
> <jing.d.chen@intel.com>
> Subject: [PATCH v3] doc: add Vector FM10K introductions
> 
> From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>

Acked-by: John McNamara <john.mcnamara@intel.com>



* Re: [dpdk-dev] [PATCH v3] doc: add Vector FM10K introductions
  2016-03-08  8:06     ` Mcnamara, John
@ 2016-03-09 17:42       ` Thomas Monjalon
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Monjalon @ 2016-03-09 17:42 UTC (permalink / raw)
  To: Chen, Jing D; +Cc: dev

> > From: "Chen Jing D(Mark)" <jing.d.chen@intel.com>
> 
> Acked-by: John McNamara <john.mcnamara@intel.com>

Applied, thanks

Next step: fill the matrix in overview.rst :)

